ICLR

Title
Graph Classification with Geometric Scattering
Abstract
One of the most notable contributions of deep learning is the application of convolutional neural networks (ConvNets) to structured signal classification, and in particular image classification. Beyond their impressive performances in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data (incl. state of the art results in the latter case), and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution.
1 INTRODUCTION
Over the past decade, numerous examples have established that deep neural networks (i.e., cascades of linear operations and simple nonlinearities) typically outperform traditional “shallow” models in various modern machine learning applications, especially given the increasing availability of big data nowadays. Perhaps the most well known example of the advantages of deep networks is in computer vision, where the utilization of 2D convolutions enables network designs that learn cascades of convolutional filters, which have several advantages over fully connected network architectures, both computationally and conceptually. Indeed, in terms of supervised learning, convolutional neural networks (ConvNets) hold the current state of the art in image classification, and have become the standard machine learning approach towards processing big structured-signal data, including audio and video processing. See, e.g., Goodfellow et al. (2016, Chapter 9) for a detailed discussion.
Beyond their performances when applied to specific tasks, pretrained ConvNet layers have been explored as image feature extractors by freezing the first few pretrained convolutional layers and then retraining only the last few layers for specific datasets or applications (e.g., Yosinski et al., 2014; Oquab et al., 2014). Such transfer learning approaches provide evidence that suitably constructed deep filter banks should be able to extract task-agnostic semantic information from structured data, and in some sense mimic the operation of human visual and auditory cortices, thus supporting the neural terminology in deep learning. An alternative approach towards such universal feature extraction was presented in Mallat (2012), where a deep filter bank, known as the scattering transform, is designed, rather than trained, based on predetermined families of disruptive patterns that should be eliminated to extract informative representations. The scattering transform is constructed as a cascade of linear wavelet transforms and nonlinear complex modulus operations that provides features with guaranteed invariance to a predetermined Lie group of operations such as rotations, translations, or scaling. Further, it also provides Lipschitz stability to small diffeomorphisms of the inputted signal. Scattering features have been shown to be effective in several audio (e.g., Bruna & Mallat, 2013a; Andén & Mallat, 2014; Lostanlen & Mallat, 2015) and image (e.g., Bruna & Mallat, 2013b; Sifre & Mallat, 2014; Oyallon & Mallat, 2015) processing applications, and their advantages over learned features are especially relevant in applications with relatively low data availability, such as quantum chemistry (e.g., Hirn et al., 2017; Eickenberg et al., 2017; 2018).
Following the recent interest in geometric deep learning approaches for processing graph-structured data (see, for example, Bronstein et al. (2017) and references therein), we present here a generalization of the scattering transform from Euclidean domains to graphs. Similar to the Euclidean case, our construction is based on a cascade of bandpass filters, defined in this case using graph signal processing (Shuman et al., 2013) notions, and complex moduli, which in this case take the form of absolute values (see Sec. 3). While several choices of filter banks could generally be used with the proposed cascade, we focus here on graph wavelet filters defined by lazy random walks (see Sec. 2). These wavelet filters are also closely related to diffusion geometry and related notions of geometric harmonic analysis, e.g. the diffusion maps algorithm of Coifman & Lafon (2006) and the associated diffusion wavelets of Coifman & Maggioni (2006). Therefore, we call the constructed cascade geometric scattering, which also follows the same terminology from geometric deep learning.
We note that similar attempts at generalizing the scattering transform to graphs have been presented in Chen et al. (2014) as well as Zou & Lerman (2018) and Gama et al. (2018). The latter two works are most closely related to the present paper. In them, the authors focus on theoretical properties of the proposed graph scattering transforms, and show that such transforms are invariant to graph isomorphism. The geometric scattering transform that we define here also possesses the same invariance property, and we expect similar stability properties to hold for the proposed construction as well. However, in this paper we focus mainly on the practical applicability of geometric scattering transforms for graph-structured data analysis, with particular emphasis on the task of graph classification, which has received much attention recently in geometric deep learning (see Sec. 4).
In supervised graph classification problems one is given a training database of graph/label pairs {(G_i, y_i)}_{i=1}^N ⊂ G × Y sampled from a set of potential graphs G and potential labels Y. The goal is to use the training data to learn a model f : G → Y that associates to any graph G ∈ G a label y = f(G) ∈ Y. These types of databases arise in biochemistry, in which the graphs may be molecules and the labels some property of the molecule (e.g., its toxicity), as well as in various types of social network databases. Until recently, most approaches were kernel based methods, in which the model f was selected from the reproducing kernel Hilbert space generated by a kernel that measures the similarity between two graphs; one of the most successful examples of this approach is the Weisfeiler-Lehman graph kernel of Shervashidze et al. (2011). Numerous feed forward deep learning algorithms, though, have appeared over the last few years. In many of these algorithms, task based (i.e., dependent upon the labels Y) graph filters are learned from the training data as part of the larger network architecture. These filters act on a characteristic signal x_G that is defined on the vertices of any graph G, e.g., x_G may be a vector of degrees of each vertex (we remark there are also edge based algorithms, such as Gilmer et al. (2017) and references within, but these have largely been developed for and tested on databases not considered in Sec. 4). Here, we propose an alternative to these methods in the form of a geometric scattering classifier (GSC) that leverages graph-dependent (but not label dependent) scattering transforms to map each graph G to the scattering features extracted from x_G. Furthermore, inspired by transfer learning approaches such as Oquab et al. (2014), we consider treatment of our scattering cascade as frozen layers on x_G, either followed by fully connected classification layers (see Fig. 2), or fed into other classifiers such as SVM or logistic regression. We note that while the formulation in Sec. 3 is phrased for a single signal x_G, it naturally extends to multiple signals by concatenating their scattering features.
In Sec. 4.1 we evaluate the quality of the scattering features and resulting classification by comparing it to numerous graph kernel and deep learning methods over 13 datasets (7 biochemistry ones and 6 social network ones) commonly studied in related literature. In terms of classification accuracy on individual datasets, we show that the proposed approach obtains state of the art results on two datasets and performs competitively on the rest, despite only learning a classifier that comes after the geometric scattering transform. Furthermore, while other methods may excel on specific datasets, when considering average accuracy: within social network data, our proposed GSC outperforms all other methods; in biochemistry or over all datasets, it outperforms nearly all feed forward neural network approaches, and is competitive with state of the art results of graph kernels (Kriege et al., 2016) and graph recurrent neural networks (Taheri et al., 2018). We regard this result as crucial in establishing the universality of graph features extracted by geometric scattering, as they provide an effective task-independent representation of analyzed graphs. Finally, to establish their unsupervised qualities, in Sec. 4.2 we use geometric scattering features extracted from enzyme data (Borgwardt et al., 2005a) to infer emergent patterns of enzyme commission (EC) exchange preferences in enzyme evolution, validated with established knowledge from Cuesta et al. (2015).
2 GRAPH RANDOM WALKS AND GRAPH WAVELETS
We define graph wavelets as the difference between lazy random walks that have propagated at different time scales, which mimics classical wavelet constructions found in Meyer (1993) as well as more recent constructions found in Coifman & Maggioni (2006). The underpinnings for this construction arise out of graph signal processing, and in particular the properties of the graph Laplacian.
Let G = (V, E, W) be a weighted graph, consisting of n vertices V = {v_1, …, v_n}, edges E ⊆ {(v_ℓ, v_m) : 1 ≤ ℓ, m ≤ n}, and weights W = {w(v_ℓ, v_m) > 0 : (v_ℓ, v_m) ∈ E}. Note that unweighted graphs are considered as a special case, by setting w(v_ℓ, v_m) = 1 for each (v_ℓ, v_m) ∈ E. Define the n × n (weighted) adjacency matrix A_G = A of G by A(v_ℓ, v_m) = w(v_ℓ, v_m) if (v_ℓ, v_m) ∈ E and zero otherwise, where we use the notation A(v_ℓ, v_m) to denote the (ℓ, m) entry of the matrix A so as to emphasize the correspondence with the vertices in the graph and to reserve sub-indices for enumerating objects. Define the (weighted) degree of vertex v_ℓ as deg(v_ℓ) = ∑_m A(v_ℓ, v_m) and the corresponding diagonal n × n degree matrix D given by D(v_ℓ, v_ℓ) = deg(v_ℓ) and D(v_ℓ, v_m) = 0 for ℓ ≠ m. Finally, the n × n graph Laplacian matrix L_G = L on G is defined as L = D − A. The graph Laplacian is a symmetric, real valued, positive semi-definite matrix, and thus has n nonnegative eigenvalues. Furthermore, if we set 0 = (0, …, 0)^T to be the n × 1 vector of all zeroes, and 1 = (1, …, 1)^T to be the analogous vector of all ones, then it is easy to see that L1 = 0. Therefore 0 is an eigenvalue of L, and we write the n eigenvalues of L as 0 = λ_0 ≤ λ_1 ≤ ⋯ ≤ λ_{n−1} with corresponding n × 1 orthonormal eigenvectors ϕ_0 = 1/√n, ϕ_1, …, ϕ_{n−1}. If the graph G is connected, then λ_1 > 0. In order to simplify the following discussion we assume that this is the case, although the discussion below can be amended to include disconnected graphs as well.
Since ϕ_0 is constant and every other eigenvector is orthogonal to ϕ_0, it is natural to view the eigenvectors ϕ_k as the Fourier modes of the graph G, with a frequency magnitude √λ_k. Let x : V → R be a signal defined on the vertices of the graph G, which we will consider as an n × 1 vector with entries x(v_ℓ). It follows that the Fourier transform of x can be defined as x̂(k) = x · ϕ_k, where x · y is the standard dot product. This analogy is one of the foundations of graph signal processing, and indeed we could use this correspondence to define wavelet operators on the graph G, as in Hammond et al. (2011). Rather than follow this path, though, we instead take a related path similar to Coifman & Maggioni (2006); Gama et al. (2018) by defining the graph wavelet operators in terms of random walks defined on G, which will avoid diagonalizing L and will allow us to control the “spatial” graph support of the filters directly.
Define the n × n transition matrix of a lazy random walk as P = ½(D⁻¹A + I). Note that the row sums of P are all one, and thus the entry P(v_ℓ, v_m) corresponds to the transition probability of walking from vertex v_ℓ to v_m in one step. Powers of P run the random walk forward, so that in particular P^t(v_ℓ, v_m) is the transition probability of walking from v_ℓ to v_m in exactly t steps. We will use P as a left multiplier, in which case P acts as a diffusion operator. To understand this idea more precisely, first note that a simple calculation shows that P1 = 1, and furthermore, if the graph G is connected, every other eigenvalue of P is contained in [0, 1). Note in particular that L and P share the eigenvector 1. It follows that P^t x responds most significantly to the zero frequency x̂(0) of x while depressing the non-zero frequencies of x (where the frequency modes are defined in terms of the graph Laplacian L, as described above). On the spatial side, the value P^t x(v_ℓ) is the weighted average of x(v_ℓ) with all values x(v_m) such that v_m is within t steps of v_ℓ in the graph G.
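As a concrete illustration, the following is a minimal Python/NumPy sketch of these constructions (the function and variable names are ours, not from the experiments' implementation; it assumes a connected graph so that all degrees are positive):

```python
import numpy as np

def lazy_random_walk(A):
    """Build the lazy random walk matrix P = (D^{-1} A + I) / 2
    from a (weighted) adjacency matrix A."""
    deg = A.sum(axis=1)                        # weighted degrees deg(v_l)
    D_inv = np.diag(1.0 / deg)                 # assumes deg(v_l) > 0 (connected graph)
    return 0.5 * (D_inv @ A + np.eye(A.shape[0]))

# Toy example: an unweighted path graph on four vertices.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian L = D - A
P = lazy_random_walk(A)
assert np.allclose(P.sum(axis=1), 1.0)         # row sums are one, i.e. P1 = 1
```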
High frequency responses of x can be recovered in multiple different fashions, but we utilize multiscale wavelet transforms that group the non-zero frequencies of G into approximately dyadic bands. As shown in Mallat (2012, Lemma 2.12), wavelet transforms are provably stable operators in the Euclidean domain, and the proof of Zou & Lerman (2018, Theorem 5.1) indicates that similar results on graphs may be possible. Furthermore, the multiscale nature of wavelet transforms will allow the resulting geometric scattering transform (Sec. 3) to traverse the entire graph G in one layer, which is valuable for obtaining global descriptions of G. Following Coifman & Maggioni (2006), define the n × n diffusion wavelet matrix at the scale 2^j as
Ψ_j = P^{2^{j−1}} − P^{2^j} = P^{2^{j−1}} (I − P^{2^{j−1}})    (1)
Since P^t 1 = 1 for every t, we see that Ψ_j 1 = 0 for each j ≥ 1. Thus each Ψ_j x partially recovers x̂(k) for k ≥ 1. The value Ψ_j x(v_ℓ) aggregates the signal information x(v_m) from the vertices v_m that are within 2^j steps of v_ℓ, but does not average the information like the operator P^{2^j}. Instead, it responds to sharp transitions or oscillations of the signal x within the neighborhood of v_ℓ with radius 2^j (in terms of the graph path distance). Generally, the smaller j, the higher the frequencies Ψ_j x recovers in x. These high frequency wavelet coefficients up to the scale 2^J are denoted by:
Ψ^{(J)} x(v_ℓ) = [Ψ_j x(v_ℓ) : 1 ≤ j ≤ J] ,  ℓ = 1, …, n .    (2)
Since 2^J controls the maximum scale of the wavelet, in the experiments of Sec. 4 we select J such that 2^J ∼ diam(G). Figure 1 plots the diffusion wavelets at different scales on two different graphs.
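The filter bank of (1)-(2) admits a short sketch along the same lines (again with our own naming; matrix powers are computed by repeated squaring, so only J matrix products are needed):

```python
import numpy as np

def diffusion_wavelets(P, J):
    """Return [Psi_1, ..., Psi_J] with Psi_j = P^(2^(j-1)) - P^(2^j), per (1)."""
    wavelets = []
    P_pow = P.copy()                  # P^(2^0) = P
    for _ in range(J):
        P_next = P_pow @ P_pow        # square: P^(2^(j-1)) -> P^(2^j)
        wavelets.append(P_pow - P_next)
        P_pow = P_next
    return wavelets

def wavelet_transform(P, x, J):
    """Stack Psi_j x for 1 <= j <= J, per (2); returns an array of shape (J, n)."""
    return np.stack([Psi @ x for Psi in diffusion_wavelets(P, J)])
```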
3 GEOMETRIC SCATTERING ON GRAPHS
A geometric wavelet scattering transform follows a similar construction as the (Euclidean) wavelet scattering transform of Mallat (2012), but leverages a graph wavelet transform. In this paper we utilize the wavelet transform defined in (2) of the previous section, but remark that in principle any graph wavelet transform could be used (see, e.g., Zou & Lerman, 2018). In Sec. 3.1 we define the graph scattering transform, in Sec. 3.2 we discuss its relation to other recently proposed graph scattering constructions (Gama et al., 2018; Zou & Lerman, 2018), and in Sec. 3.3 we describe several of its desirable properties as compared to other geometric deep learning algorithms on graphs.
3.1 GEOMETRIC SCATTERING DEFINITIONS
Machine learning algorithms that compare and classify graphs must be invariant to graph isomorphism, i.e., re-indexations of the vertices and corresponding edges. A common way to obtain invariant graph features is via summation operators, which act on a signal x = x_G that can be defined on any graph G, e.g., x(v_ℓ) = deg(v_ℓ) for each vertex v_ℓ in G. The geometric scattering transform, which is described in the remainder of this section, follows such an approach.
The simplest of such summation operators computes the sum of the responses of the signal x. As described in Verma & Zhang (2018), this invariant can be complemented by higher order summary statistics of x, the collection of which forms statistical moments, and which are also referred to as “capsules” in that work. For example, the unnormalized qth moments of x yield the following “zero” order geometric scattering moments:
Sx(q) = ∑_{ℓ=1}^{n} x(v_ℓ)^q ,  1 ≤ q ≤ Q    (3)
We can also replace (3) with normalized (i.e., standardized) moments of x, in which case we store its mean (q = 1), variance (q = 2), skew (q = 3), kurtosis (q = 4), and so on. In the numerical experiments described in Sec. 4 we take Q = 2, 3, 4 depending upon the database, where Q is chosen via cross validation to optimize classification performance. Higher order moments are not considered as they become increasingly unstable, and we report results for both normalized and unnormalized moments. In what follows we discuss the unnormalized moments, since their presentation is simpler and we use them in conjunction with fully connected layers (FCL) for classification purposes, but
the same principles also apply to normalized moments (e.g., used with SVM and logistic regression in our classification results). The invariants Sx(q) do not capture the full variability of x and hence the graph G upon which the signal x is defined. We thus complement these moments with summary statistics derived from the wavelet coefficients of x, which in turn will lead naturally to the graph ConvNet structure of the geometric scattering transform.
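As one concrete reading of the normalization (the exact standardization is a design choice not pinned down above), a sketch storing the mean, variance, skewness, and (excess) kurtosis:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normalized_moments(x, Q=4):
    """One plausible reading of the normalized (standardized) moments of a
    graph signal x: mean (q=1), variance (q=2), skewness (q=3), kurtosis (q=4)."""
    stats = [np.mean(x), np.var(x), skew(x), kurtosis(x)]  # kurtosis here is excess kurtosis
    return np.array(stats[:Q])
```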
Observe, analogously to the Euclidean setting, that in computing Sx(1), which is the summation of x(v_ℓ) over V, we have captured the zero frequency of x, since ∑_{ℓ=1}^{n} x(v_ℓ) = x · 1 = √n x̂(0).
Higher order moments of x can incorporate the full range of frequencies in x, e.g., Sx(2) = ∑_{ℓ=1}^{n} x(v_ℓ)^2 = ∑_{k=0}^{n−1} x̂(k)^2, but they are mixed into one invariant coefficient. We can separate and recapture the high frequencies of x by computing its wavelet coefficients Ψ^{(J)} x, which were defined in (2). However, Ψ^{(J)} x is not invariant to permutations of the vertex indices; in fact, it is covariant (or equivariant). Before summing the individual wavelet coefficient vectors Ψ_j x, though, we must first apply a pointwise nonlinearity. Indeed, define the n × 1 vector d(v_ℓ) = deg(v_ℓ), and note that Ψ_j x · d = 0, since one can show that d is a left eigenvector of P with eigenvalue 1. If G is a regular graph then d = c1, from which it follows that Ψ_j x · 1 = 0. For more general graphs d(v_ℓ) ≥ 0 for v_ℓ ∈ V, which implies that for many graphs 1 · d will be the dominating coefficient in an expansion of 1 in an orthogonal basis containing d; it follows that in these cases |Ψ_j x · 1| ≪ 1.
We thus apply the absolute value nonlinearity, to obtain nonlinear covariant coefficients |Ψ^{(J)} x| = {|Ψ_j x| : 1 ≤ j ≤ J}. We use absolute value because it is covariant to vertex permutations, non-expansive, and, when combined with traditional wavelet transforms on Euclidean domains, yields a provably stable scattering transform for q = 1. Furthermore, initial theoretical results in Zou & Lerman (2018); Gama et al. (2018) indicate that similar graph based scattering transforms possess certain types of stability properties as well. As in (3), we extract invariant coefficients from |Ψ_j x| by computing its moments, which define the first order geometric scattering moments:
Sx(j, q) = ∑_{ℓ=1}^{n} |Ψ_j x(v_ℓ)|^q ,  1 ≤ j ≤ J ,  1 ≤ q ≤ Q    (4)
These first order scattering moments aggregate complementary multiscale geometric descriptions of G into a collection of invariant multiscale statistics. These invariants give a finer partition of the frequency responses of x. For example, whereas Sx(2) mixed all frequencies of x, we see that Sx(j, 2) only mixes the frequencies of x captured by the graph wavelet Ψ_j.
First order geometric scattering moments can be augmented with second order geometric scattering moments by iterating the graph wavelet and absolute value transforms, which leads naturally to the structure of a graph ConvNet. These moments are defined as:
Sx(j, j′, q) = ∑_{i=1}^{n} |Ψ_{j′} |Ψ_j x(v_i)||^q ,  1 ≤ j < j′ ≤ J ,  1 ≤ q ≤ Q    (5)
which consists of reapplying the wavelet transform operator Ψ^{(J)} to each |Ψ_j x| and computing the summary statistics of the magnitudes of the resulting coefficients. The intermediate covariant coefficients |Ψ_{j′} |Ψ_j x|| and resulting invariant statistics Sx(j, j′, q) couple two scales 2^j and 2^{j′} within the graph G, thus creating features that bind patterns of smaller subgraphs within G with patterns of larger subgraphs (e.g., circles of friends of individual people with larger community structures in social network graphs). The transform can be iterated additional times, leading to third order features and beyond, and thus has the general structure of a graph ConvNet.
The collection of graph scattering moments Sx = {Sx(q), Sx(j, q), Sx(j, j′, q)} (illustrated in Fig. 2(a)) provides a rich set of multiscale invariants of the graph G. These can be used in supervised settings as input to graph classification or regression models, or in unsupervised settings to embed graphs into a Euclidean feature space for further exploration, as demonstrated in Sec. 4.
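Putting (3)-(5) together, the full (unnormalized) feature map can be sketched compactly, reusing the diffusion_wavelets helper sketched in Sec. 2 (again, an illustrative implementation rather than the one used in our experiments):

```python
import numpy as np

def geometric_scattering(P, x, J, Q):
    """Zeroth, first, and second order unnormalized scattering moments
    Sx(q), Sx(j, q), and Sx(j, j', q) of a single graph signal x."""
    feats = []
    # Zeroth order, eq. (3): sums of powers of x.
    feats += [np.sum(x ** q) for q in range(1, Q + 1)]
    wavelets = diffusion_wavelets(P, J)
    U1 = [np.abs(Psi @ x) for Psi in wavelets]       # |Psi_j x|
    # First order, eq. (4).
    for u in U1:
        feats += [np.sum(u ** q) for q in range(1, Q + 1)]
    # Second order, eq. (5), only for j < j'.
    for j, u in enumerate(U1):
        for jp in range(j + 1, J):
            u2 = np.abs(wavelets[jp] @ u)            # |Psi_j' |Psi_j x||
            feats += [np.sum(u2 ** q) for q in range(1, Q + 1)]
    return np.array(feats)
```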
3.2 STABILITY AND CAPACITY OF GEOMETRIC SCATTERING
In order to assess the utility of scattering features for representing graphs, two properties have to be considered: stability and capacity. First, the stability property aims to essentially provide an upper bound on distances between similar graphs that only differ by types of deformations that can
be treated as noise. This property has been the focus of both Zou & Lerman (2018) and Gama et al. (2018), and in particular the latter shows that a diffusion scattering transform yields features that are stable to graph structure deformations whose size can be computed via the diffusion framework (Coifman & Maggioni, 2006) that forms the basis for their construction. While there are some technical differences between the geometric scattering here and the diffusion scattering in Gama et al. (2018), these constructions are sufficiently similar that we can expect both of them to have analogous stability properties. Therefore, we mainly focus here on the complementary property of the scattering transform capacity to provide a rich feature space for representing graph data without eliminating informative variance in them.
We note that even in the classical Euclidean case, while the stability of scattering transforms to deformations can be established analytically (Mallat, 2012), their capacity is typically examined by empirical evidence when applied to machine learning tasks (e.g., Bruna & Mallat, 2011; Sifre & Mallat, 2012; Andén & Mallat, 2014). Similarly, in the graph processing settings, we examine the capacity of our proposed geometric scattering features via their discriminative power in graph data analysis tasks. In Sec. 4.1, we describe extensive numerical experiments for graph classification problems in which our scattering coefficients are utilized in conjunction with several classifiers, namely, fully connected layers (FCL, illustrated in Fig. 2(b)), support vector machine (SVM), and logistic regression. We note that SVM classification over scattering features leads to state of the art results on social network data, as well as outperforming all feed-forward neural network methods in general. Furthermore, for biochemistry data (where graphs represent molecule structures), FCL classification over scattering features outperforms all other feed-forward neural networks, even though we only train the fully connected layers. Finally, to assess the scattering feature space for data representation and exploration, in Sec. 4.2 we examine its qualities when analyzing biochemistry data, with emphasis on enzyme graphs. We show that geometric scattering enables graph embedding in a relatively low dimensional Euclidean space, while preserving insightful properties in the data. Beyond establishing the capacity of our specific construction, these results also indicate the viability of graph scattering transforms in general, as universal feature extractors on graph data, and complement the stability results established in Zou & Lerman (2018) and Gama et al. (2018).
3.3 GEOMETRIC SCATTERING COMPARED TO OTHER FEED FORWARD GRAPH CONVNETS
We give a brief comparison of geometric scattering with other graph ConvNets, with particular interest in isolating the key principles for building accurate graph ConvNet classifiers. We begin by remarking that like several other successful graph neural networks, the graph scattering transform is covariant or equivariant to vertex permutations (i.e., commutes with them) until the final features are extracted. This idea has been discussed in depth in various articles, including Kondor et al. (2018b), so we limit the discussion to observing that the geometric scattering transform thus propagates nearly all of the information in x through the multiple wavelet and absolute value layers, since only the absolute value operation removes information on x. As in Verma & Zhang (2018), we aggregate covariant responses via multiple summary statistics (i.e., moments), which are referred to there as a capsule. In the scattering context, at least, this idea is in fact not new and has been previously used in the Euclidean setting for the regression of quantum mechanical energies in Eickenberg et al. (2018; 2017) and texture synthesis in Bruna & Mallat (2018). We also point out that, unlike many deep learning classifiers (graph included), a graph scattering transform extracts invariant statistics at each layer/order. These intermediate layer statistics, while necessarily losing some information in x (and hence G), provide important coarse geometric invariants that eliminate needless complexity in subsequent classification or regression. Furthermore, such layer by layer statistics have proven useful in characterizing signals of other types (e.g., texture synthesis in Gatys et al., 2015).
A graph wavelet transform Ψ^{(J)} x decomposes the geometry of G through the lens of x, along different scales. Graph ConvNet algorithms also obtain multiscale representations of G, but several works, including Atwood & Towsley (2016) and Zhang et al. (2018), propagate information via a random walk. While random walk operators like P^t act at different scales on the graph G, per the analysis in Sec. 2 we see that P^t for any t will be dominated by the low frequency responses of x. While subsequent nonlinearities may be able to recover this high frequency information, the resulting transform will most likely be unstable due to the suppression and then attempted recovery of the high frequency content of x. Alternatively, features derived from P^t x may lose the high frequency responses of x, which are useful in distinguishing similar graphs. The graph wavelet coefficients Ψ^{(J)} x, on the other hand, respond most strongly within bands of nearly non-overlapping frequencies, each with a center frequency k_j that depends on Ψ_j.
Finally, graph labels are often complex functions of both local and global subgraph structure within G. While graph ConvNets are adept at learning local structure within G, as detailed in Verma & Zhang (2018) they require many layers to obtain features that aggregate macroscopic patterns in the graph. This is due in large part to the use of fixed size filters, which often only incorporate information from the neighbors of any individual vertex. The training of such networks is difficult due to the limited size of many graph classification databases (see Table 4 in Appendix D). Geometric scattering transforms have two advantages in this regard: (a) the wavelet filters are designed; and (b) they are multiscale, thus incorporating macroscopic graph patterns in every layer/order.
4 APPLICATION & RESULTS
4.1 GRAPH CLASSIFICATION
To evaluate the proposed geometric scattering features, we test their effectiveness for graph classification on thirteen datasets commonly used for this task. Out of these, seven datasets contain biochemistry graphs that describe molecular structures of chemical compounds, as described in the following works that introduced them: NCI1 and NCI109, Wale et al. (2008); MUTAG, Debnath et al. (1991); PTC, Toivonen et al. (2003); PROTEINS and ENZYMES, Borgwardt et al. (2005b); and D&D, Dobson & Doig (2003). In these cases, each graph has several associated vertex features x that represent chemical properties of atoms in the molecule, and the classification is aimed to characterize compound properties (e.g., protein types). The other six datasets, which are introduced in Yanardag & Vishwanathan (2015), contain social network data extracted from scientific collaborations (COLLAB), movie collaborations (IMDB-B & IMDB-M), and Reddit discussion threads (REDDIT-B, REDDIT-5K, REDDIT-12K). In these cases there are no inherent graph signals in the data, and therefore we compute general node characteristics (e.g., degree, eccentricity, and clustering coefficient) over them, as is considered standard practice in relevant literature (see, for example,
Verma & Zhang, 2018). A detailed description of each of these datasets appear in their respective references, and are briefly summarized in Appendix D for completeness.
In all cases, we iterate over all graphs in the database and for each one we associate graph-wide features by (1) computing the scattering features of each of the available graph signals (provided or computed), and (2) concatenating the features of all such signals. Then, the full scattering feature vectors of these graphs are passed to a classifier, which is trained from input labels, in order to infer the class for each graph. We consider three classifiers here: neural network with two/three fully connected hidden layers (FCL), SVM with RBF kernel, or logistic regression. We note that the scattering features (computed as described in Sec. 3) are based on either normalized or unnormalized moments over the entire graph. Here we used unnormalized moments for FCL, and normalized ones for other classifiers, but the difference is subtle and similar results can be achieved for the other combinations. Finally, we also note that all technical design choices for configuring our geometric scattering or the classifiers were done as part of the cross validation described in Appendix E.
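As a hedged sketch of this pipeline (reusing the geometric_scattering sketch from Sec. 3.1; the per-graph walk matrices, signals, and labels are placeholders for whatever a given dataset provides):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def graph_features(P, signals, J, Q):
    """Steps (1)-(2): scattering features per signal, concatenated per graph."""
    return np.concatenate([geometric_scattering(P, x, J, Q) for x in signals])

# Hypothetical usage, with walk_matrices, graph_signals, and labels
# prepared elsewhere for a given dataset:
# X = np.stack([graph_features(P, sigs, J=5, Q=4)
#               for P, sigs in zip(walk_matrices, graph_signals)])
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, labels)
```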
We evaluate the classification results of our three geometric scattering classification (GSC) settings using ten-fold cross validation (as explained in Appendix E) and compare them to 14 prominent methods for graph classification. Out of these, six are graph kernel methods, namely: Weisfeiler-Lehman graph kernels (WL, Shervashidze et al., 2011), propagation kernel (PK, Neumann et al., 2012), Graphlet kernels (Shervashidze et al., 2009), Random walks (RW, Gärtner et al., 2003), deep graph kernels (DGK, Yanardag & Vishwanathan, 2015), and Weisfeiler-Lehman optimal assignment kernels (WL-OA, Kriege et al., 2016). Seven other methods are recent geometric feed forward deep learning algorithms, namely: deep graph convolutional neural network (DGCNN, Zhang et al., 2018), Graph2vec (Narayanan et al., 2017), 2D convolutional neural networks (2DCNN, Tixier et al., 2017), covariant compositional networks (CCN, Kondor et al., 2018a), Patchy-san (PSCN, Niepert et al., 2016, with k = 10), diffusion convolutional neural networks (DCNN, Atwood & Towsley, 2016), and graph capsule convolutional neural networks (GCAPS-CNN, Verma & Zhang, 2018). Finally, one method is the recently introduced recurrent neural network autoencoder for graphs (S2S-N2N-PP, Taheri et al., 2018). Following the standard format of reported classification performances for these methods (per their respective references, see also Appendix A), our results are reported in the form of average accuracy ± standard deviation (in percentages) over the ten cross-validation folds. We remark here that many of them are not reported for all datasets,¹ and hence, we mark N/A when appropriate. For brevity, the comparison is reported here in Fig. 3 in summarized form, as explained below, and in full in Appendix A.

¹Accuracy for these methods was reported for less than 3/4 of the considered social graph datasets, but with biochemistry data they reach 7/9 of all considered datasets.
Since the scattering transform is independent of training labels, it provides universal graph features that might not be specifically optimal in each individual dataset, but overall provide stable classification results. Further, careful examination of the results of previous methods (feed forward algorithms in particular) shows that while some may excel in specific cases, none of them achieves the best results in all reported datasets. Therefore, to compare the overall classification quality of our GSC methods with related methods, we consider average accuracy aggregated over all datasets, and within each field (i.e., biochemistry and social networks), in the following way. First, out of the thirteen datasets, classification results on four datasets (NCI109, ENZYMES, IMDB-M, REDDIT-12K) are reported significantly less frequently than the others, and therefore we discard them and use the remaining nine for the aggregation. Next, to address reported values versus N/A ones, we set an inclusion criterion of 75% reported datasets for each method. This translates into at most one N/A in each individual field, and at most two N/A overall. For each method that qualifies for this inclusion criterion, we compute its average accuracy over reported values (ignoring N/A ones) within each field and over all datasets; this results in up to three reported values for each method.
The aggregated results of our GSC and 13 of the compared methods appear in Fig. 3(a). These results show that GSC (with SVM) outperforms all other methods on social network data, and in fact, as shown in Appendix B, it achieves state of the art results on two datasets of this type. Additionally, the aggregated results show that our GSC approach (with FCL or SVM) outperforms all other feed forward methods, both on biochemistry data and overall in terms of universal average accuracy.² The CCN method is omitted from these aggregated results, as its results in Kondor et al. (2018a) are only reported on four biochemistry datasets. For completeness, a detailed comparison of GSC with this method, which appears in Fig. 3(b), shows that our method outperforms it on two datasets while CCN outperforms GSC on the other two.

²It should be noted, though, that if NCI109 and ENZYMES were included, the GCAPS-CNN would outperform the GSC. However, many other methods would not be comparable then.
4.2 SCATTERING FEATURE SPACE FOR DATA EXPLORATION
Geometric scattering essentially provides a task independent representation of graphs in a Euclidean feature space. Therefore, it is not limited to supervised learning applications, and can also be utilized for exploratory graph-data analysis, as we demonstrate in this section. We focus our discussion on biochemistry data, and in particular on the ENZYMES dataset. Here, geometric scattering features can be considered as providing “signature” vectors for individual enzymes, which can be used to explore interactions between the six top level enzyme classes, labelled by their Enzyme Commission (EC) numbers (Borgwardt et al., 2005a). In order to emphasize the properties of scattering-based feature extraction, rather than downstream processing, we mostly limit our analysis of the scattering feature space to linear operations such as principal component analysis (PCA).
We start by considering the viability of scattering-based embedding for dimensionality reduction of graph data. To this end, we applied PCA to our scattering coefficients (computed from unnormalized moments), while choosing the number of principal components to capture 90% explained variance. In the ENZYMES case, this yields a 16 dimensional subspace of the full scattering feature space. While the Euclidean notion of dimensionality is not naturally available in the original dataset, we note that graphs in it have, on average, 124.2 edges, 29.8 vertices, and 3 features per vertex, and therefore the effective embedding of the data into R^16 indeed provides a significant dimensionality reduction. Next, to verify the resulting PCA subspace still captures sufficient discriminative information with respect to classes in the data, we compare SVM classification on the resulting low dimensional vectors to the full feature space; indeed, projection on the PCA subspace results in only a small drop in accuracy, from 56.85 ± 4.97 (full) to 49.83 ± 5.40 (PCA). Finally, we also consider the dimensionality of each individual class (with PCA and > 90% exp. variance) in the scattering feature space, as we expect scattering to reduce the variability in each class w.r.t. the full feature space. In the ENZYMES case, individual classes have PCA dimensionality ranging between 6 and 10, which is indeed significantly lower than the 16 dimensions of the entire PCA space. Appendix C summarizes these findings, and repeats the described procedure for two additional biochemistry datasets (from Wale et al., 2008) to verify that these are not unique to the specific ENZYMES dataset, but rather indicate a more general trend for geometric scattering feature spaces.
To further explore the scattering feature space, we now use it to infer relations between EC classes. First, for each enzyme e, with scattering feature vector v_e (i.e., with Sx for all vertex features x), we compute its distance from class EC-j, with PCA subspace C_j, as the projection distance: dist(e, EC-j) = ‖v_e − proj_{C_j} v_e‖. Then, for each enzyme class EC-i, we compute the mean distance of enzymes in it from the subspace of each EC-j class as D(i, j) = mean{dist(e, EC-j) : e ∈ EC-i}. Appendix C summarizes these distances, as well as the proportion of points from each class that have their true EC as their nearest (or second nearest) subspace in the scattering feature space. In general, 48% of enzymes select their true EC as the nearest subspace (with an additional 19% as second nearest), but these proportions vary between individual EC classes. Finally, we use these scattering-based distances to infer EC exchange preferences during enzyme evolution, which are presented in Fig. 4 and validated with respect to established preferences observed and reported in Cuesta et al. (2015). We note that the result there is observed independently from the ENZYMES dataset. In particular, the portion of enzymes considered from each EC is different between these data, since Borgwardt et al. (2005b) took special care to ensure each EC class in ENZYMES has exactly 100 enzymes in it. However, we notice that in fact the portion of enzymes (in each EC) that choose the wrong EC as their nearest subspace, which can be considered as EC “incoherence” in the scattering feature space, correlates well with the proportion of evolutionary exchanges generally observed for each EC in Cuesta et al. (2015), and therefore we use these as EC weights in Fig. 4(c). Our results in Fig. 4 demonstrate that scattering features are sufficiently rich to capture relations between enzyme classes, and indicate that geometric scattering has the capacity to uncover descriptive and exploratory insights in graph data analysis, beyond the supervised graph classification from Sec. 4.1.
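A sketch of these computations (assuming a matrix V whose rows are scattering feature vectors v_e and integer EC labels y; the 90% explained-variance threshold follows the text, and sklearn's PCA is one way to realize the subspaces C_j):

```python
import numpy as np
from sklearn.decomposition import PCA

def class_subspaces(V, y, var=0.90):
    """Fit one PCA subspace C_j per EC class, keeping >= var explained variance."""
    return {c: PCA(n_components=var).fit(V[y == c]) for c in np.unique(y)}

def projection_distance(v, pca):
    """dist(e, EC-j) = ||v_e - proj_{C_j} v_e||, via PCA reconstruction."""
    v_hat = pca.inverse_transform(pca.transform(v[None, :]))[0]
    return np.linalg.norm(v - v_hat)

def mean_class_distances(V, y, subspaces):
    """D(i, j) = mean{dist(e, EC-j) : e in EC-i}."""
    classes = sorted(subspaces)
    return np.array([[np.mean([projection_distance(v, subspaces[j])
                               for v in V[y == i]])
                      for j in classes] for i in classes])
```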
5 CONCLUSION
We presented the geometric scattering transform as a deep filter bank for feature extraction on graphs. This transform generalizes the scattering transform, and augments the theoretical foundations of geometric deep learning. Further, our evaluation results on graph classification and data exploration show the potential of the produced scattering features to serve as universal representations of graphs. Indeed, classification with these features with relatively simple classifier models reaches high accuracy results on most commonly used graph classification datasets, and outperforms both traditional and recent deep learning feed forward methods in terms of average classification accuracy over multiple datasets. We note that this might be partially due to the scarcity of labeled big data in this field, compared to more traditional ones (e.g., image or audio classification). However, this trend also correlates with empirical results for the classic scattering transform, which excels in cases with low data availability. Finally, the geometric scattering features provide a new way for computing and considering global graph representations, independent of specific learning tasks. Therefore, they raise the possibility of embedding entire graphs in Euclidean space and computing meaningful distances between graphs with them, which can be used for both supervised and unsupervised learning, as well as exploratory analysis of graph-structured data.
APPENDIX A FULL COMPARISON TABLE
All results come from the respective papers that introduced the methods, with the exception of: (1) social network results of WL, from Tixier et al. (2017); (2) biochemistry and social results of DCNN, from Verma & Zhang (2018); (3) biochemistry results, except for D&D, and social results of GK, from Yanardag & Vishwanathan (2015); (4) D&D of GK is from Niepert et al. (2016); and (5) for Graphlets, biochemistry results from Kriege et al. (2016) and social results from Tixier et al. (2017).

³DCNN using a different training/test split
APPENDIX B STATE OF THE ART RESULTS ON REDDIT DATASETS
APPENDIX C DETAILED TABLES FOR SCATTERING FEATURE SPACE ANALYSIS FROM SECTION 4.2
APPENDIX D DETAILED DATASET DESCRIPTIONS
The details of the datasets used in this work are as follows (see the main text in Sec. 4.1 for references):
NCI1 contains 4,110 chemical compounds as graphs, with 37 node features. Each compound is labeled according to its activity against non-small cell lung cancer and ovarian cancer cell lines, and these labels serve as the classification goal on this data.
NCI109 is similar to NCI1, but with 4,127 chemical compounds and 38 node features.
MUTAG consists of 188 mutagenic aromatic and heteroaromatic nitro compounds (as graphs) with 7 node features. The classification here is binary (i.e., two classes), based on whether or not a compound has a mutagenic effect on bacterium.
PTC is a dataset of 344 chemical compounds (as graphs) with nineteen node features that are divided into two classes depending on whether they are carcinogenic in rats.
PROTEINS dataset contains 1,113 proteins (as graphs) with three node features, where the goal of the classification is to predict whether the protein is an enzyme or not.
D&D dataset contains 1,178 protein structures (as graphs) that, similar to the previous one, are classified as enzymes or non-enzymes.
ENZYMES is a dataset of 600 protein structures (as graphs) with three node features. These proteins are divided into six classes of enzymes (labelled by enzyme commission numbers) for classification.
COLLAB is a scientific collaboration dataset containing 5K graphs. The classification goal here is to predict whether the graph belongs to a subfield of Physics.
IMDB-B is a movie collaboration dataset that contains 1K graphs. The graphs are generated from two genres, Action and Romance, and the classification goal is to predict the correct genre for each graph.
IMDB-M is similar to IMDB-B, but with 1.5K graphs & 3 genres: Comedy, Romance, and Sci-Fi.
REDDIT-B is a dataset with 2K graphs, where each graph corresponds to an online discussion thread. The classification goal is to predict whether the graph belongs to a Q&A-based community or a discussion-based community.
REDDIT-5K consists of 5K threads (as graphs) from five different subreddits. The classification goal is to predict the corresponding subreddit for each thread.
REDDIT-12K is similar to REDDIT-5K, but with 11,929 graphs from 12 different subreddits.
Table 4 summarizes the size of available graph data (i.e., number of graphs, and both max & mean number of vertices within graphs) in these datasets, as previously reported in the literature.
Graph signals for social network data: None of the social network datasets has ready-to-use node features. Therefore, in the case of COLLAB, IMDB-B, and IMDB-M, we use the eccentricity, degree, and clustering coefficient of each vertex as characteristic graph signals. In the case of REDDIT-B, REDDIT-5K, and REDDIT-12K, on the other hand, we only use degree and clustering coefficient, due to the presence of disconnected graphs in these datasets.
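These node characteristics can be computed with standard graph libraries; e.g., a sketch using networkx (the per-dataset signal choice follows the description above):

```python
import networkx as nx
import numpy as np

def node_signals(G, use_eccentricity=True):
    """Degree, clustering coefficient, and (optionally) eccentricity,
    each as an n-vector ordered consistently with list(G.nodes)."""
    nodes = list(G.nodes)
    deg = np.array([G.degree(v) for v in nodes], dtype=float)
    clust = np.array([nx.clustering(G, v) for v in nodes])
    signals = [deg, clust]
    if use_eccentricity:        # eccentricity is undefined on disconnected graphs
        ecc = nx.eccentricity(G)
        signals.append(np.array([ecc[v] for v in nodes], dtype=float))
    return signals
```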
APPENDIX E TECHNICAL DETAILS
The computation of the scattering features described in Section 3 is based on several design choices, akin to typical architecture choices in neural networks. Most importantly, it requires a choice of 1. which statistical moments to use (normalized or unnormalized), 2. the number of wavelet scales to use (given by J), and 3. the number of moments to use (denoted by Q). The configuration used for each dataset in this work is summarized in Table 5, together with specific settings used in the downstream classification layers, as described below.
Once the scattering coefficients are generated through the above processes, they are either fed into a standard classifier (SVM or logistic regression), or into two or three fully connected layers (see Table 5 for specifics) and then a softmax layer that is used to compute the class probabilities. In the latter case, cross entropy loss is minimized during the training process, and ReLU is used as the activation function between fully connected layers. In addition, we use mini-batch training with batch size 64 and the ADAM optimization technique. Two learning rates, 0.002 and 0.02, are tested during training. Optimal training epochs are decided through cross validation. Finally, L2 norm regularization is used to avoid overfitting.
Cross validation procedure: Classification evaluation was done with a standard ten-fold cross validation procedure. First, the entire dataset is randomly split into ten subsets. Then, in each iteration (or “fold”), nine of them are used for training and validation, and the other one is used for testing classification accuracy. In total, after ten iterations, each of the subsets has been used once for testing, resulting in ten reported classification accuracy numbers for the examined dataset. Finally, the mean and standard deviation of these ten accuracies are computed and reported.
It should be noted that when using fully connected layers, each iteration also performs automatic tuning of the trained classifier, as follows. First, nine iterations are performed, each time using eight subsets (i.e., folds) for training and the remaining one as a validation set, which is used to determine the optimal epoch for network training. Then, the classifier is retrained with all nine subsets. After nine iterations, each of the training/validation subsets has been used once for validation, and we obtain nine classification models, which in turn produce nine predictions (i.e., class probabilities) for each data point in the test subset of the main cross validation. To obtain the final result of this cross validation iteration, we sum up all these predictions and select the class with the highest probability as our final classification result. These predictions are then compared to the true labels in the test subset to obtain the classification accuracy for this fold.
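A sketch of the overall procedure (labels assumed to be 0, …, C−1; train_fn is a hypothetical helper that trains on the eight fitting subsets, picks the optimal epoch on the validation subset, retrains on all nine, and returns a model exposing predict_proba):

```python
import numpy as np
from sklearn.model_selection import KFold

def nested_cv(X, y, train_fn, n_folds=10):
    """Ten-fold cross validation with the inner validation/ensembling above."""
    n_classes = len(np.unique(y))
    accs = []
    for tr, te in KFold(n_splits=n_folds, shuffle=True).split(X):
        probs = np.zeros((len(te), n_classes))
        for fit, val in KFold(n_splits=n_folds - 1).split(tr):
            model = train_fn(X[tr[fit]], y[tr[fit]], X[tr[val]], y[tr[val]])
            probs += model.predict_proba(X[te])    # sum the nine predictions
        accs.append(np.mean(probs.argmax(axis=1) == y[te]))
    return np.mean(accs), np.std(accs)
```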
Software & hardware environment: Geometric scattering and the related classification code were implemented in Python with TensorFlow. All experiments were performed in an HPC environment on an intel16-k80 cluster, with each job requesting one node with four processors and two Nvidia Tesla k80 GPUs.

Prompt
1. What is the focus and contribution of the paper regarding the scattering transform on graphs?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of theoretical and experimental results?
3. How does the reviewer assess the methodology for training the classifier, specifically regarding hyperparameters and dataset usage?
4. What are the limitations of the proposed approach, especially regarding feature correlation and naming conventions?

Review
This paper generalizes the scattering transform to graphs. It defines wavelets and scattering coefficients on graph signals. The experimental section describes their use in classification tasks, with comparisons to recent methods. It seems scattering performs less well than SOTA methods, but it has the advantage of not requiring any training, so it is potentially a good candidate for low-data-regime applications. Interesting and original paper and ideas being developed, but might be a tiny bit weak in terms of results, both theoretical and experimental?
There are not many theoretical results (mostly definitions, and hints that some of the results from the Euclidean case might generalize, without formal investigation).
Regarding the results, in particular Table 3: given that you use particular hyperparameters J and Q for each dataset, this is arguably a bit of architectural overfitting? Results would be more convincing IMO if obtained with a single set of hyperparameters. What was the procedure to come up with those parameters?
Regarding the methodology for training the classifier, I am not familiar with these datasets, but using just 1/10 of the data to train the classifier seems a bit extreme?
How about training each on a random 90% subset of the training set and averaging? Or just the whole training subset? That would still be fine in the sense that none of the classifiers would have seen the test set?
p2: 'it naturally extends to multiple signals by concatenating their scattering features' seems to be the biggest limitation of the proposed approach. By not mixing the different features early, one might lose the high-frequency correlations between different signals defined on a single graph.

P4 figure 1: Not very clear what those visualizations are. \Psi_j is supposedly an n x n matrix, so is this \Psi_j applied to two different Diracs on the graph? Would be good to clarify exactly what is being plotted in the legend.
P4. IMO 'capsule' is not such a great name / already used in ML by Hinton's capsules etc. Why not simply 'moments' or 'statistics'?
'We can replace (3) with normalized moments of x' ... how exactly do you normalize?
ICLR | Title
Graph Classification with Geometric Scattering
Abstract
One of the most notable contributions of deep learning is the application of convolutional neural networks (ConvNets) to structured signal classification, and in particular image classification. Beyond their impressive performances in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data (incl. state of the art results in the latter case), and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution.
1 INTRODUCTION
Over the past decade, numerous examples have established that deep neural networks (i.e., cascades of linear operations and simple nonlinearities) typically outperform traditional “shallow” models in various modern machine learning applications, especially given the increasing Big Data availability nowadays. Perhaps the most well known example of the advantages of deep networks is in computer vision, where the utilization of 2D convolutions enable network designs that learn cascades of convolutional filters, which have several advantages over fully connected network architectures, both computationally and conceptually. Indeed, in terms of supervised learning, convolutional neural networks (ConvNets) hold the current state of the art in image classification, and have become the standard machine learning approach towards processing big structured-signal data, including audio and video processing. See, e.g., Goodfellow et al. (2016, Chapter 9) for a detailed discussion.
Beyond their performances when applied to specific tasks, pretrained ConvNet layers have been explored as image feature extractors by freezing the first few pretrained convolutional layers and then retraining only the last few layers for specific datasets or applications (e.g., Yosinski et al., 2014; Oquab et al., 2014). Such transfer learning approaches provide evidence that suitably constructed deep filter banks should be able to extract task-agnostic semantic information from structured data, and in some sense mimic the operation of human visual and auditory cortices, thus supporting the neural terminology in deep learning. An alternative approach towards such universal feature extraction was presented in Mallat (2012), where a deep filter bank, known as the scattering transform, is designed, rather than trained, based on predetermined families of distruptive patterns that should be eliminated to extract informative representations. The scattering transform is constructed as a cascade of linear wavelet transforms and nonlinear complex modulus operations that provides features with guaranteed invariance to a predetermined Lie group of operations such as rotations, translations, or scaling. Further, it also provides Lipschitz stability to small diffeomorphisms of the inputted signal. Scattering features have been shown to be effective in several audio (e.g., Bruna & Mallat, 2013a; Andén & Mallat, 2014; Lostanlen & Mallat, 2015) and image (e.g., Bruna & Mallat, 2013b; Sifre & Mallat, 2014; Oyallon & Mallat, 2015) processing applications, and their advantages over learned features are especially relevant in applications with relatively low data availability, such as quantum chemistry (e.g., Hirn et al., 2017; Eickenberg et al., 2017; 2018).
Following the recent interest in geometric deep learning approaches for processing graph-structured data (see, for example, Bronstein et al. (2017) and references therein), we present here a generalization of the scattering transform from Euclidean domains to graphs. Similar to the Euclidean case, our construction is based on a cascade of bandpass filters, defined in this case using graph signal processing (Shuman et al., 2013) notions, and complex moduli, which in this case take the form of absolute values (see Sec. 3). While several choices of filter banks could generally be used with the proposed cascade, we focus here on graph wavelet filters defined by lazy random walks (see Sec. 2). These wavelet filters are also closely related to diffusion geometry and related notions of geometric harmonic analysis, e.g. the diffusion maps algorithm of Coifman & Lafon (2006) and the associated diffusion wavelets of Coifman & Maggioni (2006). Therefore, we call the constructed cascade geometric scattering, which also follows the same terminology from geometric deep learning.
We note that similar attempts at generalizing the scattering transform to graphs have been presented in Chen et al. (2014) as well as Zou & Lerman (2018) and Gama et al. (2018). The latter two works are most closely related to the present paper. In them, the authors focus on theoretical properties of the proposed graph scattering transforms, and show that such transforms are invariant to graph isomorphism. The geometric scattering transform that we define here also possesses the same invariance property, and we expect similar stability properties to hold for the proposed construction as well. However, in this paper we focus mainly on the practical applicability of geometric scattering transforms for graph-structured data analysis, with particular emphasis on the task of graph classification, which has received much attention recently in geometric deep learning (see Sec. 4)
In supervised graph classification problems one is given a training database of graph/label pairs $\{(G_i, y_i)\}_{i=1}^{N} \subset \mathcal{G} \times \mathcal{Y}$ sampled from a set of potential graphs $\mathcal{G}$ and potential labels $\mathcal{Y}$. The goal is to use the training data to learn a model $f : \mathcal{G} \to \mathcal{Y}$ that associates to any graph $G \in \mathcal{G}$ a label $y = f(G) \in \mathcal{Y}$. These types of databases arise in biochemistry, in which the graphs may be molecules and the labels some property of the molecule (e.g., its toxicity), as well as in various types of social network databases. Until recently, most approaches were kernel based methods, in which the model $f$ was selected from the reproducing kernel Hilbert space generated by a kernel that measures the similarity between two graphs; one of the most successful examples of this approach is the Weisfeiler-Lehman graph kernel of Shervashidze et al. (2011). Numerous feed forward deep learning algorithms, though, have appeared over the last few years. In many of these algorithms, task based (i.e., dependent upon the labels $\mathcal{Y}$) graph filters are learned from the training data as part of the larger network architecture. These filters act on a characteristic signal $x_G$ that is defined on the vertices of any graph $G$, e.g., $x_G$ may be a vector of degrees of each vertex (we remark there are also edge based algorithms, such as Gilmer et al. (2017) and references within, but these have largely been developed for and tested on databases not considered in Sec. 4). Here, we propose an alternative to these methods in the form of a geometric scattering classifier (GSC) that leverages graph-dependent (but not label dependent) scattering transforms to map each graph $G$ to the scattering features extracted from $x_G$. Furthermore, inspired by transfer learning approaches such as Oquab et al. (2014), we consider treatment of our scattering cascade as frozen layers on $x_G$, either followed by fully connected classification layers (see Fig. 2), or fed into other classifiers such as SVM or logistic regression. We note that while the formulation in Sec. 3 is phrased for a single signal $x_G$, it naturally extends to multiple signals by concatenating their scattering features.
In Sec. 4.1 we evaluate the quality of the scattering features and resulting classification by comparing it to numerous graph kernel and deep learning methods over 13 datasets (7 biochemistry ones and 6 social network ones) commonly studied in related literature. In terms of classification accuracy on individual datasets, we show that the proposed approach obtains state of the art results on two datasets and performs competitively on the rest, despite only learning a classifier that comes after the geometric scattering transform. Furthermore, while other methods may excel on specific datasets, when considering average accuracy: within social network data, our proposed GSC outperforms all other methods; in biochemistry or over all datasets, it outperforms nearly all feed forward neural network approaches, and is competitive with state of the art results of graph kernels (Kriege et al., 2016) and graph recurrent neural networks (Taheri et al., 2018). We regard this result as crucial in establishing the universality of graph features extracted by geometric scattering, as they provide an effective task-independent representation of analyzed graphs. Finally, to establish their unsupervised qualities, in Sec. 4.2 we use geometric scattering features extracted from enzyme data (Borgwardt et al., 2005a) to infer emergent patterns of enzyme commission (EC) exchange preferences in enzyme evolution, validated with established knowledge from Cuesta et al. (2015).
2 GRAPH RANDOM WALKS AND GRAPH WAVELETS
We define graph wavelets as the difference between lazy random walks that have propagated at different time scales, which mimics classical wavelet constructions found in Meyer (1993) as well as more recent constructions found in Coifman & Maggioni (2006). The underpinnings for this construction arise out of graph signal processing, and in particular the properties of the graph Laplacian.
Let $G = (V, E, W)$ be a weighted graph, consisting of $n$ vertices $V = \{v_1, \ldots, v_n\}$, edges $E \subseteq \{(v_\ell, v_m) : 1 \leq \ell, m \leq n\}$, and weights $W = \{w(v_\ell, v_m) > 0 : (v_\ell, v_m) \in E\}$. Note that unweighted graphs are considered as a special case, by setting $w(v_\ell, v_m) = 1$ for each $(v_\ell, v_m) \in E$. Define the $n \times n$ (weighted) adjacency matrix $A_G = A$ of $G$ by $A(v_\ell, v_m) = w(v_\ell, v_m)$ if $(v_\ell, v_m) \in E$ and zero otherwise, where we use the notation $A(v_\ell, v_m)$ to denote the $(\ell, m)$ entry of the matrix $A$ so as to emphasize the correspondence with the vertices in the graph and to reserve sub-indices for enumerating objects. Define the (weighted) degree of vertex $v_\ell$ as $\deg(v_\ell) = \sum_m A(v_\ell, v_m)$ and the corresponding diagonal $n \times n$ degree matrix $D$ given by $D(v_\ell, v_\ell) = \deg(v_\ell)$, $D(v_\ell, v_m) = 0$ for $\ell \neq m$. Finally, the $n \times n$ graph Laplacian matrix $L_G = L$ on $G$ is defined as $L = D - A$. The graph Laplacian is a symmetric, real valued positive semi-definite matrix, and thus has $n$ nonnegative eigenvalues. Furthermore, if we set $\mathbf{0} = (0, \ldots, 0)^T$ to be the $n \times 1$ vector of all zeros, and $\mathbf{1} = (1, \ldots, 1)^T$ to be the analogous vector of all ones, then it is easy to see that $L\mathbf{1} = \mathbf{0}$. Therefore $0$ is an eigenvalue of $L$ and we write the $n$ eigenvalues of $L$ as $0 = \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$ with corresponding $n \times 1$ orthonormal eigenvectors $\varphi_0 = \mathbf{1}/\sqrt{n}, \varphi_1, \ldots, \varphi_{n-1}$. If the graph $G$ is connected, then $\lambda_1 > 0$. In order to simplify the following discussion we assume that this is the case, although the discussion below can be amended to include disconnected graphs as well.
Since $\varphi_0$ is constant and every other eigenvector is orthogonal to $\varphi_0$, it is natural to view the eigenvectors $\varphi_k$ as the Fourier modes of the graph $G$, with a frequency magnitude $\sqrt{\lambda_k}$. Let $x : V \to \mathbb{R}$ be a signal defined on the vertices of the graph $G$, which we will consider as an $n \times 1$ vector with entries $x(v_\ell)$. It follows that the Fourier transform of $x$ can be defined as $\widehat{x}(k) = x \cdot \varphi_k$, where $x \cdot y$ is the standard dot product. This analogy is one of the foundations of graph signal processing and indeed we could use this correspondence to define wavelet operators on the graph $G$, as in Hammond et al. (2011). Rather than follow this path, though, we instead take a related path similar to Coifman & Maggioni (2006); Gama et al. (2018) by defining the graph wavelet operators in terms of random walks defined on $G$, which will avoid diagonalizing $L$ and will allow us to control the "spatial" graph support of the filters directly.
Define the $n \times n$ transition matrix of a lazy random walk as $P = \frac{1}{2}\left(D^{-1}A + I\right)$. Note that the row sums of $P$ are all one and thus the entry $P(v_\ell, v_m)$ corresponds to the transition probability of walking from vertex $v_\ell$ to $v_m$ in one step. Powers of $P$ run the random walk forward, so that in particular $P^t(v_\ell, v_m)$ is the transition probability of walking from $v_\ell$ to $v_m$ in exactly $t$ steps. We will use $P$ as a left multiplier, in which case $P$ acts as a diffusion operator. To understand this idea more precisely, first note that a simple calculation shows that $P\mathbf{1} = \mathbf{1}$ and furthermore if the graph $G$ is connected, every other eigenvalue of $P$ is contained in $[0, 1)$. Note in particular that $L$ and $P$ share the eigenvector $\mathbf{1}$. It follows that $P^t x$ responds most significantly to the zero frequency $\widehat{x}(0)$ of $x$ while depressing the non-zero frequencies of $x$ (where the frequency modes are defined in terms of the graph Laplacian $L$, as described above). On the spatial side, the value $P^t x(v_\ell)$ is the weighted average of $x(v_\ell)$ with all values $x(v_m)$ such that $v_m$ is within $t$ steps of $v_\ell$ in the graph $G$.
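To make the construction concrete, the operator $P$ can be formed directly from the adjacency matrix. The following minimal NumPy sketch (the function name is ours, and it assumes a connected graph so that no vertex has zero degree) computes the lazy random walk matrix:

```python
import numpy as np

def lazy_random_walk(A):
    """Lazy random walk P = (D^{-1} A + I) / 2 from a (weighted) adjacency matrix A."""
    deg = A.sum(axis=1)                      # (weighted) vertex degrees
    P_walk = A / deg[:, None]                # D^{-1} A; assumes no isolated vertices
    return 0.5 * (P_walk + np.eye(A.shape[0]))
```

Each row of the resulting matrix sums to one, so left-multiplying a signal by powers of $P$ averages it over growing neighborhoods, as described above.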
High frequency responses of $x$ can be recovered in multiple different fashions, but we utilize multiscale wavelet transforms that group the non-zero frequencies of $G$ into approximately dyadic bands. As shown in Mallat (2012, Lemma 2.12), wavelet transforms are provably stable operators in the Euclidean domain, and the proof of Zou & Lerman (2018, Theorem 5.1) indicates that similar results on graphs may be possible. Furthermore, the multiscale nature of wavelet transforms will allow the resulting geometric scattering transform (Sec. 3) to traverse the entire graph $G$ in one layer, which is valuable for obtaining global descriptions of $G$. Following Coifman & Maggioni (2006), define the $n \times n$ diffusion wavelet matrix at the scale $2^j$ as
$$\Psi_j = P^{2^{j-1}} - P^{2^{j}} = P^{2^{j-1}}\left(I - P^{2^{j-1}}\right) \qquad (1)$$
Since $P^t\mathbf{1} = \mathbf{1}$ for every $t$, we see that $\Psi_j\mathbf{1} = \mathbf{0}$ for each $j \geq 1$. Thus each $\Psi_j x$ partially recovers $\widehat{x}(k)$ for $k \geq 1$. The value $\Psi_j x(v_\ell)$ aggregates the signal information $x(v_m)$ from the vertices $v_m$ that are within $2^j$ steps of $v_\ell$, but does not average the information like the operator $P^{2^j}$. Instead, it responds to sharp transitions or oscillations of the signal $x$ within the neighborhood of $v_\ell$ with radius $2^j$ (in terms of the graph path distance). Generally, the smaller $j$, the higher the frequencies $\Psi_j x$ recovers in $x$. These high frequency wavelet coefficients up to the scale $2^J$ are denoted by:
$$\Psi^{(J)} x(v_\ell) = \left[\Psi_j x(v_\ell) : 1 \leq j \leq J\right], \quad \ell = 1, \ldots, n. \qquad (2)$$
Since $2^J$ controls the maximum scale of the wavelet, in the experiments of Sec. 4 we select $J$ such that $2^J \sim \operatorname{diam}(G)$. Figure 1 plots the diffusion wavelets at different scales on two different graphs.
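A direct way to realize the filter bank of Eqs. (1)–(2) is by repeated squaring of $P$, which reaches the dyadic powers $P^{2^j}$ without forming intermediate powers. The sketch below (function name ours) follows this scheme:

```python
def diffusion_wavelets(P, J):
    """Wavelet matrices Psi_j = P^(2^(j-1)) - P^(2^j), j = 1, ..., J (Eq. 1)."""
    wavelets = []
    P_pow = P                                # P^(2^0)
    for _ in range(J):
        P_next = P_pow @ P_pow               # square to reach the next dyadic power
        wavelets.append(P_pow - P_next)
        P_pow = P_next
    return wavelets                          # applying each matrix to x yields Psi_j x
```

For large sparse graphs one would instead apply the powers of $P$ to the signal $x$ directly rather than form dense matrix powers; we show the dense version only for clarity.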
3 GEOMETRIC SCATTERING ON GRAPHS
A geometric wavelet scattering transform follows a similar construction as the (Euclidean) wavelet scattering transform of Mallat (2012), but leverages a graph wavelet transform. In this paper we utilize the wavelet transform defined in (2) of the previous section, but remark that in principle any graph wavelet transform could be used (see, e.g., Zou & Lerman, 2018). In Sec. 3.1 we define the graph scattering transform, in Sec. 3.2 we discuss its relation to other recently proposed graph scattering constructions (Gama et al., 2018; Zou & Lerman, 2018), and in Sec. 3.3 we describe several of its desirable properties as compared to other geometric deep learning algorithms on graphs.
3.1 GEOMETRIC SCATTERING DEFINITIONS
Machine learning algorithms that compare and classify graphs must be invariant to graph isomorphism, i.e., re-indexations of the vertices and corresponding edges. A common way to obtain invariant graph features is via summation operators, which act on a signal $x = x_G$ that can be defined on any graph $G$, e.g., $x(v_\ell) = \deg(v_\ell)$ for each vertex $v_\ell$ in $G$. The geometric scattering transform, which is described in the remainder of this section, follows such an approach.
The simplest of such summation operators computes the sum of the responses of the signal $x$. As described in Verma & Zhang (2018), this invariant can be complemented by higher order summary statistics of $x$, the collection of which form statistical moments, and which are also referred to as "capsules" in that work. For example, the unnormalized $q$th moments of $x$ yield the following "zero" order geometric scattering moments:
$$Sx(q) = \sum_{\ell=1}^{n} x(v_\ell)^q, \quad 1 \leq q \leq Q \qquad (3)$$
We can also replace (3) with normalized (i.e., standardized) moments of $x$, in which case we store its mean ($q = 1$), variance ($q = 2$), skew ($q = 3$), kurtosis ($q = 4$), and so on. In the numerical experiments described in Sec. 4 we take $Q = 2, 3, 4$ depending upon the database, where $Q$ is chosen via cross validation to optimize classification performance. Higher order moments are not considered as they become increasingly unstable, and we report results for both normalized and unnormalized moments. In what follows we discuss the unnormalized moments, since their presentation is simpler and we use them in conjunction with fully connected layers (FCL) for classification purposes, but the same principles also apply to normalized moments (e.g., used with SVM and logistic regression in our classification results). The invariants $Sx(q)$ do not capture the full variability of $x$ and hence the graph $G$ upon which the signal $x$ is defined. We thus complement these moments with summary statistics derived from the wavelet coefficients of $x$, which in turn will lead naturally to the graph ConvNet structure of the geometric scattering transform.
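As a small illustration of Eq. (3), the unnormalized zeroth-order moments of a signal can be computed as follows (a sketch under the same NumPy conventions as above):

```python
def zeroth_order_moments(x, Q):
    """Unnormalized moments Sx(q) = sum_l x(v_l)^q for q = 1, ..., Q (Eq. 3)."""
    return np.array([np.sum(x ** q) for q in range(1, Q + 1)])
```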
Observe, analogously to the Euclidean setting, that in computing $Sx(1)$, which is the summation of $x(v_\ell)$ over $V$, we have captured the zero frequency of $x$ since $\sum_{\ell=1}^{n} x(v_\ell) = x \cdot \mathbf{1} = \sqrt{n}\,\widehat{x}(0)$. Higher order moments of $x$ can incorporate the full range of frequencies in $x$, e.g., $Sx(2) = \sum_{\ell=1}^{n} x(v_\ell)^2 = \sum_{k=1}^{n} \widehat{x}(k)^2$, but they are mixed into one invariant coefficient. We can separate and recapture the high frequencies of $x$ by computing its wavelet coefficients $\Psi^{(J)}x$, which were defined in (2). However, $\Psi^{(J)}x$ is not invariant to permutations of the vertex indices; in fact, it is covariant (or equivariant). Before summing the individual wavelet coefficient vectors $\Psi_j x$, though, we must first apply a pointwise nonlinearity. Indeed, define the $n \times 1$ vector $d(v_\ell) = \deg(v_\ell)$, and note that $\Psi_j x \cdot d = 0$ since one can show that $d$ is a left eigenvector of $P$ with eigenvalue $1$. If $G$ is a regular graph then $d = c\mathbf{1}$, from which it follows that $\Psi_j x \cdot \mathbf{1} = 0$. For more general graphs $d(v_\ell) \geq 0$ for $v_\ell \in V$, which implies that for many graphs $\mathbf{1} \cdot d$ will be the dominating coefficient in an expansion of $\mathbf{1}$ in an orthogonal basis containing $d$; it follows that in these cases $|\Psi_j x \cdot \mathbf{1}| \ll 1$.
We thus apply the absolute value nonlinearity, to obtain nonlinear covariant coefficients $|\Psi^{(J)}x| = \{|\Psi_j x| : 1 \leq j \leq J\}$. We use absolute value because it is covariant to vertex permutations, non-expansive, and when combined with traditional wavelet transforms on Euclidean domains, yields a provably stable scattering transform for $q = 1$. Furthermore, initial theoretical results in Zou & Lerman (2018); Gama et al. (2018) indicate that similar graph based scattering transforms possess certain types of stability properties as well. As in (3), we extract invariant coefficients from $|\Psi_j x|$ by computing its moments, which define the first order geometric scattering moments:
$$Sx(j, q) = \sum_{\ell=1}^{n} |\Psi_j x(v_\ell)|^q, \quad 1 \leq j \leq J, \; 1 \leq q \leq Q \qquad (4)$$
These first order scattering moments aggregate complementary multiscale geometric descriptions of $G$ into a collection of invariant multiscale statistics. These invariants give a finer partition of the frequency responses of $x$. For example, whereas $Sx(2)$ mixed all frequencies of $x$, we see that $Sx(j, 2)$ only mixes the frequencies of $x$ captured by the graph wavelet $\Psi_j$.
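A corresponding sketch of the first-order moments of Eq. (4), given the wavelet matrices from the previous section, reads:

```python
def first_order_moments(x, wavelets, Q):
    """First-order moments Sx(j, q) = sum_l |Psi_j x(v_l)|^q (Eq. 4)."""
    feats = []
    for Psi in wavelets:
        u = np.abs(Psi @ x)                  # covariant coefficients |Psi_j x|
        feats += [np.sum(u ** q) for q in range(1, Q + 1)]
    return np.array(feats)
```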
First order geometric scattering moments can be augmented with second order geometric scattering moments by iterating the graph wavelet and absolute value transforms, which leads naturally to the structure of a graph ConvNet. These moments are defined as:
$$Sx(j, j', q) = \sum_{i=1}^{n} \left|\Psi_{j'}|\Psi_j x|(v_i)\right|^q, \quad 1 \leq j < j' \leq J, \; 1 \leq q \leq Q \qquad (5)$$
which consists of reapplying the wavelet transform operator $\Psi^{(J)}$ to each $|\Psi_j x|$ and computing the summary statistics of the magnitudes of the resulting coefficients. The intermediate covariant coefficients $|\Psi_{j'}|\Psi_j x||$ and resulting invariant statistics $Sx(j, j', q)$ couple two scales $2^j$ and $2^{j'}$ within the graph $G$, thus creating features that bind patterns of smaller subgraphs within $G$ with patterns of larger subgraphs (e.g., circles of friends of individual people with larger community structures in social network graphs). The transform can be iterated additional times, leading to third order features and beyond, and thus has the general structure of a graph ConvNet.
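The second-order moments of Eq. (5) then reapply the wavelet transform to each first-order covariant coefficient vector; a sketch:

```python
def second_order_moments(x, wavelets, Q):
    """Second-order moments Sx(j, j', q) with j < j' (Eq. 5)."""
    feats = []
    for j, Psi_j in enumerate(wavelets):
        u = np.abs(Psi_j @ x)                # |Psi_j x|
        for Psi_jp in wavelets[j + 1:]:      # only coarser scales j' > j
            v = np.abs(Psi_jp @ u)
            feats += [np.sum(v ** q) for q in range(1, Q + 1)]
    return np.array(feats)
```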
The collection of graph scattering moments $Sx = \{Sx(q),\, Sx(j, q),\, Sx(j, j', q)\}$ (illustrated in Fig. 2(a)) provides a rich set of multiscale invariants of the graph $G$. These can be used in supervised settings as input to graph classification or regression models, or in unsupervised settings to embed graphs into a Euclidean feature space for further exploration, as demonstrated in Sec. 4.
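Putting the pieces together, the full invariant representation of a single graph signal can be assembled as below (a sketch relying on the helper functions defined in the preceding snippets):

```python
def scattering_features(A, x, J, Q):
    """Concatenated zeroth-, first-, and second-order moments for one signal x."""
    wavelets = diffusion_wavelets(lazy_random_walk(A), J)
    return np.concatenate([
        zeroth_order_moments(x, Q),
        first_order_moments(x, wavelets, Q),
        second_order_moments(x, wavelets, Q),
    ])
```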
3.2 STABILITY AND CAPACITY OF GEOMETRIC SCATTERING
In order to assess the utility of scattering features for representing graphs, two properties have to be considered: stability and capacity. First, the stability property aims to essentially provide an upper bound on distances between similar graphs that only differ by types of deformations that can
be treated as noise. This property has been the focus of both Zou & Lerman (2018) and Gama et al. (2018), and in particular the latter shows that a diffusion scattering transform yields features that are stable to graph structure deformations whose size can be computed via the diffusion framework (Coifman & Maggioni, 2006) that forms the basis for their construction. While there are some technical differences between the geometric scattering here and the diffusion scattering in Gama et al. (2018), these constructions are sufficiently similar that we can expect both of them to have analogous stability properties. Therefore, we mainly focus here on the complementary property of the scattering transform capacity to provide a rich feature space for representing graph data without eliminating informative variance in them.
We note that even in the classical Euclidean case, while the stability of scattering transforms to deformations can be established analytically (Mallat, 2012), their capacity is typically examined by empirical evidence when applied to machine learning tasks (e.g., Bruna & Mallat, 2011; Sifre & Mallat, 2012; Andén & Mallat, 2014). Similarly, in the graph processing settings, we examine the capacity of our proposed geometric scattering features via their discriminative power in graph data analysis tasks. In Sec. 4.1, we describe extensive numerical experiments for graph classification problems in which our scattering coefficients are utilized in conjunction with several classifiers, namely, fully connected layers (FCL, illustrated in Fig. 2(b)), support vector machine (SVM), and logistic regression. We note that SVM classification over scattering features leads to state of the art results on social network data, as well as outperforming all feed-forward neural network methods in general. Furthermore, for biochemistry data (where graphs represent molecule structures), FCL classification over scattering features outperforms all other feed-forward neural networks, even though we only train the fully connected layers. Finally, to assess the scattering feature space for data representation and exploration, in Sec. 4.2 we examine its qualities when analyzing biochemistry data, with emphasis on enzyme graphs. We show that geometric scattering enables graph embedding in a relatively low dimensional Euclidean space, while preserving insightful properties in the data. Beyond establishing the capacity of our specific construction, these results also indicate the viability of graph scattering transforms in general, as universal feature extractors on graph data, and complement the stability results established in Zou & Lerman (2018) and Gama et al. (2018).
3.3 GEOMETRIC SCATTERING COMPARED TO OTHER FEED FORWARD GRAPH CONVNETS
We give a brief comparison of geometric scattering with other graph ConvNets, with particular interest in isolating the key principles for building accurate graph ConvNet classifiers. We begin by remarking that like several other successful graph neural networks, the graph scattering transform is covariant or equivariant to vertex permutations (i.e., commutes with them) until the final features are extracted. This idea has been discussed in depth in various articles, including Kondor et al. (2018b), so we limit the discussion to observing that the geometric scattering transform thus propagates nearly all of the information in x through the multiple wavelet and absolute value layers, since only the absolute value operation removes information on x. As in Verma & Zhang (2018), we aggregate covariant responses via multiple summary statistics (i.e., moments), which are referred to there as a capsule. In the scattering context, at least, this idea is in fact not new and has been previously used in the Euclidean setting for the regression of quantum mechanical energies in Eickenberg et al. (2018; 2017) and texture synthesis in Bruna & Mallat (2018). We also point out that, unlike many deep learning classifiers (graph included), a graph scattering transform extracts invariant statistics at each layer/order. These intermediate layer statistics, while necessarily losing some information in x (and hence G), provide important coarse geometric invariants that eliminate needless complexity in subsequent classification or regression. Furthermore, such layer by layer statistics have proven useful in characterizing signals of other types (e.g., texture synthesis in Gatys et al., 2015).
A graph wavelet transform $\Psi^{(J)}x$ decomposes the geometry of $G$ through the lens of $x$, along different scales. Graph ConvNet algorithms also obtain multiscale representations of $G$, but several works, including Atwood & Towsley (2016) and Zhang et al. (2018), propagate information via a random walk. While random walk operators like $P^t$ act at different scales on the graph $G$, per the analysis in Sec. 2 we see that $P^t$ for any $t$ will be dominated by the low frequency responses of $x$. While subsequent nonlinearities may be able to recover this high frequency information, the resulting transform will most likely be unstable due to the suppression and then attempted recovery of the high frequency content of $x$. Alternatively, features derived from $P^t x$ may lose the high frequency responses of $x$, which are useful in distinguishing similar graphs. The graph wavelet coefficients $\Psi^{(J)}x$, on the other hand, respond most strongly within bands of nearly non-overlapping frequencies, each with a center frequency $k_j$ that depends on $\Psi_j$.
Finally, graph labels are often complex functions of both local and global subgraph structure within G. While graph ConvNets are adept at learning local structure within G, as detailed in Verma & Zhang (2018) they require many layers to obtain features that aggregate macroscopic patterns in the graph. This is due in large part to the use of fixed size filters, which often only incorporate information from the neighbors of any individual vertex. The training of such networks is difficult due to the limited size of many graph classification databases (see Table 4 in Appendix D). Geometric scattering transforms have two advantages in this regard: (a) the wavelet filters are designed; and (b) they are multiscale, thus incorporating macroscopic graph patterns in every layer/order.
4 APPLICATION & RESULTS
4.1 GRAPH CLASSIFICATION
To evaluate the proposed geometric scattering features, we test their effectiveness for graph classification on thirteen datasets commonly used for this task. Out of these, seven datasets contain biochemistry graphs that describe molecular structures of chemical compounds, as described in the following works that introduced them: NCI1 and NCI109, Wale et al. (2008); MUTAG, Debnath et al. (1991); PTC, Toivonen et al. (2003); PROTEINS and ENZYMES, Borgwardt et al. (2005b); and D&D, Dobson & Doig (2003). In these cases, each graph has several associated vertex features x that represent chemical properties of atoms in the molecule, and the classification is aimed to characterize compound properties (e.g., protein types). The other six datasets, which are introduced in Yanardag & Vishwanathan (2015), contain social network data extracted from scientific collaborations (COLLAB), movie collaborations (IMDB-B & IMDB-M), and Reddit discussion threads (REDDIT-B, REDDIT-5K, REDDIT-12K). In these cases there are no inherent graph signals in the data, and therefore we compute general node characteristics (e.g., degree, eccentricity, and clustering coefficient) over them, as is considered standard practice in relevant literature (see, for example,
Verma & Zhang, 2018). A detailed description of each of these datasets appears in the respective references, and they are briefly summarized in Appendix D for completeness.
In all cases, we iterate over all graphs in the database and for each one we associate graph-wide features by (1) computing the scattering features of each of the available graph signals (provided or computed), and (2) concatenating the features of all such signals. Then, the full scattering feature vectors of these graphs are passed to a classifier, which is trained from input labels, in order to infer the class for each graph. We consider three classifiers here: neural network with two/three fully connected hidden layers (FCL), SVM with RBF kernel, or logistic regression. We note that the scattering features (computed as described in Sec. 3) are based on either normalized or unnormalized moments over the entire graph. Here we used unnormalized moments for FCL, and normalized ones for other classifiers, but the difference is subtle and similar results can be achieved for the other combinations. Finally, we also note that all technical design choices for configuring our geometric scattering or the classifiers were done as part of the cross validation described in Appendix E.
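For instance, the SVM variant of the GSC pipeline can be sketched as follows, where X holds one row of (normalized) scattering features per graph and y holds the graph labels, both assumed precomputed as described above; this is an illustrative simplification of the full cross-validation protocol detailed in Appendix E:

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate_gsc_svm(X, y):
    """Ten-fold cross-validated accuracy of an RBF-kernel SVM on scattering features."""
    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
    return scores.mean(), scores.std()
```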
We evaluate the classification results of our three geometric scattering classification (GSC) settings using ten-fold cross validation (as explained in Appendix E) and compare them to 14 prominent methods for graph classification. Out of these, six are graph kernel methods, namely: Weisfeiler-Lehman graph kernels (WL, Shervashidze et al., 2011), propagation kernel (PK, Neumann et al., 2012), Graphlet kernels (Shervashidze et al., 2009), Random walks (RW, Gärtner et al., 2003), deep graph kernels (DGK, Yanardag & Vishwanathan, 2015), and Weisfeiler-Lehman optimal assignment kernels (WL-OA, Kriege et al., 2016). Seven other methods are recent geometric feed forward deep learning algorithms, namely: deep graph convolutional neural network (DGCNN, Zhang et al., 2018), Graph2vec (Narayanan et al., 2017), 2D convolutional neural networks (2DCNN, Tixier et al., 2017), covariant compositional networks (CCN, Kondor et al., 2018a), Patchy-san (PSCN, Niepert et al., 2016, with k = 10), diffusion convolutional neural networks (DCNN, Atwood & Towsley, 2016), and graph capsule convolutional neural networks (GCAPS-CNN, Verma & Zhang, 2018). Finally, one method is the recently introduced recurrent neural network autoencoder for graphs (S2S-N2N-PP, Taheri et al., 2018). Following the standard format of reported classification performances for these methods (per their respective references, see also Appendix A), our results are reported in the form of average accuracy ± standard deviation (in percentages) over the ten cross-validation folds. We remark here that many of them are not reported for all datasets, and hence, we mark N/A when appropriate. For brevity, the comparison is reported here in Fig. 3 in summarized form, as explained below, and in full in Appendix A.
1Accuracy for these methods was reported for less than 3/4 of considered social graph datasets, but with biochemistry data they reach 7/9 of all considered datasets.
Since the scattering transform is independent of training labels, it provides universal graph features that might not be specifically optimal in each individual dataset, but overall provide stable classification results. Further, careful examination of the results of previous methods (feed forward algorithms in particular) shows that while some may excel in specific cases, none of them achieves the best results in all reported datasets. Therefore, to compare the overall classification quality of our GSC methods with related methods, we consider average accuracy aggregated over all datasets, and within each field (i.e., biochemistry and social networks) in the following way. First, out of the thirteen datasets, classification results on four datasets (NCI109, ENZYMES, IMDB-M, REDDIT-12K) are reported significantly less frequently than the others, and therefore we discard them and use the remaining nine for the aggregation. Next, to address reported values versus N/A ones, we set an inclusion criterion of 75% reported datasets for each method. This translates into at most one N/A in each individual field, and at most two N/A overall. For each method that qualifies for this inclusion criterion, we compute its average accuracy over reported values (ignoring N/A ones) within each field and over all datasets; this results in up to three reported values for each method.
The aggregated results of our GSC and 13 of the compared methods appear in Fig. 3(a). These results show that GSC (with SVM) outperforms all other methods on social network data, and in fact, as shown in Appendix B, it achieves state of the art results on two datasets of this type. Additionally, the aggregated results show that our GSC approach (with FCL or SVM) outperforms all other feed forward methods both on biochemistry data and overall in terms of universal average accuracy2. The CCN method is omitted from these aggregated results, as its results in Kondor et al. (2018a) are only reported on four biochemistry datasets. For completeness, a detailed comparison of GSC with this method, which appears in Fig. 3(b), shows that our method outperforms it on two datasets while CCN outperforms GSC on the other two.
4.2 SCATTERING FEATURE SPACE FOR DATA EXPLORATION
Geometric scattering essentially provides a task independent representation of graphs in a Euclidean feature space. Therefore, it is not limited to supervised learning applications, and can be also utilized for exploratory graph-data analysis, as we demonstrate in this section. We focus our discussion on biochemistry data, and in particular on the ENZYMES dataset. Here, geometric scattering features can be considered as providing “signature” vectors for individual enzymes, which can be used to explore interactions between the six top level enzyme classes, labelled by their Enzyme Commission (EC) numbers (Borgwardt et al., 2005a). In order to emphasize the properties of scattering-based feature extraction, rather than downstream processing, we mostly limit our analysis of the scattering feature space to linear operations such as principal component analysis (PCA).
We start by considering the viability of scattering-based embedding for dimensionality reduction of graph data. To this end, we applied PCA to our scattering coefficients (computed from unnormalized moments), while choosing the number of principal components to capture 90% explained variance. In the ENZYMES case, this yields a 16 dimensional subspace of the full scattering features space. While the Euclidean notion of dimensionality is not naturally available in the original dataset, we note that graphs in it have, on average, 124.2 edges, 29.8 vertices, and 3 features per vertex, and therefore the effective embedding of the data into $\mathbb{R}^{16}$ indeed provides a significant dimensionality reduction. Next, to verify the resulting PCA subspace still captures sufficient discriminative information with respect to classes in the data, we compare SVM classification on the resulting low dimensional vectors to the full feature space; indeed, projection on the PCA subspace results in only a small drop in accuracy from 56.85 ± 4.97 (full) to 49.83 ± 5.40 (PCA). Finally, we also consider the dimensionality of each individual class (with PCA and > 90% exp. variance) in the scattering feature space, as we expect scattering to reduce the variability in each class w.r.t. the full feature space. In the ENZYMES case, individual classes have PCA dimensionality ranging between 6 and 10, which is indeed significantly lower than the 16 dimensions of the entire PCA space. Appendix C summarizes these findings, and repeats the described procedure for two additional biochemistry datasets (from Wale et al., 2008) to verify that these are not unique to the specific ENZYMES dataset, but rather indicate a more general trend for geometric scattering feature spaces.
2It should be noted, though, that if NCI109 and ENZYMES were included, the GCAPS-CNN would outperform the GSC. However, many other methods would not be comparable then.
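The PCA embedding used here can be reproduced with a short sketch, where X denotes the (assumed precomputed) matrix of unnormalized scattering features with one row per graph:

```python
from sklearn.decomposition import PCA

def embed_with_pca(X):
    """Project scattering features onto the smallest PCA subspace with >= 90% variance."""
    pca = PCA(n_components=0.90)             # a float selects components by explained variance
    Z = pca.fit_transform(X)
    return Z, pca                            # Z has 16 columns in the reported ENZYMES experiment
```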
To further explore the scattering feature space, we now use it to infer relations between EC classes. First, for each enzyme $e$, with scattering feature vector $v_e$ (i.e., with $Sx$ for all vertex features $x$), we compute its distance from class EC-$j$, with PCA subspace $C_j$, as the projection distance: $\mathrm{dist}(e, \text{EC-}j) = \|v_e - \operatorname{proj}_{C_j} v_e\|$. Then, for each enzyme class EC-$i$, we compute the mean distance of enzymes in it from the subspace of each EC-$j$ class as $D(i, j) = \operatorname{mean}\{\mathrm{dist}(e, \text{EC-}j) : e \in \text{EC-}i\}$. Appendix C summarizes these distances, as well as the proportion of points from each class that have their true EC as their nearest (or second nearest) subspace in the scattering feature space. In general, 48% of enzymes select their true EC as the nearest subspace (with an additional 19% as second nearest), but these proportions vary between individual EC classes. Finally, we use these scattering-based distances to infer EC exchange preferences during enzyme evolution, which are presented in Fig. 4 and validated with respect to established preferences observed and reported in Cuesta et al. (2015). We note that the result there is observed independently from the ENZYMES dataset. In particular, the portion of enzymes considered from each EC is different between these data, since Borgwardt et al. (2005b) took special care to ensure each EC class in ENZYMES has exactly 100 enzymes in it. However, we notice that in fact the portion of enzymes (in each EC) that choose the wrong EC as their nearest subspace, which can be considered as EC "incoherence" in the scattering feature space, correlates well with the proportion of evolutionary exchanges generally observed for each EC in Cuesta et al. (2015), and therefore we use these as EC weights in Fig. 4(c). Our results in Fig. 4 demonstrate that scattering features are sufficiently rich to capture relations between enzyme classes, and indicate that geometric scattering has the capacity to uncover descriptive and exploratory insights in graph data analysis, beyond the supervised graph classification from Sec. 4.1.
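The projection distance defined above can be computed from a PCA model fitted separately to each class; a sketch (assuming one scikit-learn PCA object per EC class):

```python
def projection_distance(v, class_pca):
    """dist(e, EC-j) = ||v_e - proj_{C_j} v_e|| for a per-class fitted PCA."""
    centered = v - class_pca.mean_
    proj = centered @ class_pca.components_.T @ class_pca.components_
    return np.linalg.norm(centered - proj)
```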
5 CONCLUSION
We presented the geometric scattering transform as a deep filter bank for feature extraction on graphs. This transform generalizes the scattering transform, and augments the theoretical foundations of geometric deep learning. Further, our evaluation results on graph classification and data exploration show the potential of the produced scattering features to serve as universal representations of graphs. Indeed, classification with these features with relatively simple classifier models reaches high accuracy results on most commonly used graph classification datasets, and outperforms both traditional and recent deep learning feed forward methods in terms of average classification accuracy over multiple datasets. We note that this might be partially due to the scarcity of labeled big data in this field, compared to more traditional ones (e.g., image or audio classification). However, this trend also correlates with empirical results for the classic scattering transform, which excels in cases with low data availability. Finally, the geometric scattering features provide a new way for computing and considering global graph representations, independent of specific learning tasks. Therefore, they raise the possibility of embedding entire graphs in Euclidean space and computing meaningful distances between graphs with them, which can be used for both supervised and unsupervised learning, as well as exploratory analysis of graph-structured data.
APPENDIX A FULL COMPARISON TABLE
All results come from the respective papers that introduced the methods, with the exception of: (1) social network results of WL, from Tixier et al. (2017); (2) biochemistry and social results of DCNN, from Verma & Zhang (2018); (3) biochemistry (except for D&D) and social results of GK, from Yanardag & Vishwanathan (2015); (4) D&D of GK is from Niepert et al. (2016); and (5) for Graphlets, biochemistry results from Kriege et al. (2016), social results from Tixier et al. (2017).
3DCNN using different training/test split
APPENDIX B STATE OF THE ART RESULTS ON REDDIT DATASETS
APPENDIX C DETAILED TABLES FOR SCATTERING FEATURE SPACE ANALYSIS FROM SECTION 4.2
APPENDIX D DETAILED DATASET DESCRIPTIONS
The details of the datasets used in this work are as follows (see the main text in Sec. 4.1 for references):
NCI1 contains 4,110 chemical compounds as graphs, with 37 node features. Each compound is labeled according to its activity against non-small cell lung cancer and ovarian cancer cell lines, and these labels serve as the classification goal on this data.
NCI109 is similar to NCI1, but with 4,127 chemical compounds and 38 node features.
MUTAG consists of 188 mutagenic aromatic and heteroaromatic nitro compounds (as graphs) with 7 node features. The classification here is binary (i.e., two classes), based on whether or not a compound has a mutagenic effect on a bacterium.
PTC is a dataset of 344 chemical compounds (as graphs) with nineteen node features; the compounds are divided into two classes depending on whether they are carcinogenic in rats.
PROTEINS is a dataset that contains 1,113 proteins (as graphs) with three node features, where the goal of the classification is to predict whether the protein is an enzyme or not.
D&D dataset contains 1,178 protein structures (as graphs) that, similar to the previous one, are classified as enzymes or non-enzymes.
ENZYMES is a dataset of 600 protein structures (as graphs) with three node features. These proteins are divided into six classes of enzymes (labelled by enzyme commission numbers) for classification.
COLLAB is a scientific collaboration dataset that contains 5K graphs. The classification goal here is to predict whether the graph belongs to a subfield of Physics.
IMDB-B is a movie collaboration dataset that contains 1K graphs. The graphs are generated from two genres: Action and Romance; the classification goal is to predict the correct genre for each graph.
IMDB-M is similar to IMDB-B, but with 1.5K graphs & 3 genres: Comedy, Romance, and Sci-Fi.
REDDIT-B is a dataset with 2K graphs, where each graph corresponds to an online discussion thread. The classification goal is to predict whether the graph belongs to a Q&A-based community or a discussion-based community.
REDDIT-5K consists of 5K threads (as graphs) from five different subreddits. The classification goal is to predict the corresponding subreddit for each thread.
REDDIT-12K is similar to REDDIT-5k, but with 11,929 graphs from 12 different subreddits.
Table 4 summarizes the size of available graph data (i.e., number of graphs, and both max & mean number of vertices within graphs) in these datasets, as previously reported in the literature.
Graph signals for social network data: None of the social network datasets has ready-to-use node features. Therefore, in the case of COLLAB, IMDB-B, and IMDB-M, we use the eccentricity, degree, and clustering coefficient of each vertex as characteristic graph signals. In the case of REDDIT-B, REDDIT-5K, and REDDIT-12K, on the other hand, we only use degree and clustering coefficient, due to the presence of disconnected graphs in these datasets.
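These characteristic signals can be computed with standard graph libraries; for example, a NetworkX-based sketch (eccentricity is skipped for graphs that may be disconnected, matching the REDDIT setting):

```python
import networkx as nx
import numpy as np

def node_signals(G, with_eccentricity=True):
    """Per-vertex degree, clustering coefficient, and (optionally) eccentricity."""
    nodes = list(G.nodes())
    deg = dict(G.degree())
    clu = nx.clustering(G)
    cols = [[deg[v] for v in nodes], [clu[v] for v in nodes]]
    if with_eccentricity:                    # requires a connected graph
        ecc = nx.eccentricity(G)
        cols.append([ecc[v] for v in nodes])
    return np.array(cols).T                  # one signal per column
```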
APPENDIX E TECHNICAL DETAILS
The computation of the scattering features described in Section 3 is based on several design choices, akin to typical architecture choices in neural networks. Most importantly, it requires a choice of 1. which statistical moments to use (normalized or unnormalized), 2. the number of wavelet scales to use (given by J), and 3. the number of moments to use (denoted by Q). The configuration used for each dataset in this work is summarized in Table 5, together with specific settings used in the downstream classification layers, as described below.
Once the scattering coefficients are generated through the above processes, they are either fed into a standard classifier (SVM or logistic regression), or into two or three fully connected layers (see Table 5 for specifics) and then a softmax layer that is used to compute the class probabilities. In the latter case, cross entropy loss is minimized during the training process and ReLU is used as the activation function between fully connected layers. In addition, we use mini-batch training with batch size 64 and the ADAM optimization technique. Two learning rates, 0.002 and 0.02, are tested during training. Optimal training epochs are decided through cross validation. Finally, L2 norm regularization is used to avoid overfitting.
Cross validation procedure: Classification evaluation was done with standard ten-fold cross validation procedure. First, the entire dataset is randomly split into ten subsets. Then, in each iteration (or “fold”), nine of them are used as training and validation, and the other one is used for testing classification accuracy. In total, after ten iterations, each of the subsets has been used once for testing, resulting in ten reported classification accuracy numbers for the examined dataset. Finally, the mean and standard deviation of these ten accuracies are computed and reported.
It should be noted that when using fully connected layers, each iteration also performs automatic tuning of the trained classifier, as follows. First, nine iterations are performed, each time using eight subsets (i.e., folds) as training and the remaining one as a validation set, which is used to determine the optimal epoch for network training. Then, the classifier is retrained with all nine subsets. After nine iterations, each of the training/validation subsets has been used once for validation, and we obtain nine classification models, which in turn produce nine predictions (i.e., class probabilities) for each data point in the test subset of the main cross validation. To obtain the final result of this cross validation iteration, we sum up all these predictions and select the class with the highest probability as our final classification result. These results are then compared to the true labels of the test subset to obtain the classification accuracy for this fold.
Software & hardware environment: Geometric scattering and related classification code were implemented in Python with TensorFlow. All experiments were performed in an HPC environment using an intel16-k80 cluster, with a job requesting one node with four processors and two Nvidia Tesla k80 GPUs.
1. What is the main contribution of the paper, and how does it differ from previous works in the field?
2. How does the proposed approach compare to other methods in terms of complexity and stability?
3. What are the strengths and weaknesses of the experimental setup, and how could it be improved to better demonstrate the benefits of the new method?
4. How does the paper position itself in relation to other publications in the field, particularly Gama et al. and Zou & Lerman?
5. Are there any inconsistencies in terminology or notation throughout the paper that need to be addressed?
# Summary of the paper
Inspired by the success of deep filter banks, this paper presents a designed deep filter bank for graphs that is based on random walks. More precisely, the technique uses lazy random walks, expressed in terms of the graph Laplacian, and re-frames this in terms of graph signal processing. Similarly to wavelets, graph node features are calculated at different scales and subsequently summed in order to remain invariant under permutations. Several experiments on graph data sets demonstrate the performance of the new technique.
# Review
This paper is written very well and explains its method with high clarity. The principal issues I see are as follows:
- The originality of the contributions is not clear
- Missing theoretical discussion
- The experimental setup is terse and slightly confusing
Concerning the originality of the paper, the differences to Gama et al., 'Diffusion Scattering Transforms on Graphs' are not made clear. Cursory reading of this publication shows a large degree of similarity. Both of the papers make use of diffusion geometry, but Gama et al. _also_ define a multi-scale filter bank, similar to Eq. 4 and 5. The paper needs to position itself more clearly vis-à-vis this other publication. Is the present approach to be seen more as an application of the theory that was developed in the paper by Gama et al.? What are the key similarities and differences? In terms of space, this could be added to Section 3.2, which could be rephrased as a generic 'Differences to other methods' section and has to be slightly condensed in any case (see my suggestions below). Another publication by Zou & Lerman, 'Graph Convolutional Neural Networks via Scattering', is also cited as an inspiration, but here the differences are larger in my understanding and do not necessitate further justification. Last, the publication 'Graph Capsule Convolutional Neural Networks' by Verma & Zhang is also cited for the definition of 'scattering capsules'. Again, cursory reading of the publication shows that this approach is similar to the presented one; the only difference being which features are used for the definition of capsules. I recommend referring to the invariants as 'capsules' and link it back to Verma & Zhang so that the provenance of the terminology is clear.
Concerning the theoretical part of the paper, I miss a discussion of the complexity of the approach. Such a discussion does not have to be long, but in particular since the paper mentions the applicability of scattering transforms for transfer learning (and also remarks about their universality in Section 4), some space should be devoted to theoretical considerations (memory complexity, runtime complexity). This would strengthen the paper a lot, in particular in light of the complexity of other approaches! Furthermore, an additional experiment about the stability of scattering transforms appears warranted. While I applaud the experimental description in the paper (number of scales, how the maximum scale is chosen, ...), an additional proof or experiment in the appendix should deal with the stability. Let's assume that for extremely large graphs, I am content with 'almost-but-not-quite-as-good' classification performance. Is it possible to achieve this by limiting the number of scales? How much do the results depend on the 'right' choice here?
Concerning the experimental setup, I think that the way (average) accuracies are reported at present is slightly misleading. The paper even remarks about this in footnote 2. While I understand the need of demonstrating the universality of these features, I think that the current setup is not optimal for this. I would instead recommend (in addition to reporting accuracies) a transfer learning setup in which the beneficial properties of the new method can be better explored. More precisely, the claim from Section 4, 4th paragraph ('Since the scattering transform...') needs to be further explored. This appears to be a unique feature of the new method. The current experimental setup does not exploit it. As a side-note, I realize that this might sound like a standard request for 'show more experiments', but I think the paper would be more impactful if it contained one scenario in which its benefits over other approaches are clear.
# Suggestions for improvement
The paper flows extremely well and it is clear that care has been taken to ensure that everything can be understood. I liked the discussion of invariance properties in particular. There are only a few minor things that can be improved:
- 'covariant' and 'equivariant', while common in (graph) signal processing, could be briefly explained to increase accessibility and impact
- 'order' and 'layer' are not used consistently: in the caption of Figure 2a, the term 'order' is used, but for Eq. 4 and 5, for example, the term 'layer' is employed. Since 'layer' is more reminiscent of a DNN, I would suggest to use 'order' throughout the paper, because it meshes better with the way the scattering invariants are defined.
- the notation $Sx$ is slightly overloaded; in Figure 2a, for example, it is not clear at first that the individual cascades are supposed to form a *set*; this is only explained at the end of Section 3.1; to make matters more consistent, the figure should be updated and the combination of individual cascades should be made clear
- In Eq. 5, the bars of the absolute value are not set correctly; the absolute value should cover $\psi_j x(v_i)$ and not $(v_i)$ itself.
- minor 'gripe': $\psi^{(J)}$ is defined as a set in Eq. 2, but it is treated as a matrix or an operator (and also referred to as such); this should be more consistent
- The discussion of the aggregation of multiple statistics in Section 3.2 appears to be somewhat redundant in light of the discussion for Eq. 4 and Eq. 5 in the preceding section
- in the appendix, more details about the training of the FCN should be added; all other parts of the experiments are described in sufficient detail, but the training process requires additional information about learning rates etc. |
ICLR | Title
Restricted Category Removal from Model Representations using Limited Data
Abstract
Deep learning models are trained on multiple categories jointly to solve several real-world problems. However, there can be cases where some of the classes may become restricted in the future and need to be excluded after the model has already been trained on them (Class-level Privacy). This can be due to privacy, ethical, or legal concerns. A naive solution is to simply train the model from scratch on the complete training data while leaving out the training samples from the restricted classes (FDR: full data retraining). But this can be a very time-consuming process. Further, this approach will not work well if we no longer have access to the complete training data and instead only have access to very few training samples. The objective of this work is to remove the information about the restricted classes from the network representations of all layers using limited data without affecting the prediction power of the model for the remaining classes. Simply fine-tuning the model on the limited available training data for the remaining classes will not be able to sufficiently remove the restricted class information, and aggressive fine-tuning on the limited data may also lead to overfitting. We propose a novel solution to achieve this objective that is significantly faster (∼ 200× on ImageNet) than the naive solution. Specifically, we propose a novel technique for identifying the model parameters that are mainly relevant to the restricted classes. We also propose a novel technique that uses the limited training data of the restricted classes to remove the restricted class information from these parameters and uses the limited training data of the remaining classes to reuse these parameters for the remaining classes. The model obtained through our approach behaves as if it was never trained on the restricted classes and performs similarly to FDR (which needs the complete training data). We also propose several baseline approaches and compare our approach with them in order to demonstrate its efficacy.
1 INTRODUCTION
There are several real-world problems in which deep learning models have exceeded human-level performance. This has led to a wide deployment of deep learning models. Deep learning models generally train jointly on a number of categories/classes of data. However, the use of some of these classes may get restricted in the future (restricted classes), and a model with the capability to identify these classes may violate legal/privacy concerns, e.g., a company may legally prevent a deep learning model from having the capability to identify its copyright-protected logo, patented products, and so on. Another example is a treatment prediction model that predicts the best treatment for a patient based on the disease. If one of the treatments for a disease is banned due to its side-effects or ethical concerns, the restricted treatment category has to be excluded from the trained model. Individuals and organizations are becoming increasingly aware of these issues leading to an increasing number of legal cases on privacy issues in recent years. In such situations, the model has to be stripped of its capability to identify these categories. This is a difficult problem to solve, especially if the full training data is no longer available and only a few training examples are available. We present a “Restricted Category Removal from Model Representations with Limited Data” (RCRMR-LD) problem setting that simulates the above problem. In this paper, we propose to solve this problem in a fast and efficient manner.
The objective of the RCRMR-LD problem is to remove the information regarding the restricted classes from the network representations of all layers using the limited training data available without
affecting the ability of the model to identify the remaining classes. If we have access to the full training data, then we can simply exclude the restricted class examples from the training data and perform a full training of the model from scratch using the abundant data (FDR - full data retraining). However, the RCRMR-LD problem setting is based on the scenario that the directive to exclude the restricted classes is received in the future after the model has already been trained on the full data and now only a limited amount of training data is available to carry out this process. Simply training the network from scratch on only the limited training data of the remaining classes will result in severe overfitting and significantly affect the model performance (Baseline 2, as shown in Tables 1, 6).
Another possible solution to this problem is to remove the weights of the fully-connected classification layer of the network corresponding to the excluded classes such that it can no longer classify the excluded classes. However, this approach suffers from a serious problem. Since, in this approach, we only remove some of the weights of the classification layer and the rest of the model remains unchanged, it still contains the information required for recognizing the excluded classes. This information can be easily accessed through the features that the model extracts from the images. Therefore, we can use these features for performing classification. In this paper, we use a nearest prototype-based classifier to demonstrate that the model features still contain information regarding the restricted classes. Specifically, we use the model features of the examples from the limited training data to compute the average class prototype for each class and create a nearest class prototype-based classifier using them. Next, for any given test image, we extract its features using the model and then find the class prototype closest to the given test image. This nearest class prototype-based classifier performs close to the original fully-connected classifier on the excluded classes as shown in Tables 1, 6 (Baseline 1). Therefore, even after using this approach, the resulting model still contains information regarding the restricted classes. Another possible approach can be to apply the standard fine-tuning approach to the model using the limited available training data of the remaining classes (Baseline 8). However, fine-tuning on such limited training data is not able to sufficiently remove the restricted class information from the model representations (see Tables 1, 6), and aggressive fine-tuning on the limited training data may result in overfitting.
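A minimal sketch of such a nearest class-prototype classifier (names ours; the feature vectors are assumed to be extracted from the penultimate layer of the trained model) illustrates why the restricted class information remains accessible:

```python
import numpy as np

def nearest_prototype_classifier(feats, labels):
    """Build a nearest-mean classifier from limited per-class feature vectors."""
    classes = np.unique(labels)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])

    def predict(f):
        return classes[np.argmin(np.linalg.norm(protos - f, axis=1))]
    return predict
```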
Considering the problems faced by the naive approaches mentioned above, we propose a novel "Efficient Removal with Preservation" (ERwP) approach to address the RCRMR-LD problem. First, we propose a novel technique for identifying the model parameters that are highly relevant to the restricted classes; to the best of our knowledge, there are no existing prior works for finding such class-specific relevant parameters. Next, we propose a novel technique that optimizes the model on the limited available training data in such a way that the restricted class information is discarded from the restricted class relevant parameters, and these parameters are reused for the remaining classes.
To the best of our knowledge, this is the first work that addresses the RCRMR-LD problem. Therefore, we also propose several baseline approaches for this problem (see Sec. 11.2). However, our proposed approach significantly outperforms all the proposed baseline approaches. Our proposed approach requires very few epochs to address the RCRMR-LD problem and is, therefore, very fast (∼ 200 times faster than the full data retraining model for the ImageNet dataset) and efficient. The model obtained after applying our approach forgets the excluded classes to such an extent that it behaves as though it was never trained on examples from the excluded classes. The performance of our model is very similar to the full data retraining (FDR) model (see Sec. 8.1, Fig. 5). We also propose the performance metrics needed to evaluate the performance of any approach for the RCRMR-LD problem. We perform experiments on several datasets to demonstrate the efficacy of our method.
2 PROBLEM SETTING
In this work, we present the restricted category removal from model representations with limited data (RCRMR-LD) problem setting, in which a deep learning model $M_o$ trained on a specific dataset has to be modified to exclude information regarding a set of restricted/excluded classes from all layers of the deep learning model without affecting its identification power for the remaining classes (see Fig. 1). The classes that need to be excluded are referred to as the restricted/excluded classes. Let $\{C_1^e, C_2^e, \ldots, C_{N_e}^e\}$ be the restricted/excluded classes, where $N_e$ refers to the number of excluded classes. The remaining classes of the dataset are the remaining/non-excluded classes. Let $\{C_1^{ne}, C_2^{ne}, \ldots, C_{N_{ne}}^{ne}\}$ be the non-excluded classes, where $N_{ne}$ refers to the number of remaining/non-excluded classes. Additionally, we only have access to a limited amount of training data for the restricted classes and the remaining classes for carrying out this process. Therefore, any approach for addressing this problem can only utilize this limited training data.
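As a rough illustration of this setup, the sketch below builds the excluded/remaining class split and retains only a small fraction of the training images per class. The field layout of `samples` and the choice of the last classes as restricted are assumptions for illustration (the appendix uses the last 20 of 100 classes with 10% of the data for CIFAR-100).

```python
import random
from collections import defaultdict

def make_limited_split(samples, num_classes, num_excluded, fraction=0.1, seed=0):
    """samples: iterable of (image, label) pairs from the full training set."""
    excluded = set(range(num_classes - num_excluded, num_classes))
    per_class = defaultdict(list)
    for img, label in samples:
        per_class[label].append((img, label))
    rng = random.Random(seed)
    limited = []
    for label, items in per_class.items():
        rng.shuffle(items)
        limited.extend(items[:max(1, int(fraction * len(items)))])
    # The limited data covers both the restricted and the remaining classes.
    return limited, excluded
```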
3 RCRMR-LD PROBLEM IN REAL WORLD SCENARIOS
A real-world scenario where our proposed RCRMR-LD problem can arise is federated learning (McMahan et al., 2017). In the federated learning setting, there are multiple collaborators that have a part of the training data stored locally, and a model is trained collaboratively using these private data without sharing or collating the data due to privacy concerns. Suppose organization A has a part of the training data, and there are other collaborators that have other parts of the training data for the same classes. Organization A collaboratively trains a model with other collaborators using federated learning. After the model has been trained, a few classes may become restricted in the future due to some ethical or privacy concerns, and these classes should be removed from the model. However, the other collaborators may not be available or may charge a huge amount of money for collaborating again to train a fresh model from scratch. In this case, organization A does not have access to the full training data of the non-excluded/remaining classes that it can use to re-train a model from scratch in order to exclude the restricted classes information. This clearly shows that the RCRMR-LD problem is possible in federated learning.
Another real-world scenario is the incremental learning setting (Rebuffi et al., 2017; Kemker & Kanan, 2018), where the model receives training data in the form of sequentially arriving tasks. Each task contains a new set of classes. During a training session t, the model receives the task t for training and cannot access the full data of the previous tasks. Instead, the model has access to very few exemplars of the classes in the previous tasks. Suppose before training a model on training session t, it is noticed that some classes from a previous task (< t) have to be removed from the model since those classes have become restricted due to privacy or ethical concerns. In this case, only a limited number of exemplars are available for all these previous classes (restricted and remaining). This demonstrates that the RCRMR-LD problem is also possible in the incremental learning setting. We experimentally demonstrate in Sec. 8.3, how our approach can address the RCRMR-LD problem in the incremental learning setting.
4 PROPOSED METHOD
Let $B$ refer to a mini-batch (of size $S$) from the available limited training data; $B$ contains training datapoints from the restricted/excluded classes $\{(x_i^e, y_i^e) \,|\, (x_1^e, y_1^e), \ldots, (x_{S_e}^e, y_{S_e}^e)\}$ and from the remaining/non-excluded classes $\{(x_j^{ne}, y_j^{ne}) \,|\, (x_1^{ne}, y_1^{ne}), \ldots, (x_{S_{ne}}^{ne}, y_{S_{ne}}^{ne})\}$. Here, $(x_i^e, y_i^e)$ refers to a training datapoint from the excluded classes, where $x_i^e$ is an image, $y_i^e$ is the corresponding label, and $y_i^e \in \{C_1^e, C_2^e, \ldots, C_{N_e}^e\}$. $(x_j^{ne}, y_j^{ne})$ refers to a training datapoint from the non-excluded classes, where $x_j^{ne}$ is an image, $y_j^{ne}$ is the corresponding label, and $y_j^{ne} \in \{C_1^{ne}, C_2^{ne}, \ldots, C_{N_{ne}}^{ne}\}$. Here, $S_e$ and $S_{ne}$ refer to the number of training examples in the mini-batch from the excluded and non-excluded classes, respectively, such that $S = S_e + S_{ne}$. $N_e$ and $N_{ne}$ refer to the number of excluded and non-excluded classes, respectively. Let $M$ refer to the deep learning model being trained using our approach and $M_o$ the original trained deep learning model.
In a trained model, some of the parameters may be highly relevant to the restricted classes, and the performance of the model on the restricted classes mainly depends on such highly relevant parameters. Therefore, in our approach, we focus on removing the excluded class information from these restricted class relevant parameters. Since the model is trained on all the classes jointly, the parameters are shared across the different classes, so identifying these class-specific relevant parameters is very difficult. Let us consider a model that is trained on color images of a class. If we now train it on grayscale images of the class, then the model has to learn to identify these new images. In order to do so, the parameters relevant to that class will receive large gradient updates as compared to the other parameters (see Sec. 9.1). We propose a novel approach for identifying the relevant parameters for the restricted classes using this idea. For each restricted class, we choose the training images belonging to that class from the limited available training data. Next, we apply a grayscale data augmentation technique/transformation $f$ to these images so that they become different from the images that the original model was trained on (assuming that the original model has not been trained on grayscale images). We can also use other data augmentation techniques that are not seen during the training process of the original model and that do not change the class of the image (refer to Sec. 11.7 in the appendix). Next, we combine the predictions for each training image into a single average prediction and perform backpropagation. During the backpropagation, we study the gradients of all the parameters in each layer of the model. Accordingly, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters results in the maximum degradation of model performance on that restricted class. We provide a detailed description of the process for identifying the restricted class relevant parameters in Sec. 11.1 of the appendix. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters $\Theta_{rel}^{ex}$ (see Fig. 2). Please note that we use this process only to identify $\Theta_{rel}^{ex}$; we do not update the model parameters during this step.
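The following sketch illustrates one plausible reading of this relevance-scoring step in PyTorch: the restricted-class images are converted to grayscale, their predictions are averaged into a single scalar, and the absolute gradient of every parameter is recorded as its relevance score. The exact scalar that is backpropagated is an assumption, and the sketch requires a torchvision version whose transforms accept batched tensors.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

grayscale = transforms.Grayscale(num_output_channels=3)  # keep 3-channel input

def relevance_scores(model, images, class_idx):
    """Return |gradient| per parameter for one restricted class."""
    model.zero_grad()
    logits = model(grayscale(images))        # augmented restricted-class images
    # Backpropagate a single averaged prediction for the restricted class.
    F.softmax(logits, dim=1)[:, class_idx].mean().backward()
    return {name: p.grad.detach().abs()
            for name, p in model.named_parameters() if p.grad is not None}
```

The per-layer parameters with the largest scores form the candidate set from which $\Theta_{rel}^{ex}$ is selected (see Sec. 11.1).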
Pruning the relevant parameters for a restricted class can severely impact the performance of the model for that class (see Sec. 9.1). However, this may also degrade the performance of the model on the non-excluded classes because the parameters are shared across multiple classes. Therefore, we cannot address the RCRMR-LD problem by pruning the relevant parameters of the excluded classes. Fine-tuning these parameters on the limited remaining class data will also not be able to sufficiently remove the restricted class information from the model (see Sec. 11.10 in the appendix). Based on this, we propose to address the RCRMR-LD problem by optimizing the relevant parameters of the restricted classes to remove the restricted class information from them and to reuse them for the remaining classes.
After identifying the restricted class relevant parameters, our ERwP approach uses a classification loss based on the cross-entropy loss function to optimize the restricted class relevant parameters of the model on each mini-batch (see Fig. 3). The gradient ascent optimization algorithm can be used to maximize a loss function and thereby encourage the model to perform badly on the given input. Therefore, we use gradient ascent on the classification loss for the limited restricted class training examples to remove the information regarding the restricted classes from $\Theta_{rel}^{ex}$. We achieve this by multiplying the classification loss for the augmented training examples from the excluded classes by a constant factor of -1. We also optimize $\Theta_{rel}^{ex}$ using gradient descent on the classification loss for the limited remaining class training examples, in order to reuse these parameters for the remaining classes. We validate this approach through various ablation experiments, as shown in Sec. 9.2. The classification losses for the examples from the excluded and non-excluded classes and the overall classification loss for each mini-batch are defined as follows.
$$\mathcal{L}_c^e = \sum_{i=1}^{S_e} -1 \cdot \ell\left(y_i^e, y_i^{e*}\right) \qquad (1)$$

$$\mathcal{L}_c^{ne} = \sum_{j=1}^{S_{ne}} \ell\left(y_j^{ne}, y_j^{ne*}\right) \qquad (2)$$

$$\mathcal{L}_c = \frac{1}{S}\left(\mathcal{L}_c^e + \mathcal{L}_c^{ne}\right) \qquad (3)$$

where $y_i^{e*}$ and $y_j^{ne*}$ refer to the predicted class labels for $x_i^e$ and $x_j^{ne}$, respectively, and $\ell(\cdot,\cdot)$ refers to the cross-entropy loss function. $\mathcal{L}_c^e$ and $\mathcal{L}_c^{ne}$ refer to the classification losses for the examples from the excluded and non-excluded classes in the mini-batch, respectively, and $\mathcal{L}_c$ refers to the overall classification loss for each mini-batch.
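A minimal sketch of Eqs. (1)-(3): the gradient ascent on the excluded examples is implemented simply by negating their per-example cross-entropy loss. The boolean mask `is_excluded` marking the restricted-class examples in the mini-batch is an assumed convention.

```python
import torch.nn.functional as F

def classification_loss(logits, labels, is_excluded):
    """Mean of Eq. (3): ascent on excluded examples, descent on the rest."""
    per_example = F.cross_entropy(logits, labels, reduction="none")  # (S,)
    sign = 1.0 - 2.0 * is_excluded.float()   # -1 for excluded, +1 otherwise
    return (sign * per_example).mean()       # (L_c^e + L_c^ne) / S
```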
Since all the network parameters were jointly trained on all the classes (restricted and remaining), the restricted class relevant parameters also contain information relevant to the remaining classes. Applying the above process alone will still harm the model's predictive power for the non-excluded classes (as shown in Sec. 9.2, Table 3). This is because the gradient ascent optimization strategy will also erase some of the relevant information regarding the remaining classes. Further, applying $\mathcal{L}_c^{ne}$ on the limited training examples of the remaining classes will lead to overfitting and will not be effective enough to fully preserve the model performance on the remaining classes. In order to ensure that the model's predictive power for the non-excluded classes does not change, we use a knowledge distillation-based regularization loss. Knowledge distillation (Hinton et al., 2014) ensures that the predictive power of the teacher network is replicated in the student network. In this problem setting, we want the final model to replicate the predictive power of the original model for the remaining classes. Therefore, given any training example, we use the knowledge distillation-based regularization loss to ensure that the output logits produced by the model corresponding to only the non-excluded classes remain the same as those produced by the original model. We apply the knowledge distillation loss to the limited training examples from both the excluded and remaining classes in order to preserve the non-excluded class logits of the model for any input image. We validate this regularization loss through ablation experiments, as shown in Table 3. We use the original model $M_o$ (before applying ERwP) as the teacher network and the current model $M$ being processed by ERwP as the student network for the knowledge distillation process. Please note that the optimization for this loss is also carried out only for the restricted class relevant parameters of the model. Let $KD$ refer to the knowledge distillation loss function. It computes the Kullback-Leibler (KL) divergence between the soft predictions of the teacher and the student networks and can be defined as follows:
$$KD(p_s, p_t) = KL(\sigma(p_s), \sigma(p_t)) \qquad (4)$$

where $\sigma(\cdot)$ refers to the softmax activation function that converts the logit $a_i$ for each class $i$ into a probability by comparing $a_i$ with the logits of the other classes $a_j$, i.e., $\sigma(a_i) = \frac{\exp(a_i/\kappa)}{\sum_j \exp(a_j/\kappa)}$. $\kappa$ refers to the temperature (Hinton et al., 2014), and $KL$ refers to the KL-divergence function. $p_s$ and $p_t$ refer to the logits produced by the student network and the teacher network, respectively.
The knowledge distillation-based regularization losses in our approach are defined as follows.
$$\mathcal{L}_{kd}^e = \sum_{i=1}^{S_e} KD\left(M(x_i^e)[C^{ne}],\, M_o(x_i^e)[C^{ne}]\right) \qquad (5)$$

$$\mathcal{L}_{kd}^{ne} = \sum_{j=1}^{S_{ne}} KD\left(M(x_j^{ne})[C^{ne}],\, M_o(x_j^{ne})[C^{ne}]\right) \qquad (6)$$

$$\mathcal{L}_{kd} = \frac{1}{S}\left(\mathcal{L}_{kd}^e + \mathcal{L}_{kd}^{ne}\right) \qquad (7)$$

where $M(\#)[C^{ne}]$ and $M_o(\#)[C^{ne}]$ refer to the output logits corresponding to the remaining classes produced by $M$ and $M_o$, respectively, and $\#$ can be either $x_i^e$ or $x_j^{ne}$. $\mathcal{L}_{kd}^e$ and $\mathcal{L}_{kd}^{ne}$ refer to the knowledge distillation-based regularization losses for the examples from the excluded and non-excluded classes, respectively, and $\mathcal{L}_{kd}$ refers to the overall knowledge distillation-based regularization loss for each mini-batch.
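A sketch of Eqs. (4)-(7), assuming `remaining_idx` holds the indices of the non-excluded classes: the temperature-softened KL divergence is applied only over the non-excluded logits of the student and the frozen teacher.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, remaining_idx, kappa=2.0):
    """KL divergence between softened non-excluded-class distributions."""
    ps = student_logits[:, remaining_idx] / kappa   # keep only C^ne logits
    pt = teacher_logits[:, remaining_idx] / kappa
    return F.kl_div(F.log_softmax(ps, dim=1),
                    F.softmax(pt, dim=1),
                    reduction="batchmean")
```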
The total loss $\mathcal{L}_{erwp}$ of our approach for each mini-batch is defined as follows:

$$\mathcal{L}_{erwp} = \mathcal{L}_c + \beta \mathcal{L}_{kd} \qquad (8)$$

where $\beta$ is a hyper-parameter that controls the contribution of the knowledge distillation-based regularization loss. We use this loss to fine-tune the model for very few epochs.
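Putting the pieces together, one ERwP fine-tuning step could look like the sketch below. It assumes the optimizer was constructed over only the restricted class relevant parameters $\Theta_{rel}^{ex}$ (all other parameters frozen) and reuses the `classification_loss` and `kd_loss` sketches above.

```python
import torch

def erwp_step(model, model_o, optimizer, x, y, is_excluded, remaining_idx, beta):
    """One mini-batch update of Eq. (8); optimizer holds only Theta_rel^ex."""
    with torch.no_grad():
        teacher_logits = model_o(x)                        # frozen original model
    logits = model(x)
    l_c = classification_loss(logits, y, is_excluded)      # Eq. (3), sketched above
    l_kd = kd_loss(logits, teacher_logits, remaining_idx)  # Eq. (7), sketched above
    loss = l_c + beta * l_kd                               # Eq. (8)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```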
5 RELATED WORK
Pruning involves removing redundant and unimportant weights (Carreira-Perpinán & Idelbayev, 2018; Dong et al., 2017; Guo et al., 2016; Han et al., 2015a;b; Tung & Mori, 2018; Zhang et al., 2018) or filters (He et al., 2018; 2019a;b;c; Li et al., 2016) from a deep learning model without affecting the model performance. In contrast, our approach identifies class-specific important parameters, and therefore, pruning techniques cannot be applied in our setting. In the incremental learning setting (Douillard et al., 2020; Hou et al., 2019; Tao et al., 2020; Yu et al., 2020; Liu et al., 2021), the objective is to preserve the predictive power of the model for previously seen classes while learning a new set of classes. In contrast, our proposed RCRMR-LD problem setting involves removing the information regarding specific classes from the pre-trained model while preserving the capacity of the model to identify the remaining classes. Privacy-preserving deep learning (Nan & Tao, 2020; Louizos et al., 2015; Edwards & Storkey, 2015; Hamm, 2017) involves learning representations that incorporate features from the data relevant to the given task and ignore sensitive information (such as the identity of a person). In contrast, the objective of the RCRMR-LD problem setting is to achieve class-level privacy, i.e., if a class is declared private/restricted, then all information about this class should be removed from the model trained on it, without affecting its ability to identify the remaining classes. The authors in (Ginart et al., 2019) propose an approach to delete individual data points from trained machine learning models such as clustering models. In contrast, RCRMR-LD involves removing the information of a set of classes from all layers of a deep learning model. Therefore, the approach proposed in (Ginart et al., 2019) cannot be applied to the RCRMR-LD problem setting.
6 BASELINES
We propose 9 baseline models for the RCRMR-LD problem and compare our proposed approach with them. The baseline 1 involves deleting the weights of the fully-connected classification layer corresponding to the excluded classes. Baselines 2, 3, 4, 5 involve training the model on the limited training data of the remaining classes. Baselines 6, 7, 8, 9 involve fine-tuning the model on the available limited training data. Please refer to Sec. 11.2 in the appendix for details about the baselines.
7 PERFORMANCE METRICS
In the RCRMR-LD problem setting, we propose three performance metrics to evaluate any method: forgetting accuracy (FAe), forgetting prototype accuracy (FPAe), and constraint accuracy (CAne). The forgetting accuracy refers to the fully-connected classification layer accuracy of the model on the excluded classes. The forgetting prototype accuracy refers to the nearest class prototype-based classifier accuracy of the model on the excluded classes. The constraint accuracy refers to the fully-connected classification layer accuracy of the model on the non-excluded classes.
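A sketch of how FAe and CAne can be computed from the fully-connected head follows; FPAe would reuse the nearest-prototype classifier sketched earlier, restricted to the excluded test classes. The data-loading details are assumptions.

```python
import torch

@torch.no_grad()
def forgetting_and_constraint_accuracy(model, loader, excluded):
    """FA_e / CA_ne: top-1 accuracy on excluded / non-excluded test classes."""
    hits = {"e": 0, "ne": 0}
    totals = {"e": 0, "ne": 0}
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        for p, t in zip(pred.tolist(), y.tolist()):
            key = "e" if t in excluded else "ne"
            hits[key] += int(p == t)
            totals[key] += 1
    fa_e = 100.0 * hits["e"] / max(totals["e"], 1)     # forgetting accuracy
    ca_ne = 100.0 * hits["ne"] / max(totals["ne"], 1)  # constraint accuracy
    return fa_e, ca_ne
```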
In order to judge any approach on the basis of these metrics, we proceed in the following order. First, we analyze the constraint accuracy (CAne) of the model produced by the given approach to verify whether the approach has preserved the prediction power of the model for the non-excluded classes. The CAne of the model should be close to that of the original model. If this condition is not satisfied, then the approach is not suitable for this problem, and we need not analyze the other metrics; if the constraint accuracy is not maintained, the overall usability of the model is hurt significantly. Next, we analyze the forgetting accuracy (FAe) of the model to verify whether the excluded class information has been removed from the model at the classifier level. The FAe of the model should be as close to 0% as possible. Finally, we analyze the forgetting prototype accuracy (FPAe) of the model to verify whether the excluded class information has been removed from the model at the feature level. The FPAe of the model should be significantly less than that of the original model. However, the FPAe will not become zero, since any trained model learns to extract meaningful features, which helps the nearest class prototype-based classifier achieve some non-negligible accuracy even on the excluded classes. Therefore, for a better analysis of the level of forgetting of the excluded classes at the feature level, we compare the FPAe of the model with the FPAe of the FDR model. The FDR model is a good candidate for this analysis since it has not been trained on the excluded classes (it is only trained on the complete dataset of the remaining classes), and it still achieves non-negligible performance on the excluded classes (see Sec. 8.1). However, it should be noted that this comparison is only for analysis, and the comparison is not fair, since the FDR model needs to train on the entire dataset (except the excluded classes).
8 EXPERIMENTS
We have reported the experimental results for the CIFAR-100 and ImageNet-1k datasets in this section. We have provided the results on the CUB-200 dataset in the appendix. Please refer to the appendix for the details regarding the dataset and implementation.
8.1 CIFAR-100 RESULTS
We report the performance of different baselines and our proposed ERwP method on the RCRMR-LD problem using the CIFAR-100 dataset with different architectures in Table 1. We observe that baseline 1 (weight deletion) achieves high constraint accuracy CAne and 0% forgetting accuracy FAe, but its forgetting prototype accuracy FPAe remains the same as the original model for all three architectures, i.e., ResNet-20/56/164. Therefore, baseline 1 fails to remove the excluded class information from the model at the feature level. Baseline 2 is not able to preserve the constraint accuracy CAne even though it performs full training on the limited remaining class data. Baseline 3 achieves higher CAne than baseline 2, but its constraint accuracy is still too low. Baselines 4 and 5 demonstrate significantly better constraint accuracy than baselines 2 and 3, but their constraint accuracy is still significantly lower than that of the original model (except baseline 5 for ResNet-20). Baseline 5 with ResNet-20 maintains the constraint accuracy and achieves 0% forgetting accuracy FAe, but its FPAe is still significantly high, so it is unable to remove the excluded class information from the model at the feature level. The fine-tuning-based baselines 6 and 7 are able to significantly reduce the forgetting accuracy FAe, but their constraint accuracy CAne drops significantly. The fine-tuning-based baselines 8 and 9 only fine-tune the model on the limited remaining class data, and as a result, they are not able to sufficiently reduce either the forgetting accuracy FAe or the forgetting prototype accuracy FPAe.
Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to that of the original model for all three architectures. It achieves close to 0% FAe. Further, it achieves a significantly lower FPAe than the original model. Specifically, the FPAe of our approach is lower than that of the original model by absolute margins of 17.19%, 20.81%, and 20.17% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. The FPAe of the FDR model is 44.20%, 45.40%, and 51.85% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. Therefore, the FPAe of our approach is close to that of the FDR model, within absolute margins of 3.86%, 2.44%, and 4.38% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. Our ERwP approach thus makes the model behave similarly to the FDR model even though it was trained on only limited data from the excluded and remaining classes. Further, ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method, which is trained on the full training data for 300 epochs.

Table 2: Results on the ImageNet-1k dataset (top-1 and top-5 FAe and CAne; see Sec. 8.2).

Model    Method     Top-1 FAe   Top-1 CAne   Top-5 FAe   Top-5 CAne
Res-18   Original     69.76%      69.76%       89.58%      89.02%
Res-18   ERwP          0.28%      69.13%        1.01%      88.93%
Res-50   Original     76.30%      76.11%       93.04%      92.84%
Res-50   ERwP          0.25%      75.45%        2.55%      92.39%
Mob-V2   Original     72.38%      70.83%       91.28%      90.18%
Mob-V2   ERwP          0.17%      70.81%        0.81%      89.95%
8.2 IMAGENET RESULTS
Table 2 reports the experimental results for different approaches to the RCRMR-LD problem on the ImageNet-1k dataset using the ResNet-18, ResNet-50, and MobileNet-V2 architectures. Our proposed ERwP approach achieves a top-1 constraint accuracy CAne that is very close to that of the original model, within absolute margins of 0.63%, 0.66%, and 0.02% for the ResNet-18, ResNet-50, and MobileNet-V2 architectures, respectively. It achieves close to 0% top-1 forgetting accuracy FAe for all three architectures. Therefore, our approach performs well even on the large-scale ImageNet-1k dataset. Further, ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method, which is trained on the full data for 100 epochs.
8.3 RCRMR-LD PROBLEM IN INCREMENTAL LEARNING
In this section, we experimentally demonstrate how the RCRMR-LD problem in the incremental learning setting is addressed using our proposed approach. We consider an incremental learning setting on the CIFAR-100 dataset in which each task contains 20 classes. We use the BIC (Wu et al., 2019) method for incremental learning on this dataset. The exemplar memory size is fixed at 2000, as per the setting in (Wu et al., 2019). In this setting, there are 5 tasks. Let us assume that the model (M4) has already been trained on 4 tasks (80 classes), and we are in the fifth training session. Suppose, at this stage, it is noticed that all the classes in the first task (20 classes) have become restricted and need to be removed before the model is trained on task 5. However, we only have a limited number of exemplars of the 80 classes seen till now, i.e., 2000/80 = 25 per class. We apply our proposed approach to the model obtained after training session 4, and the results are reported in Table 4. The results indicate that our approach modifies the model obtained after session 4 such that the forgetting accuracy of the restricted classes approaches 0% while the constraint accuracy of the remaining classes is not affected. In fact, the modified model behaves as if it was never trained on the classes from task 1. We can now perform the incremental training of the modified model on task 5.
9 ABLATION STUDIES
9.1 RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments to verify our approach of identifying the highly relevant parameters for any restricted class. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture and report the forgetting accuracy FAe for a randomly chosen excluded class. Please note that in this case, only the chosen class of CIFAR-100 is the restricted class, and all the remaining classes constitute the non-excluded classes. In order to show the effectiveness of our approach, we sort the absolute gradients of the parameters in the model (obtained through backpropagation for the excluded class augmented images) and choose a set of high-relevance and low-relevance parameters. We then prune/zero out these parameters and record the forgetting accuracy. Fig. 4 demonstrates that as we zero out the high-relevance parameters, the forgetting accuracy of the excluded class drops by a huge margin. It also shows that as we zero out the low-relevance parameters, there is only a minor change in the forgetting accuracy of the excluded class. This validates our approach for identifying the highly relevant parameters for the restricted classes.
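The pruning step of this ablation can be sketched as follows, assuming `scores` maps parameter names to the absolute-gradient tensors produced by the relevance-scoring step of Sec. 4.

```python
import torch

@torch.no_grad()
def prune_by_relevance(model, scores, k, highest=True):
    """Zero out the k weights with the highest (or lowest) relevance scores."""
    named = [(n, p) for n, p in model.named_parameters() if n in scores]
    flat = torch.cat([scores[n].flatten() for n, _ in named])
    idx = flat.topk(k, largest=highest).indices
    mask = torch.ones_like(flat)
    mask[idx] = 0.0
    offset = 0
    for n, p in named:
        cnt = p.numel()
        p.mul_(mask[offset:offset + cnt].view_as(p))  # zero selected weights
        offset += cnt
```

Measuring FAe after `prune_by_relevance(..., highest=True)` versus `highest=False` reproduces the high- versus low-relevance comparison of Fig. 4.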
9.2 SIGNIFICANCE OF THE COMPONENTS OF THE PROPOSED ERWP APPROACH
We perform ablations on the CIFAR-100 dataset using the ResNet-56 model to study the significance of the $\mathcal{L}_c^e$, $\mathcal{L}_c^{ne}$, and $\mathcal{L}_{kd}$ components of our proposed ERwP approach. Table 3 indicates that optimizing the restricted class relevant parameters using only $\mathcal{L}_c^{ne}$ cannot significantly remove the information regarding the restricted classes from the model. Applying $\mathcal{L}_c^{ne}$ along with $\mathcal{L}_c^e$ significantly reduces the forgetting accuracy FAe and forgetting prototype accuracy FPAe but also significantly reduces the constraint accuracy CAne. Finally, applying the $\mathcal{L}_{kd}$ loss along with $\mathcal{L}_c^{ne}$ and $\mathcal{L}_c^e$ significantly reduces FAe and FPAe while maintaining the constraint accuracy CAne very close to that of the original model.
9.3 ABLATION ON THE NUMBER OF EXCLUDED CLASSES
We report the experimental results of our approach for different splits of excluded and remaining classes of the CIFAR-100 dataset in Table 5. We observe that ERwP performs well across all the splits for both the ResNet-20 and ResNet-56 architectures.
9.4 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 5. We observe that as the training progresses, the constraint accuracy remains close to that of the original model, the forgetting accuracy keeps dropping until it reaches 0%, and the forgetting prototype accuracy keeps falling and approaches that of the FDR model.
10 CONCLUSION
In this paper, we present the “Restricted Category Removal from Model Representations with Limited Data” problem, in which the objective is to remove the information regarding a set of excluded/restricted classes from a trained deep learning model without hurting its predictive power for the remaining classes. We propose several baseline approaches as well as performance metrics for this setting. First, we propose a novel approach to identify the model parameters that are highly relevant to the restricted classes. Next, we propose a novel, efficient approach that optimizes these model parameters in order to remove the restricted class information and reuse these parameters for the remaining classes. We experimentally show that our approach significantly outperforms all the proposed baselines and performs similarly to the full data retraining model.
11 APPENDIX
11.1 PROCESS FOR SELECTING THE RESTRICTED CLASS RELEVANT PARAMETERS
First, we apply a data augmentation technique $f$, not used during training, to the images of the given restricted class. Next, we combine the predictions for these images and perform backpropagation. Finally, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters results in the maximum degradation of model performance on that restricted class. We use a process similar to binary search for automatically selecting the parameters with the highest absolute gradient. An automated script first creates a list of the parameters in each layer, sorts them in descending order of their gradient values, and checks whether zeroing out the weights of the first 5% of parameters from this list leads to near-zero accuracy for that class. If not, we double the number of parameters chosen earlier and repeat this process; if the accuracy is near zero, we repeat the process with half the number of parameters chosen earlier. Please note that this process is only for identifying the parameters relevant to the restricted classes, and their weights are restored after this process. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters.
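The doubling/halving search described above might look like the following sketch; the helpers `zero_out`, `restore`, and `class_accuracy`, the near-zero threshold, the iteration bound, and the termination rule are all assumptions for illustration.

```python
def select_relevant(sorted_params, model, eval_class, start_frac=0.05, max_iter=10):
    """sorted_params: one layer's parameters, sorted by |gradient| descending."""
    frac, best = start_frac, None
    for _ in range(max_iter):
        k = max(1, int(frac * len(sorted_params)))
        backup = zero_out(model, sorted_params[:k])          # prune candidates
        near_zero = class_accuracy(model, eval_class) < 1.0  # ~0% on the class
        restore(model, backup)                               # weights restored
        if near_zero:
            best, frac = sorted_params[:k], frac / 2         # try a smaller set
        else:
            frac *= 2                                        # need more parameters
    return best
```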
11.2 BASELINES
We propose several baseline models for the RCRMR-LD problem and compare our proposed approach with them. The baseline approaches are defined as follows:
Original model: It refers to the original model that is trained on the complete training set containing all the training examples from both the excluded and non-excluded classes. It represents the model that has not been modified by any technique to remove the excluded class information.
Baseline 1 - Weight Deletion (WD): It refers to the original model with a modified fully-connected classification layer. Specifically, we remove the weights corresponding to the excluded classes in the fully-connected classification layer so that it cannot classify the excluded classes.
Baseline 2 - Training from Scratch on Limited Non-Restricted Class data (TSLNRC): In this baseline, we train a new model from scratch using the limited training examples of only the nonexcluded classes. It uses the complete training schedule as the original model and only uses the classification loss for training the model.
Baseline 3 - Training from Scratch on Limited Non-Restricted Class data with KD (TSLNRCKD): This baseline is the same as baseline 2, but in addition to the classification loss, it also uses a knowledge distillation loss to ensure that the non-excluded class logits of the model (student) match that of the original model (teacher).
Baseline 4 - Training of Original model on Limited Non-Restricted Class data (TOLNRC): This baseline is the same as baseline 2, but the model is initialized with the weights of the original model instead of randomly initializing it.
Baseline 5 - Training of Original model on Limited Non-Restricted Class data with KD (TOLNRC-KD): This baseline is the same as baseline 4, but in addition to the classification loss, it also uses a knowledge distillation loss.
Baseline 6 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class (FOLMRCSC): In this baseline approach, we first replace all the excluded class labels in the limited training data with a new single excluded class label and then fine-tune the original model for a few epochs on the limited training data of both the excluded and remaining classes. For the examples from the excluded classes, the model is trained to predict the new single excluded class; for the examples from the remaining classes, the model is trained to predict the corresponding non-excluded classes (a small sketch of this label remapping is given after Baseline 9 below).
Baseline 7 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class with KD (FOLMRCSC-KD): This baseline is the same as baseline 6, but in
addition to the classification loss, it also uses a knowledge distillation loss to ensure that the nonexcluded class logits of the model (student) match that of the original model (teacher).
Baseline 8 - Fine-tuning of Original model on Limited Non-Restricted Class data (FOLNRC): In this baseline approach, we fine-tune the original model for a few epochs on the limited training data of the non-excluded/remaining classes. The model is trained to predict the corresponding non-excluded classes of the training examples.
Baseline 9 - Fine-tuning of Original model on Limited Non-Restricted Class data with KD (FOLNRC-KD): This baseline is the same as baseline 8, but in addition to the classification loss, it also uses a knowledge distillation loss.
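As a small illustration of Baselines 6 and 7, the label remapping they rely on can be sketched as follows; the class indices in the usage comment are illustrative assumptions.

```python
def remap_labels(labels, excluded, new_class_id):
    """Collapse all excluded-class labels into one new 'restricted' class."""
    return [new_class_id if y in excluded else y for y in labels]

# e.g., with classes 80-99 excluded on CIFAR-100 (illustrative indices):
# remapped = remap_labels(y_batch, excluded=set(range(80, 100)), new_class_id=100)
```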
11.3 DATASETS
For the RCRMR-LD problem setting, we modify the CIFAR-100 (Krizhevsky et al., 2009), CUB (Wah et al., 2011), and ImageNet-1k (Russakovsky et al., 2015) datasets. In order to simulate the RCRMR-LD problem setting with limited training data, we choose the last 20 classes of the CIFAR-100 dataset as the excluded classes and take only 10% of the training images of each class. Similarly, we choose the last 20 classes of the CUB dataset as the excluded classes with only 3 training images per class. For ImageNet-1k, we choose the last 100 classes as the excluded classes with 5% of the training images to simulate the limited data available for this problem setting.
11.4 IMPLEMENTATION DETAILS
In this section, we provide all the details required to reproduce our experimental results. We use the ResNet-20 (He et al., 2016), ResNet-56, and ResNet-164 architectures for the experiments on the CIFAR-100 dataset. We use the standard data augmentation methods of random cropping to a size of 32 × 32 (zero-padded on each side with four pixels before taking a random crop) and random horizontal flipping, which is standard practice for training a model on CIFAR-100. In order to obtain the original and FDR models for the CIFAR-100 dataset, we train the network for 300 epochs with a mini-batch size of 128 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 5 after every 50 epochs. For the CIFAR-100 experiments with ERwP using the ResNet-20, ResNet-56, and ResNet-164 architectures, we take learning rate = 1e-4, β = 10 and optimize the network for 10 epochs. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method.

For the experiments on the ImageNet dataset, we use the ResNet-18, ResNet-50, and MobileNet-V2 architectures. We use the standard data augmentation methods of random cropping to a size of 224 × 224 and random horizontal flipping, which is standard practice for training a model on ImageNet-1k. In order to obtain the original and FDR models for the ImageNet dataset, we train the network for 100 epochs with a mini-batch size of 256 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 10 after every 30 epochs. For evaluation, the validation images are subjected to center cropping of size 224 × 224. For the ImageNet-1k experiments (5% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 9e-5 using β = 200. For the ERwP experiments using the ResNet-18 architecture, we optimize the network for 10 epochs using β = 200 with an initial learning rate of 1.1e-4 and a learning rate of 1.1e-5 from the third epoch onward. In the case of the ERwP experiments with the MobileNet-V2 architecture, we optimize the network for 10 epochs using β = 400 with an initial learning rate of 1.5e-4 and a learning rate of 1.5e-5 from the third epoch onward. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method.

For the experiments on the CUB-200 dataset, we use the ResNet-50 architecture pre-trained on the ImageNet dataset. In order to obtain the original and FDR models for the CUB-200 dataset, we train the network for 50 epochs with a mini-batch size of 64 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-3. We choose the initial learning rate as 1e-2, and we decrease it by a factor of 10 after epochs 30 and 40. For the CUB-200 experiments (3 images per class, i.e., 10% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 1e-4 using β = 10. Since the available limited training data is only 10% of the entire CUB-200 dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method.
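As an illustration of the CIFAR-100 schedule above, a matching optimizer and step decay could be set up as follows (SGD with momentum 0.9, weight decay 1e-4, initial LR 0.1 divided by 5 every 50 epochs, 300 epochs); `train_one_epoch` is an assumed helper.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.2)

for epoch in range(300):
    train_one_epoch(model, optimizer)  # assumed training helper
    scheduler.step()
```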
In our proposed approach, we use κ = 2 for all the experiments. We use a popular PyTorch implementation¹ of knowledge distillation. We run all the experiments 3 times (using different random seeds) and report the average accuracy. We perform all the experiments using the PyTorch framework version 1.6.0 (Paszke et al., 2017) and Python 3.0. We use 4 GeForce GTX 1080 Ti graphics processing units for our experiments.
11.5 RCRMR-LD PROBLEM IN CUB-200 CLASSIFICATION
Table 6 reports the experimental results for different approaches to the RCRMR-LD problem on the CUB dataset using the ResNet-50 architecture. Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to that of the original model even though we use only 3 images per class for optimizing the model. It achieves close to 0% forgetting accuracy FAe and an FPAe that is significantly lower than that of the original model, by an absolute margin of 35.80%. Similar to the CIFAR-100 experiments, our ERwP approach outperforms all the baseline approaches. Further, ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CUB dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method, which is trained on the full training data for 50 epochs.
11.6 ABLATION EXPERIMENTS FOR β AND κ
We perform ablation experiments to identify the most suitable values for the hyper-parameters β and κ for our proposed ERwP. The ablation results in Tables 7, 8, validate our choice of hyper-parameter values considering the forgetting accuracy and the constraint accuracy of the resulting model.
11.7 EFFECT OF DIFFERENT DATA AUGMENTATIONS ON THE IDENTIFICATION OF CLASS RELEVANT MODEL PARAMETERS
We perform experiments to verify our approach of identifying the highly relevant parameters for any restricted class using various augmentation techniques (grayscale, vertical flip, rotation, and random affine augmentations). We chose the same restricted class of CIFAR-100 and used the ResNet-56 network for all the experiments. The results in Fig. 6 indicate that for all the compared data augmentation approaches, pruning/zeroing out the high-relevance parameters obtained using our approach results in a huge drop in the forgetting accuracy of the excluded class. Further, zeroing out the low-relevance parameters has only a minor impact on the forgetting accuracy of the excluded class.

¹https://github.com/peterliht/knowledge-distillation-pytorch/blob/master/model/net.py

Table 7: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes, using our proposed ERwP with different values of β.

Table 8: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes, using our proposed ERwP with different values of κ.
11.8 ABLATION EXPERIMENTS ON THE RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments with ERwP to check whether only 25% or 50% of the restricted class relevant parameters of each layer identified using our proposed procedure can be used for ERwP. We run each of these experiments for the same number of epochs on the CIFAR-100 dataset with the ResNet-56 network. We observe that the final FPAe falls from 68.65% to only 60.35% and 53.7%, respectively, for 25% and 50% of the restricted class relevant parameters of each layer, as compared to 47.84% when using all the restricted class relevant parameters per layer identified using our approach. The good performance of our approach is more evident in light of the performance of the FDR model, which achieves an FPAe of 45.40%. We provide this result as a reference to demonstrate that the 47.84% FPAe is due to the generalization power of the model and not due to restricted class information in the model. This shows that our approach effectively identifies the class-relevant parameters of the model for a given class.
11.9 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 7 for the CIFAR-100 dataset with 20 excluded classes using the ResNet-20 and ResNet-56 architectures. For both architectures, we observe that as the training progresses, ERwP maintains the constraint accuracy close to that of the original model and forces the forgetting accuracy to drop to 0%. ERwP also forces the forgetting prototype accuracy to keep dropping, making it similar to that of the FDR model.
11.10 FINETUNING RESTRICTED CLASS RELEVANT PARAMETERS ON REMAINING CLASSES
In our experimental results, we demonstrated that fine-tuning the model on the limited training data of the non-excluded classes cannot sufficiently remove the excluded class information from the model. We also perform an ablation experiment to demonstrate that fine-tuning only the restricted class relevant parameters using the limited training data of the non-excluded classes is likewise not effective in sufficiently removing the excluded class information from the model. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture. We observe that the constraint accuracy CAne, the forgetting accuracy FAe, and the forgetting prototype accuracy FPAe remain almost the same even in this case. Therefore, fine-tuning only on the remaining class data cannot sufficiently remove the excluded class information from the model representations.
11.11 EFFECT OF USING THE PROPOSED ERWP APPROACH WHEN THE ENTIRE DATASET IS AVAILABLE
We perform ablation experiments to demonstrate the performance of our proposed ERwP approach when the entire training data is available. We perform these experiments on the CIFAR-100 dataset using ResNet-20 and ResNet-56. We observe experimentally that for both the ResNet-20 and ResNet-56 experiments using ERwP, the forgetting accuracy FAe is 0% and the constraint accuracy CAne matches that of the original model. Further, the gap between the forgetting prototype accuracy FPAe of ERwP and the FDR model reduces from 3.86% (for limited data) to 2.79% for ResNet-20. Similarly, the gap reduces from 2.44% (for limited data) to 1.65% for ResNet-56. However, ERwP requires only 2-3 epochs of optimization (∼100-150× faster than the FDR model) to achieve this performance when trained on the entire dataset. This makes it significantly faster than any approach that trains on the entire dataset.
11.12 QUALITATIVE ANALYSIS
In order to analyze the effect of removing the excluded class information from the model using our proposed ERwP approach, we study the class activation map visualizations (Selvaraju et al., 2017) of the model before and after applying ERwP. We observe in Fig. 8 that for images from the excluded classes, the model's region of attention becomes scattered after applying ERwP, unlike for images from the remaining classes.
2. What are the strengths and weaknesses of the proposed approach in removing restricted class information from model parameters?
3. How does the reviewer assess the clarity and quality of the paper's content, particularly in terms of concept repetition and lack of clear descriptions?
4. What are some examples from real-world settings that could help illustrate the need for the proposed solution?
5. How do the empirical results on the CIFAR-100 and ImageNet-1K datasets support the effectiveness of the proposed approach, and what are some potential directions for future research?
6. How might the notation used in the paper for excluded and non-excluded classes be improved to avoid confusion?

Summary Of The Paper
The paper tackles the problem of restricted class unavailability after a deep learning model has already been trained on such restricted classes and the aim is to remove any information pertaining to the restricted classes from the model parameters so that the model will not be able to correctly classify the restricted classes in the future. The approach presented includes identifying the model parameters that are most relevant to the restricted classes and removing the restricted class information from these parameters (gradient ascent) while ensuring that these parameters can still be used for accurately classifying other non-restricted classes. With the need to correctly assess the utility of the proposed approach, several baseline methods have been proposed. Empirical results on the CIFAR-100 and ImageNet-1K datasets illustrate how the proposed approach can be used.
Review
Positives:
The paper studies an important problem of tackling with restricted classes.
The presented approach displays an ability to remove restricted class information from model parameters.
Negatives:
The paper is not very clearly written: concepts are repeated several times, and some others, mentioned below, are not clearly described.
While the problem is interesting indeed, the motivation for the proposed solution is not clearly presented. Instead of repeating the ideas, it would be helpful to have a few clear examples that illustrate the need to solve this problem, as well as a clear description of the behavior of the said approach. While an example about the company logo is stated, it would be helpful to have a few more clear examples from real-world settings to help the reader. One such example: a model trained to predict which treatment would be beneficial for a patient would need to be altered if the treatment cannot be offered in the future due to ethical or resource constraints.
While empirical results on the CIFAR-100 and ImageNet-1K datasets seem promising, it would be helpful to study this on real-world datasets. Issues such as generalizability due to distribution shifts in the future and fairness considerations when certain labels are dropped are potential directions.
Additional comments:
The notation for the excluded and non-excluded classes is a bit confusing, as $C^e$ and $C^r$ can both mean excluded or restricted. I would suggest changing this.
The objective of the RCRMR-LD problem is to remove the information regarding the restricted classes from the network representations of all layers using the limited training data available without
affecting the ability of the model to identify the remaining classes. If we have access to the full training data, then we can simply exclude the restricted class examples from the training data and perform a full training of the model from scratch using the abundant data (FDR - full data retraining). However, the RCRMR-LD problem setting is based on the scenario that the directive to exclude the restricted classes is received in the future after the model has already been trained on the full data and now only a limited amount of training data is available to carry out this process. Simply training the network from scratch on only the limited training data of the remaining classes will result in severe overfitting and significantly affect the model performance (Baseline 2, as shown in Tables 1, 6).
Another possible solution to this problem is to remove the weights of the fully-connected classification layer of the network corresponding to the excluded classes such that it can no longer classify the excluded classes. However, this approach suffers from a serious problem. Since, in this approach, we only remove some of the weights of the classification layer and the rest of the model remains unchanged, it still contains the information required for recognizing the excluded classes. This information can be easily accessed through the features that the model extracts from the images. Therefore, we can use these features for performing classification. In this paper, we use a nearest prototype-based classifier to demonstrate that the model features still contain information regarding the restricted classes. Specifically, we use the model features of the examples from the limited training data to compute the average class prototype for each class and create a nearest class prototype-based classifier using them. Next, for any given test image, we extract its features using the model and then find the class prototype closest to the given test image. This nearest class prototype-based classifier performs close to the original fully-connected classifier on the excluded classes as shown in Tables 1, 6 (Baseline 1). Therefore, even after using this approach, the resulting model still contains information regarding the restricted classes. Another possible approach can be to apply the standard fine-tuning approach to the model using the limited available training data of the remaining classes (Baseline 8). However, fine-tuning on such limited training data is not able to sufficiently remove the restricted class information from the model representations (see Tables 1, 6), and aggressive fine-tuning on the limited training data may result in overfitting.
Considering the problems faced by the naive approaches mentioned above, we propose a novel “Efficient Removal with Preservation” (ERwP) approach to address the RCRMR-LD problem. First, we propose a novel technique to identify the model parameters that are highly relevant to the restricted classes, and to the best of our knowledge, there are no existing prior works for finding such classspecific relevant parameters. Next, we propose a novel technique that optimizes the model on the limited available training data in such a way that the restricted class information is discarded from the restricted class relevant parameters, and these parameters are reused for the remaining classes.
To the best of our knowledge, this is the first work that addresses the RCRMR-LD problem. Therefore, we also propose several baseline approaches for this problem (see Sec. 11.2). However, our proposed approach significantly outperforms all the proposed baseline approaches. Our proposed approach requires very few epochs to address the RCRMR-LD problem and is, therefore, very fast (∼ 200 times faster than the full data retraining model for the ImageNet dataset) and efficient. The model obtained after applying our approach forgets the excluded classes to such an extent that it behaves as though it was never trained on examples from the excluded classes. The performance of our model is very similar to the full data retraining (FDR) model (see Sec. 8.1, Fig. 5). We also propose the performance metrics needed to evaluate the performance of any approach for the RCRMR-LD problem. We perform experiments on several datasets to demonstrate the efficacy of our method.
2 PROBLEM SETTING
In this work, we present the restricted category removal from model representations with limited data (RCRMR-LD) problem setting, in which a deep learning model Mo trained on a specific dataset has to be modified to exclude information regarding a set of restricted/excluded classes from all layers of the deep learning model without affecting its identification power for the remaining classes (see Fig. 1). The classes that need to be excluded are referred to as the restricted/excluded classes. Let {Ce1 , Ce2 , ..., CeNe} be the restricted/excluded classes, where Ne refers to the number of excluded classes. The remaining classes of the dataset are the remaining/non-excluded classes. Let {Cne1 , Cne2 , ..., CneNne} be the non-excluded classes, whereNne refers to the number of remaining/nonexcluded classes. Additionally, we only have access to a limited amount of training data for the restricted classes and the remaining classes, for carrying out this process. Therefore, any approach for addressing this problem can only utilize this limited training data.
3 RCRMR-LD PROBLEM IN REAL WORLD SCENARIOS
A real-world scenario where our proposed RCRMR-LD problem can arise is federated learning (McMahan et al., 2017). In the federated learning setting, there are multiple collaborators that have a part of the training data stored locally, and a model is trained collaboratively using these private data without sharing or collating the data due to privacy concerns. Suppose organization A has a part of the training data, and there are other collaborators that have other parts of the training data for the same classes. Organization A collaboratively trains a model with other collaborators using federated learning. After the model has been trained, a few classes may become restricted in the future due to some ethical or privacy concerns, and these classes should be removed from the model. However, the other collaborators may not be available or may charge a huge amount of money for collaborating again to train a fresh model from scratch. In this case, organization A does not have access to the full training data of the non-excluded/remaining classes that it can use to re-train a model from scratch in order to exclude the restricted classes information. This clearly shows that the RCRMR-LD problem is possible in federated learning.
Another real-world scenario is the incremental learning setting (Rebuffi et al., 2017; Kemker & Kanan, 2018), where the model receives training data in the form of sequentially arriving tasks. Each task contains a new set of classes. During a training session t, the model receives task t for training and cannot access the full data of the previous tasks. Instead, the model has access to very few exemplars of the classes in the previous tasks. Suppose that, before the model is trained on task t, it is noticed that some classes from a previous task (< t) have to be removed from the model because those classes have become restricted due to privacy or ethical concerns. In this case, only a limited number of exemplars are available for all these previous classes (restricted and remaining). This demonstrates that the RCRMR-LD problem is also possible in the incremental learning setting. We experimentally demonstrate in Sec. 8.3 how our approach can address the RCRMR-LD problem in the incremental learning setting.
4 PROPOSED METHOD
Let $B$ refer to a mini-batch (of size $S$) from the available limited training data; $B$ contains training datapoints from the restricted/excluded classes $\{(x^e_i, y^e_i)\} = \{(x^e_1, y^e_1), \ldots, (x^e_{S_e}, y^e_{S_e})\}$ and from the remaining/non-excluded classes $\{(x^{ne}_j, y^{ne}_j)\} = \{(x^{ne}_1, y^{ne}_1), \ldots, (x^{ne}_{S_{ne}}, y^{ne}_{S_{ne}})\}$. Here, $(x^e_i, y^e_i)$ refers to a training datapoint from the excluded classes, where $x^e_i$ is an image, $y^e_i$ is the corresponding label, and $y^e_i \in \{C^e_1, C^e_2, \ldots, C^e_{N_e}\}$. $(x^{ne}_j, y^{ne}_j)$ refers to a training datapoint from the non-excluded classes, where $x^{ne}_j$ is an image, $y^{ne}_j$ is the corresponding label, and $y^{ne}_j \in \{C^{ne}_1, C^{ne}_2, \ldots, C^{ne}_{N_{ne}}\}$. Here, $S_e$ and $S_{ne}$ refer to the number of training examples in the mini-batch from the excluded and non-excluded classes, respectively, such that $S = S_e + S_{ne}$. $N_e$ and $N_{ne}$ refer to the number of excluded and non-excluded classes, respectively. Let $M$ refer to the deep learning model being trained using our approach, and let $M_o$ be the original trained deep learning model.
In a trained model, some of the parameters may be highly relevant to the restricted classes, and the performance of the model on the restricted classes mainly depends on such highly relevant parameters. Therefore, in our approach, we focus on removing the excluded class information from these restricted class relevant parameters. Since the model is trained on all the classes jointly, the parameters are shared across the different classes. Therefore, identifying these class-specific relevant parameters is very difficult. Let us consider a model that is trained on color images of a class. If we now train it on grayscale images of the class, then the model has to learn to identify these new images. In order to do so, the parameters relevant to that class will receive large gradient updates as compared to the other parameters (see Sec. 9.1). We propose a novel approach for identifying the relevant parameters for the restricted classes using this idea. For each restricted class, we choose the training images belonging to that class from the limited available training data. Next, we apply a grayscale data augmentation technique/transformation $f$ to these images so that they become different from the images that the original model was trained on (assuming that the original model has not been trained on grayscale images). We can also use other data augmentation techniques that are not seen during the training process of the original model and that do not change the class of the image (refer to Sec. 11.7 in the appendix). Next, we combine the predictions for each training image into a single average prediction and perform backpropagation. During the backpropagation, we study the gradients of all the parameters in each layer of the model. Accordingly, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters results in the maximum degradation of the model performance on that restricted class. We provide a detailed description of the process for identifying the restricted class relevant parameters in Sec. 11.1 of the appendix. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters $\Theta^{ex}_{rel}$ (see Fig. 2). Please note that we use this process only to identify $\Theta^{ex}_{rel}$, and we do not update the model parameters during this step.
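A minimal sketch of this relevance-scoring step is given below, assuming a standard PyTorch classifier and torchvision's grayscale transform; the function name and the exact aggregation of scores are our own simplifications, not the authors' released code.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def restricted_class_relevance(model, images, label):
    """Score every parameter by the absolute gradient it receives when the model is
    asked to classify unseen (grayscale-augmented) images of one restricted class."""
    model.zero_grad()
    gray = torch.stack([TF.rgb_to_grayscale(img, num_output_channels=3) for img in images])
    avg_logits = model(gray).mean(dim=0, keepdim=True)      # single averaged prediction
    target = torch.tensor([label], device=avg_logits.device)
    F.cross_entropy(avg_logits, target).backward()
    return {name: p.grad.abs().clone()                      # relevance score per weight
            for name, p in model.named_parameters() if p.grad is not None}
```

In the full method, the highest-scoring weights of each layer are then selected per restricted class (see Sec. 11.1), and their union over all restricted classes forms $\Theta^{ex}_{rel}$.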
Pruning the relevant parameters for a restricted class can severely impact the performance of the model for that class (see Sec. 9.1). However, this may also degrade the performance of the model on the non-excluded classes because the parameters are shared across multiple classes. Therefore, we cannot address the RCRMR-LD problem by pruning the relevant parameters of the excluded classes. Fine-tuning these parameters on the limited remaining class data will also not be able to sufficiently remove the restricted class information from the model (see Sec. 11.10 in the appendix). Based on this, we propose to address the RCRMR-LD problem by optimizing the relevant parameters of the restricted classes so as to remove the restricted class information from them and to reuse them for the remaining classes.
After identifying the restricted class relevant parameters, our ERwP approach uses a classification loss based on the cross-entropy loss function to optimize the restricted class relevant parameters of the model on each mini-batch (see Fig. 3). We know that the gradient ascent optimization algorithm can be used to maximize a loss function and thereby encourage the model to perform badly on the given input. Therefore, we use gradient ascent optimization on the classification loss for the limited restricted class training examples to remove the information regarding the restricted classes from $\Theta^{ex}_{rel}$. We achieve this by multiplying the classification loss for the augmented training examples from the excluded classes by a constant factor of $-1$. We also optimize $\Theta^{ex}_{rel}$ using gradient descent optimization on the classification loss for the limited remaining class training examples, in order to reuse these parameters for the remaining classes. We validate this design through various ablation experiments, as shown in Sec. 9.2. The classification losses for the examples from the excluded and non-excluded classes and the overall classification loss for each mini-batch are defined as follows.
$$\mathcal{L}^e_c = \sum_{i=1}^{S_e} -1 \cdot \ell\left(y^e_i, y^{e*}_i\right) \qquad (1)$$

$$\mathcal{L}^{ne}_c = \sum_{j=1}^{S_{ne}} \ell\left(y^{ne}_j, y^{ne*}_j\right) \qquad (2)$$

$$\mathcal{L}_c = \frac{1}{S}\left(\mathcal{L}^e_c + \mathcal{L}^{ne}_c\right) \qquad (3)$$

where $y^{e*}_i$ and $y^{ne*}_j$ refer to the predicted class labels for $x^e_i$ and $x^{ne}_j$, respectively, and $\ell(\cdot, \cdot)$ refers to the cross-entropy loss function. $\mathcal{L}^e_c$ and $\mathcal{L}^{ne}_c$ refer to the classification losses for the examples from the excluded and non-excluded classes in the mini-batch, respectively, and $\mathcal{L}_c$ refers to the overall classification loss for each mini-batch.
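Eqs. (1)-(3) can be sketched in PyTorch as follows; multiplying the excluded-class loss by $-1$ turns the usual gradient descent into gradient ascent on those examples (the function name is ours).

```python
import torch.nn.functional as F

def classification_loss(model, x_e, y_e, x_ne, y_ne):
    """Eq. (3): gradient ascent on excluded-class examples (loss scaled by -1, Eq. (1))
    plus gradient descent on remaining-class examples (Eq. (2)), averaged over the batch."""
    loss_e = -F.cross_entropy(model(x_e), y_e, reduction='sum')    # Eq. (1)
    loss_ne = F.cross_entropy(model(x_ne), y_ne, reduction='sum')  # Eq. (2)
    return (loss_e + loss_ne) / (x_e.size(0) + x_ne.size(0))       # Eq. (3), S = S_e + S_ne
```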
Since all the network parameters were jointly trained on all the classes (restricted and remaining), the restricted class relevant parameters also contain information relevant to the remaining classes. Applying the above process alone will therefore still harm the model’s predictive power for the non-excluded classes (as shown in Sec. 9.2, Table 3), because the gradient ascent optimization strategy also erases some of the relevant information regarding the remaining classes. Further, applying $\mathcal{L}^{ne}_c$ on the limited training examples of the remaining classes will lead to overfitting and will not be effective enough to fully preserve the model performance on the remaining classes. In order to ensure that the model’s predictive power for the non-excluded classes does not change, we use a knowledge distillation-based regularization loss. Knowledge distillation (Hinton et al., 2014) ensures that the predictive power of the teacher network is replicated in the student network. In this problem setting, we want the final model to replicate the predictive power of the original model for the remaining classes. Therefore, given any training example, we use the knowledge distillation-based regularization loss to ensure that the output logits produced by the model corresponding to only the non-excluded classes remain the same as those produced by the original model. We apply the knowledge distillation loss to the limited training examples from both the excluded and remaining classes, to preserve the non-excluded class logits of the model for any input image. We validate this regularization loss through ablation experiments, as shown in Table 3. We use the original model $M_o$ (before applying ERwP) as the teacher network and the current model $M$ being processed by ERwP as the student network for the knowledge distillation process. Please note that the optimization for this loss is also carried out only for the restricted class relevant parameters of the model. Let $KD$ refer to the knowledge distillation loss function. It computes the Kullback-Leibler (KL) divergence between the soft predictions of the teacher and the student networks and can be defined as follows:
$$KD(p_s, p_t) = KL\left(\sigma(p_s), \sigma(p_t)\right) \qquad (4)$$

where $\sigma(\cdot)$ refers to the softmax activation function that converts the logit $a_i$ of each class $i$ into a probability by comparing $a_i$ with the logits of the other classes $a_j$, i.e., $\sigma(a_i) = \frac{\exp(a_i/\kappa)}{\sum_j \exp(a_j/\kappa)}$, where $\kappa$ refers to the temperature (Hinton et al., 2014). $KL$ refers to the KL-divergence function, and $p_s$ and $p_t$ refer to the logits produced by the student network and the teacher network, respectively.
The knowledge distillation-based regularization losses in our approach are defined as follows.
$$\mathcal{L}^e_{kd} = \sum_{i=1}^{S_e} KD\left(M(x^e_i)[C^{ne}],\; M_o(x^e_i)[C^{ne}]\right) \qquad (5)$$

$$\mathcal{L}^{ne}_{kd} = \sum_{j=1}^{S_{ne}} KD\left(M(x^{ne}_j)[C^{ne}],\; M_o(x^{ne}_j)[C^{ne}]\right) \qquad (6)$$

$$\mathcal{L}_{kd} = \frac{1}{S}\left(\mathcal{L}^e_{kd} + \mathcal{L}^{ne}_{kd}\right) \qquad (7)$$

where $M(\cdot)[C^{ne}]$ and $M_o(\cdot)[C^{ne}]$ refer to the output logits corresponding to the remaining classes produced by $M$ and $M_o$, respectively, for an input that can be either $x^e_i$ or $x^{ne}_j$. $\mathcal{L}^e_{kd}$ and $\mathcal{L}^{ne}_{kd}$ refer to the knowledge distillation-based regularization losses for the examples from the excluded and non-excluded classes, respectively, and $\mathcal{L}_{kd}$ refers to the overall knowledge distillation-based regularization loss for each mini-batch.
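A sketch of Eqs. (4)-(7) is given below, assuming `non_excluded_idx` is a tensor of the remaining class indices; the helper names are ours, and the temperature default follows the paper's setting of $\kappa = 2$ (Sec. 11.4).

```python
import torch
import torch.nn.functional as F

def kd_term(student_logits, teacher_logits, non_excluded_idx, kappa=2.0):
    """Eq. (4) summed over a batch: KL between temperature-softened distributions
    over the non-excluded class logits only."""
    s = student_logits[:, non_excluded_idx] / kappa
    t = teacher_logits[:, non_excluded_idx] / kappa
    per_example = F.kl_div(F.log_softmax(s, dim=1), F.softmax(t, dim=1),
                           reduction='none').sum(dim=1)
    return per_example.sum()

def kd_regularizer(model, model_o, x_e, x_ne, non_excluded_idx, kappa=2.0):
    """Eqs. (5)-(7): the frozen original model M_o serves as the teacher."""
    with torch.no_grad():
        t_e, t_ne = model_o(x_e), model_o(x_ne)
    l_e = kd_term(model(x_e), t_e, non_excluded_idx, kappa)        # Eq. (5)
    l_ne = kd_term(model(x_ne), t_ne, non_excluded_idx, kappa)     # Eq. (6)
    return (l_e + l_ne) / (x_e.size(0) + x_ne.size(0))             # Eq. (7)
```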
The total loss $\mathcal{L}_{erwp}$ of our approach for each mini-batch is defined as follows:

$$\mathcal{L}_{erwp} = \mathcal{L}_c + \beta\,\mathcal{L}_{kd} \qquad (8)$$

where $\beta$ is a hyper-parameter that controls the contribution of the knowledge distillation-based regularization loss. We use this loss to fine-tune the model for only a very few epochs.
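Putting the pieces together, a sketch of the ERwP fine-tuning loop is shown below. It reuses the `classification_loss` and `kd_regularizer` helpers sketched above and assumes the relevance step produced one binary mask per parameter tensor marking $\Theta^{ex}_{rel}$; masking the gradients confines the update of Eq. (8) to those weights. The defaults mirror the CIFAR-100 setting of Sec. 11.4 ($\beta = 10$, learning rate 1e-4, 10 epochs), but the loop itself is only an illustration.

```python
import torch

def erwp_finetune(model, model_o, loader, relevance_masks, non_excluded_idx,
                  beta=10.0, lr=1e-4, epochs=10):
    """Optimize only the restricted-class relevant weights with Eq. (8)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model_o.eval()                                      # frozen teacher M_o
    for _ in range(epochs):
        for x_e, y_e, x_ne, y_ne in loader:             # mini-batch split by class group
            loss = classification_loss(model, x_e, y_e, x_ne, y_ne) \
                   + beta * kd_regularizer(model, model_o, x_e, x_ne, non_excluded_idx)
            opt.zero_grad()
            loss.backward()
            for name, p in model.named_parameters():    # keep only the Θ^ex_rel updates
                if p.grad is not None:
                    p.grad.mul_(relevance_masks[name])
            opt.step()
    return model
```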
5 RELATED WORK
Pruning involves removing redundant and unimportant weights (Carreira-Perpinán & Idelbayev, 2018; Dong et al., 2017; Guo et al., 2016; Han et al., 2015a;b; Tung & Mori, 2018; Zhang et al., 2018) or filters (He et al., 2019a; 2018; 2019b;c; Li et al., 2016) from a deep learning model without affecting the model performance. In contrast, our approach requires identifying class-specific relevant parameters, and therefore, these pruning techniques cannot be applied to our problem. In the incremental learning setting (Douillard et al., 2020; Hou et al., 2019; Tao et al., 2020; Yu et al., 2020; Liu et al., 2021), the objective is to preserve the predictive power of the model for previously seen classes while learning a new set of classes. In contrast, our proposed RCRMR-LD problem setting involves removing the information regarding specific classes from the pre-trained model while preserving the capacity of the model to identify the remaining classes. Privacy-preserving deep learning (Nan & Tao, 2020; Louizos et al., 2015; Edwards & Storkey, 2015; Hamm, 2017) involves learning representations that incorporate features from the data relevant to the given task and ignore sensitive information (such as the identity of a person). In contrast, the objective of the RCRMR-LD problem setting is to achieve class-level privacy, i.e., if a class is declared private/restricted, then all information about this class should be removed from the model trained on it, without affecting its ability to identify the remaining classes. The authors in (Ginart et al., 2019) propose an approach to delete individual data points from trained machine learning models such as clustering models. In contrast, RCRMR-LD involves removing the information of a set of classes from all layers of a deep learning model. Therefore, the approach proposed in (Ginart et al., 2019) cannot be applied to the RCRMR-LD problem setting.
6 BASELINES
We propose 9 baseline models for the RCRMR-LD problem and compare our proposed approach with them. Baseline 1 involves deleting the weights of the fully-connected classification layer corresponding to the excluded classes. Baselines 2, 3, 4, and 5 involve training the model on the limited training data of the remaining classes. Baselines 6, 7, 8, and 9 involve fine-tuning the model on the available limited training data. Please refer to Sec. 11.2 in the appendix for details about the baselines.
7 PERFORMANCE METRICS
In the RCRMR-LD problem setting, we propose three performance metrics to validate the performance of any method: forgetting accuracy (FAe), forgetting prototype accuracy (FPAe), and constraint accuracy (CAne). The forgetting accuracy refers to the fully-connected classification layer accuracy of the model for the excluded classes. The forgetting prototype accuracy refers to the nearest class prototype-based classifier accuracy of the model for the excluded classes. CAne refers to the fully-connected classification layer accuracy of the model for the non-excluded classes.
In order to judge any approach on the basis of these metrics, we proceed in the following order. First, we analyze the constraint accuracy (CAne) of the model produced by the given approach to verify whether the approach has preserved the prediction power of the model for the non-excluded classes. The CAne of the model should be close to that of the original model. If this condition is not satisfied, then the approach is not suitable for this problem, and we need not analyze the other metrics; if the constraint accuracy is not maintained, the overall usability of the model is hurt significantly. Next, we analyze the forgetting accuracy (FAe) of the model to verify whether the excluded class information has been removed from the model at the classifier level. The FAe of the model should be as close to 0% as possible. Finally, we analyze the forgetting prototype accuracy (FPAe) of the model to verify whether the excluded class information has been removed from the model at the feature level. The FPAe of the model should be significantly less than that of the original model. However, the FPAe will not become zero, since any trained model learns to extract meaningful features, which help the nearest class prototype-based classifier achieve some non-negligible accuracy even on the excluded classes. Therefore, for a better analysis of the level of forgetting of the excluded classes at the feature level, we compare the FPAe of the model with the FPAe of the FDR model. The FDR model is a good candidate for this analysis since it has not been trained on the excluded classes (it is only trained on the complete dataset of the remaining classes), and it still achieves a non-negligible performance on the excluded classes (see Sec. 8.1). However, it should be noted that this comparison is only for analysis, and the comparison is not fair, since the FDR model needs to train on the entire dataset (except the excluded classes).
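For reference, a minimal sketch of the nearest class prototype-based classifier used for FPAe is given below; `feature_extractor` stands for the model without its final classification layer, and the names are ours.

```python
import torch

@torch.no_grad()
def prototype_accuracy(feature_extractor, support_loader, test_loader, classes):
    """Nearest class prototype accuracy: prototypes are mean features of the
    limited training data; test images are assigned to the closest prototype."""
    feats, labels = [], []
    for x, y in support_loader:
        feats.append(feature_extractor(x))
        labels.append(y)
    feats, labels = torch.cat(feats), torch.cat(labels)
    protos = torch.stack([feats[labels == c].mean(dim=0) for c in classes])

    class_ids = torch.tensor(classes)
    correct, total = 0, 0
    for x, y in test_loader:
        dists = torch.cdist(feature_extractor(x), protos)   # Euclidean distances
        pred = class_ids[dists.argmin(dim=1)]
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```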
8 EXPERIMENTS
We have reported the experimental results for the CIFAR-100 and ImageNet-1k datasets in this section. We have provided the results on the CUB-200 dataset in the appendix. Please refer to the appendix for the details regarding the dataset and implementation.
8.1 CIFAR-100 RESULTS
We report the performance of different baselines and our proposed ERwP method on the RCRMR-LD problem using the CIFAR-100 dataset with different architectures in Table 1. We observe that baseline 1 (weight deletion) achieves high constraint accuracy CAne and 0% forgetting accuracy FAe, but its forgetting prototype accuracy FPAe remains the same as that of the original model for all three architectures, i.e., ResNet-20/56/164. Therefore, baseline 1 fails to remove the excluded class information from the model at the feature level. Baseline 2 is not able to preserve the constraint accuracy CAne even though it performs full training on the limited remaining class data. Baseline 3 achieves a higher CAne than baseline 2, but its constraint accuracy is still too low. Baselines 4 and 5 demonstrate significantly better constraint accuracy than baselines 2 and 3, but their constraint accuracy is still significantly lower than that of the original model (except baseline 5 for ResNet-20). Baseline 5 with ResNet-20 maintains the constraint accuracy and achieves 0% forgetting accuracy FAe, but its FPAe is still significantly high, and it is therefore unable to remove the excluded class information from the model at the feature level. The fine-tuning based baselines 6 and 7 are able to significantly reduce the forgetting accuracy FAe, but their constraint accuracy CAne drops significantly. The fine-tuning based baselines 8 and 9 only fine-tune the model on the limited remaining class data, and as a result they are not able to sufficiently reduce either the forgetting accuracy FAe or the forgetting prototype accuracy FPAe.
Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to that of the original model for all three architectures. It achieves close to 0% FAe. Further, it achieves a significantly lower FPAe than the original model. Specifically, the FPAe of our approach is lower than that of the original model by absolute margins of 17.19%, 20.81%, and 20.17% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. The FPAe for the FDR model is 44.20%, 45.40%, and 51.85% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. Therefore, the FPAe of our approach is close to that of the FDR model, within absolute margins of 3.86%, 2.44%, and 4.38% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. Our ERwP approach thus makes the model behave similar to the FDR model even though it was trained on only limited data from the excluded and remaining classes. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method, which is trained on the full training data for 300 epochs.

Table 2: Experimental results for the RCRMR-LD problem on the ImageNet-1k dataset.

Model | Method | Top-1 FAe | Top-1 CAne | Top-5 FAe | Top-5 CAne
Res-18 | Original | 69.76% | 69.76% | 89.58% | 89.02%
Res-18 | ERwP | 0.28% | 69.13% | 1.01% | 88.93%
Res-50 | Original | 76.30% | 76.11% | 93.04% | 92.84%
Res-50 | ERwP | 0.25% | 75.45% | 2.55% | 92.39%
Mob-V2 | Original | 72.38% | 70.83% | 91.28% | 90.18%
Mob-V2 | ERwP | 0.17% | 70.81% | 0.81% | 89.95%
8.2 IMAGENET RESULTS
Table 2 reports the experimental results for different approaches to the RCRMR-LD problem over the ImageNet-1k dataset using the ResNet-18, ResNet-50, and MobileNet-V2 architectures. Our proposed ERwP approach achieves a top-1 constraint accuracy CAne that is very close to that of the original model, within absolute margins of 0.63%, 0.66%, and 0.02% for the ResNet-18, ResNet-50, and MobileNet-V2 architectures, respectively. It achieves close to 0% top-1 forgetting accuracy FAe for all three architectures. Therefore, our approach performs well even on the large-scale ImageNet-1k dataset. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method, which is trained on the full data for 100 epochs.
8.3 RCRMR-LD PROBLEM IN INCREMENTAL LEARNING
In this section, we experimentally demonstrate how the RCRMR-LD problem in the incremental learning setting is addressed using our proposed approach. We consider an incremental learning setting on the CIFAR-100 dataset in which each task contains 20 classes. We use the BIC (Wu et al., 2019) method for incremental learning on this dataset. The exemplar memory size is fixed at 2000, as per the setting in (Wu et al., 2019). In this setting, there are 5 tasks. Let us assume that the model (M4) has already been trained on 4 tasks (80 classes), and we are in the fifth training session. Suppose, at this stage, it is noticed that all the classes in the first task (20 classes) have become restricted and need to be removed before the model is trained on task 5. However, we only have a limited number of exemplars of the 80 classes seen so far, i.e., 2000/80 = 25 per class. We apply our proposed approach to the model obtained after training session 4, and the results are reported in Table 4. The results indicate that our approach modified the model obtained after session 4 such that the forgetting accuracy of the restricted classes approaches 0% and the constraint accuracy of the remaining classes is not affected. In fact, the modified model behaves as if it was never trained on the classes from task 1. We can now perform the incremental training of the modified model on task 5.
9 ABLATION STUDIES
9.1 RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments to verify our approach of identifying the highly relevant parameters for any restricted class. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture and report the forgetting accuracy FAe for a randomly chosen excluded class. Please note that in this case, only the chosen class of CIFAR-100 is the restricted class, and all the remaining classes constitute the non-excluded classes. In order to show the effectiveness of our approach, we sort the absolute gradients of the parameters in the model (obtained through backpropagation for the excluded class augmented images) and choose a set of high relevance and a set of low relevance parameters. We then prune/zero out these parameters and record the forgetting accuracy. Fig. 4 demonstrates that as we zero out the high relevance parameters, the forgetting accuracy of the excluded class drops by a huge margin. It also shows that as we zero out the low relevance parameters, there is only a minor change in the forgetting accuracy of the excluded class. This validates our approach for identifying the highly relevant parameters for the restricted classes.
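A sketch of this pruning step is given below (intended to run on a copy of the model, since the zeroed weights are not restored here); `relevance_scores` are the absolute-gradient scores computed as in Sec. 4, and the function name is ours.

```python
import torch

@torch.no_grad()
def zero_by_relevance(model, relevance_scores, fraction=0.05, highest=True):
    """Zero out a fraction of weights per parameter tensor, picking either the
    highest- or lowest-relevance weights, then re-evaluate FAe on the excluded class."""
    for name, p in model.named_parameters():
        s = relevance_scores[name].flatten()
        k = max(1, int(fraction * s.numel()))
        idx = s.topk(k, largest=highest).indices   # high- or low-relevance weights
        p.data.view(-1)[idx] = 0.0
```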
9.2 SIGNIFICANCE OF THE COMPONENTS OF THE PROPOSED ERWP APPROACH
We perform ablations on the CIFAR-100 dataset using the ResNet-56 model to study the significance of the $\mathcal{L}^e_c$, $\mathcal{L}^{ne}_c$, and $\mathcal{L}_{kd}$ components of our proposed ERwP approach. Table 3 indicates that optimizing the restricted class relevant parameters using only $\mathcal{L}^{ne}_c$ cannot significantly remove the information regarding the restricted classes from the model. Applying $\mathcal{L}^{ne}_c$ along with $\mathcal{L}^e_c$ significantly reduces the forgetting accuracy FAe and the forgetting prototype accuracy FPAe but also significantly reduces the constraint accuracy CAne. Finally, applying the $\mathcal{L}_{kd}$ loss along with $\mathcal{L}^{ne}_c$ and $\mathcal{L}^e_c$ significantly reduces FAe and FPAe while maintaining the constraint accuracy CAne very close to that of the original model.
9.3 ABLATION ON THE NUMBER OF EXCLUDED CLASSES
We report the experimental results for our approach for different splits of excluded and remaining classes of the CIFAR-100 dataset in Table 5. We observe that our ERwP performs well for all the splits for both the ResNet-20 and ResNet-56 architectures.
9.4 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 5. We observe that as the training progresses, the constraint accuracy is maintained close to that of the original model, the forgetting accuracy keeps dropping until it reaches 0%, and the forgetting prototype accuracy keeps falling and approaches that of the FDR model.
10 CONCLUSION
In this paper, we present the “Restricted Category Removal from Model Representations with Limited Data” problem, in which the objective is to remove the information regarding a set of excluded/restricted classes from a trained deep learning model without hurting its predictive power for the remaining classes. We propose several baseline approaches and also the performance metrics for this setting. First, we propose a novel approach to identify the model parameters that are highly relevant to the restricted classes. Next, we propose a novel, efficient approach that optimizes these model parameters in order to remove the restricted class information and reuse these parameters for the remaining classes. We experimentally show that our approach significantly outperforms all the proposed baselines and performs similarly to the full data retraining model.
11 APPENDIX
11.1 PROCESS FOR SELECTING THE RESTRICTED CLASS RELEVANT PARAMETERS
First, we apply a data augmentation technique $f$, not used during training, to the images of the given restricted class. Next, we combine the predictions for these images and perform backpropagation. Finally, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters results in the maximum degradation of the model performance on that restricted class. We use a process similar to binary search for automatically selecting the parameters with the highest absolute gradient. We use an automated script that first creates a list of parameters in each layer, sorts them in descending order according to the gradient values, and checks whether zeroing out the weights of the first 5% of parameters from this list leads to near-zero accuracy for that class. If not, we double the number of parameters chosen earlier and repeat this process. If the accuracy is near zero, we repeat the process with half the number of parameters chosen earlier. Please note that this process is only for identifying the parameters relevant to the restricted classes, and their weights are restored after this process. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters.
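The doubling/halving search described above can be sketched as follows. Here `class_acc_after_pruning(k)` is an assumed helper that temporarily zeroes the k parameters with the highest absolute gradients, measures the model accuracy on the restricted class, and restores the weights; the name and thresholds are illustrative.

```python
def select_relevant_count(class_acc_after_pruning, total_params,
                          start_frac=0.05, near_zero=0.01):
    """Find (approximately) the minimum number of top-gradient parameters whose
    removal drives the restricted class accuracy near zero."""
    k = max(1, int(start_frac * total_params))
    # double until pruning k parameters is destructive enough
    while class_acc_after_pruning(k) > near_zero and k < total_params:
        k = min(2 * k, total_params)
    # halve while a smaller set still suffices
    while k > 1 and class_acc_after_pruning(k // 2) <= near_zero:
        k //= 2
    return k
```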
11.2 BASELINES
We propose several baseline models for the RCRMR-LD problem and compare our proposed approach with them. The baseline approaches are defined as follows:
Original model: It refers to the original model that is trained on the complete training set containing all the training examples from both the excluded and non-excluded classes. It represents the model that has not been modified by any technique to remove the excluded class information.
Baseline 1 - Weight Deletion (WD): It refers to the original model with a modified fully-connected classification layer. Specifically, we remove the weights corresponding to the excluded classes in the fully-connected classification layer so that it cannot classify the excluded classes.
Baseline 2 - Training from Scratch on Limited Non-Restricted Class data (TSLNRC): In this baseline, we train a new model from scratch using the limited training examples of only the non-excluded classes. It uses the same training schedule as the original model and only uses the classification loss for training the model.
Baseline 3 - Training from Scratch on Limited Non-Restricted Class data with KD (TSLNRC-KD): This baseline is the same as baseline 2, but in addition to the classification loss, it also uses a knowledge distillation loss to ensure that the non-excluded class logits of the model (student) match those of the original model (teacher).
Baseline 4 - Training of Original model on Limited Non-Restricted Class data (TOLNRC): This baseline is the same as baseline 2, but the model is initialized with the weights of the original model instead of randomly initializing it.
Baseline 5 - Training of Original model on Limited Non-Restricted Class data with KD (TOLNRC-KD): This baseline is the same as baseline 4, but in addition to the classification loss, it also uses a knowledge distillation loss.
Baseline 6 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class (FOLMRCSC): In this baseline approach, we first replace all the excluded class labels in the limited training data with a new single excluded class label and then fine-tune the original model for a few epochs on the limited training data of both the excluded and remaining classes. In the case of the examples from the excluded classes, the model is trained to predict the new single excluded class. In the case of the examples from the remaining classes, the model is trained to predict the corresponding non-excluded classes.
Baseline 7 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class with KD (FOLMRCSC-KD): This baseline is the same as baseline 6, but in
addition to the classification loss, it also uses a knowledge distillation loss to ensure that the non-excluded class logits of the model (student) match those of the original model (teacher).
Baseline 8 - Fine-tuning of Original model on Limited Non-Restricted Class data (FOLNRC): In this baseline approach, we fine-tune the original model for a few epochs on the limited training data of the non-excluded/remaining classes. The model is trained to predict the corresponding non-excluded classes of the training examples.
Baseline 9 - Fine-tuning of Original model on Limited Non-Restricted Class data with KD (FOLNRC-KD): This baseline is the same as baseline 8, but in addition to the classification loss, it also uses a knowledge distillation loss.
11.3 DATASETS
For the RCRMR-LD problem setting, we modify the CIFAR-100 (Krizhevsky et al., 2009), CUB (Wah et al., 2011), and ImageNet-1k (Russakovsky et al., 2015) datasets. In order to simulate the RCRMR-LD problem setting with limited training data, we choose the last 20 classes of the CIFAR-100 dataset as the excluded classes and take only 10% of the training images of each class. Similarly, we choose the last 20 classes of the CUB dataset as the excluded classes with only 3 training images per class. For ImageNet-1k, we choose the last 100 classes as the excluded classes with 5% of the training images to simulate the limited data available for this problem setting.
11.4 IMPLEMENTATION DETAILS
In this section, we provide all the details required to reproduce our experimental results. We use the ResNet-20 (He et al., 2016), ResNet-56, and ResNet-164 architectures for the experiments on the CIFAR-100 dataset. We use the standard data augmentation methods of random cropping to a size of 32 × 32 (zero-padded on each side with four pixels before taking a random crop) and random horizontal flipping, which is a standard practice for training a model on CIFAR-100. In order to obtain the original and FDR models for the CIFAR-100 dataset, we train the network for 300 epochs with a mini-batch size of 128 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 5 after every 50 epochs. For the CIFAR-100 experiments with ERwP using the ResNet-20, ResNet-56, and ResNet-164 architectures, we take learning rate = 1e-4, β = 10, and optimize the network for 10 epochs. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method. For the experiments on the ImageNet dataset, we use the ResNet-18, ResNet-50, and MobileNet-V2 architectures. We use the standard data augmentation methods of random cropping to a size of 224 × 224 and random horizontal flipping, which is a standard practice for training a model on ImageNet-1k. In order to obtain the original and FDR models for the ImageNet dataset, we train the network for 100 epochs with a mini-batch size of 256 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 10 after every 30 epochs. For evaluation, the validation images are subjected to center cropping of size 224 × 224. For the ImageNet-1k experiments (5% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 9e-5 using β = 200. For the ERwP experiments using the ResNet-18 architecture, we optimize the network for 10 epochs using β = 200 with an initial learning rate of 1.1e-4 and a learning rate of 1.1e-5 from the third epoch onward. In the case of the ERwP experiments with the MobileNet-V2 architecture, we optimize the network for 10 epochs using β = 400 with an initial learning rate of 1.5e-4 and a learning rate of 1.5e-5 from the third epoch onward. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method. For the experiments on the CUB-200 dataset, we use the ResNet-50 architecture pre-trained on the ImageNet dataset. In order to obtain the original and FDR models for the CUB-200 dataset, we train the network for 50 epochs with a mini-batch size of 64 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-3. We choose the initial learning rate as 1e-2, and we decrease it by a factor of 10 after epochs 30 and 40. For the CUB-200 experiments (3 images per class, i.e., 10% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 1e-4 using β = 10. Since the available limited training data is only 10% of the entire CUB-200 dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method.
In our proposed approach, we use κ = 2 for all the experiments. We use a popular PyTorch implementation1 for performing knowledge distillation. We run all the experiments 3 times (using different random seeds) and report the average accuracy. We perform all the experiments using the PyTorch framework version 1.6.0 (Paszke et al., 2017) and Python 3.0. We use 4 GeForce GTX 1080 Ti graphics processing units for our experiments.
11.5 RCRMR-LD PROBLEM IN CUB-200 CLASSIFICATION
Table 6 reports the experimental results for different approaches to the RCRMR-LD problem over the CUB dataset using the ResNet-50 architecture. Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to that of the original model even though we use only 3 images per class for optimizing the model. It achieves close to 0% forgetting accuracy FAe and an FPAe that is significantly lower than that of the original model by an absolute margin of 35.80%. Similar to the CIFAR-100 experiments, our ERwP approach outperforms all the baseline approaches. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CUB dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method, which is trained on the full training data for 50 epochs.
11.6 ABLATION EXPERIMENTS FOR β AND κ
We perform ablation experiments to identify the most suitable values for the hyper-parameters β and κ for our proposed ERwP. The ablation results in Tables 7, 8, validate our choice of hyper-parameter values considering the forgetting accuracy and the constraint accuracy of the resulting model.
11.7 EFFECT OF DIFFERENT DATA AUGMENTATIONS ON THE IDENTIFICATION OF CLASS RELEVANT MODEL PARAMETERS
We perform experiments to verify our approach of identifying the highly relevant parameters for any restricted class using various augmentation techniques (grayscale, vertical flip, rotation, and random affine augmentations). We chose the same restricted class of CIFAR-100 and used the ResNet-56 network for all the experiments. The results in Fig. 6 indicate that, for all the compared data augmentation approaches, pruning/zeroing out the high relevance parameters obtained using our approach results in a huge drop in the forgetting accuracy of the excluded class. Further, zeroing out the low relevance parameters has only a minor impact on the forgetting accuracy of the excluded class.

1 https://github.com/peterliht/knowledge-distillation-pytorch/blob/master/model/net.py

Table 7: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes, using our proposed ERwP with different values of β.

Table 8: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes, using our proposed ERwP with different values of κ.
11.8 ABLATION EXPERIMENTS ON THE RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments with ERwP to check whether only 25% or 50% of the restricted class relevant parameters of each layer, identified using our proposed procedure, can be used for ERwP. We run each of these experiments for the same number of epochs on the CIFAR-100 dataset with the ResNet-56 network. However, we observe that the final FPAe falls from 68.65% only to 60.35% and 53.7%, respectively, for 25% and 50% of the restricted class relevant parameters of each layer, as compared to 47.84% when using all the restricted class relevant parameters per layer identified using our approach. The good performance of our approach is more evident in light of the performance of the FDR model, which achieves an FPAe of 45.40%. We provide this result as a reference to demonstrate that the 47.84% FPAe is due to the generalization power of the model and not due to the restricted class information in the model. This shows that our approach effectively identifies the class-relevant parameters of the model for a given class.
11.9 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 7 for the CIFAR-100 dataset with 20 excluded classes using the ResNet-20 and ResNet-56 architectures. For both architectures, we observe that as the training progresses, ERwP maintains the constraint accuracy close to that of the original model and forces the forgetting accuracy to drop to 0%. ERwP also forces the forgetting prototype accuracy to keep dropping and makes it similar to that of the FDR model.
11.10 FINETUNING RESTRICTED CLASS RELEVANT PARAMETERS ON REMAINING CLASSES
In our experimental results, we demonstrated how fine-tuning the model on the limited training data of the non-excluded classes cannot sufficiently remove the excluded class information from the model. We also perform an ablation experiment to demonstrate that fine-tuning only the restricted class relevant parameters using the limited training data of the non-excluded classes is also not effective in sufficiently removing the excluded class information from the model. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture. We observe that the constraint accuracy CAne, the forgetting accuracy FAe, and the forgetting prototype accuracy FPAe remain almost the same in this case. Therefore, fine-tuning only on the remaining class data cannot sufficiently remove the excluded class information from the model representations.
11.11 EFFECT OF USING THE PROPOSED ERWP APPROACH WHEN THE ENTIRE DATASET IS AVAILABLE
We perform ablation experiments to demonstrate the performance of our proposed ERwP approach when the entire training data is available. We perform these experiments on the CIFAR-100 dataset using ResNet-20 and ResNet-56. We observe experimentally that for both the ResNet-20 and ResNet-56 experiments using ERwP, the forgetting accuracy FAe is 0% and the constraint accuracy CAne matches that of the original model. Further, the gap between the forgetting prototype accuracy FPAe of ERwP and the FDR model reduces from 3.86% (for limited data) to 2.79% for ResNet-20. Similarly, the gap reduces from 2.44% (for limited data) to 1.65% for ResNet-56. However, ERwP requires only 2-3 epochs of optimization (∼100-150× faster than the FDR model) to achieve this performance when trained on the entire dataset. This makes it significantly faster than any approach that trains on the entire dataset.
11.12 QUALITATIVE ANALYSIS
In order to analyze the effect of removing the excluded class information from the model using our proposed ERwP approach, we study the class activation map visualizations (Selvaraju et al., 2017) of the model before and after applying ERwP. We observe in Fig. 8 that, for the images from the excluded classes, the model’s region of attention gets scattered after applying ERwP, unlike for the images from the remaining classes.

1. What is the main contribution of the paper regarding fine-tuning pretrained models?
2. What are the concerns regarding the motivation and significance of studying class-level privacy?
3. How does the reviewer assess the relevance and effectiveness of individual data deletion methods in this work?
4. What are the questions regarding the identification process of parameters related to restricted classes?
5. How can the method handle tabular data privacy?
6. What is the reviewer's suggestion for comparing model performances in terms of remaining classes only?

Summary Of The Paper
This paper proposes a new learning setting of fine-tuning a pretrained model to forget some specific categories, motivated by class-level privacy. The solution to this challenge is to first detect the most related model parameters, i.e., those that significantly affect model performance on the restricted classes, and then to tune them on a small number of examples with losses that enforce the desired classification capability. The proposed method is experimentally demonstrated to be more effective than possible baselines.
Review
I mainly have the following concerns.
The motivation of the new setting is not strong. In the introduction, class-level privacy is motivated by privacy concerns and corresponding examples. However, they are not quite convincing to me, and I feel more practical instances are needed to clarify the significance of studying class-level privacy. In particular, in what situation would only a few training examples be available when removing the information of a restricted class from a model out of privacy concerns?
In related work, individual data deletion (Ginart et al., 2019) is cited but not properly evaluated. Following the work on data deletion, I feel there also exists an important problem that is ignored: making the model forget some examples or some classes does not mean zero classification accuracy or random classification accuracy (i.e., 1/N). In the data deletion work, Ginart et al. tune the pretrained model by only compensating for the impact of the deleted samples instead of forcing the model to have a large error on them. As a result, the tuned model behaves as if it had never seen the deleted examples. This work obviously cannot guarantee this with the loss shown in Eq. 1. For example, consider a 3-way classifier on dog, cat, and leopard, where leopard is the restricted class. It is predictable that a classifier trained on dog and cat only would tend to classify leopard as cat because of their natural similarity. Thus, a careful clarification of this point is required in this paper, especially from the view of class-level privacy.
The process of identifying parameters related to restricted classes seems quite empirical, as a transformation component chosen from prior knowledge is needed. The authors have described it for images. However, much privacy-related data is also tabular. In this case, how should a proper transformation be applied? If this component is closely tied to the data format, is there any workaround for this issue?
From Figure 3, KD is defined for remaining classes only, but the KD loss also includes restricted classes.
It would be interesting to see the model performance comparison with the original training in terms of the remaining classes only (also related to concern 2; the model performance on the original training data of the remaining classes only may be a good reference point for evaluation), although the original training data might be inaccessible in the proposed setting.
ICLR | Title
Restricted Category Removal from Model Representations using Limited Data
Abstract
Deep learning models are trained on multiple categories jointly to solve several real-world problems. However, there can be cases where some of the classes may become restricted in the future and need to be excluded after the model has already been trained on them (Class-level Privacy). It can be due to privacy, ethical or legal concerns. A naive solution is to simply train the model from scratch on the complete training data while leaving out the training samples from the restricted classes (FDR full data retraining). But this can be a very time-consuming process. Further, this approach will not work well if we no longer have access to the complete training data and instead only have access to very few training data. The objective of this work is to remove the information about the restricted classes from the network representations of all layers using limited data without affecting the prediction power of the model for the remaining classes. Simply fine-tuning the model on the limited available training data for the remaining classes will not be able to sufficiently remove the restricted class information, and aggressive fine-tuning on the limited data may also lead to overfitting. We propose a novel solution to achieve this objective that is significantly faster (∼ 200× on ImageNet) than the naive solution. Specifically, we propose a novel technique for identifying the model parameters that are mainly relevant to the restricted classes. We also propose a novel technique that uses the limited training data of the restricted classes to remove the restricted class information from these parameters and uses the limited training data of the remaining classes to reuse these parameters for the remaining classes. The model obtained through our approach behaves as if it was never trained on the restricted classes and performs similar to FDR (which needs the complete training data). We also propose several baseline approaches and compare our approach with them in order to demonstrate its efficacy.
1 INTRODUCTION
There are several real-world problems in which deep learning models have exceeded human-level performance. This has led to a wide deployment of deep learning models. Deep learning models generally train jointly on a number of categories/classes of data. However, the use of some of these classes may get restricted in the future (restricted classes), and a model with the capability to identify these classes may violate legal/privacy concerns, e.g., a company may legally prevent a deep learning model from having the capability to identify its copyright-protected logo, patented products, and so on. Another example is a treatment prediction model that predicts the best treatment for a patient based on the disease. If one of the treatments for a disease is banned due to its side-effects or ethical concerns, the restricted treatment category has to be excluded from the trained model. Individuals and organizations are becoming increasingly aware of these issues leading to an increasing number of legal cases on privacy issues in recent years. In such situations, the model has to be stripped of its capability to identify these categories. This is a difficult problem to solve, especially if the full training data is no longer available and only a few training examples are available. We present a “Restricted Category Removal from Model Representations with Limited Data” (RCRMR-LD) problem setting that simulates the above problem. In this paper, we propose to solve this problem in a fast and efficient manner.
The objective of the RCRMR-LD problem is to remove the information regarding the restricted classes from the network representations of all layers using the limited training data available without
affecting the ability of the model to identify the remaining classes. If we have access to the full training data, then we can simply exclude the restricted class examples from the training data and perform a full training of the model from scratch using the abundant data (FDR - full data retraining). However, the RCRMR-LD problem setting is based on the scenario that the directive to exclude the restricted classes is received in the future after the model has already been trained on the full data and now only a limited amount of training data is available to carry out this process. Simply training the network from scratch on only the limited training data of the remaining classes will result in severe overfitting and significantly affect the model performance (Baseline 2, as shown in Tables 1, 6).
Another possible solution to this problem is to remove the weights of the fully-connected classification layer of the network corresponding to the excluded classes such that it can no longer classify the excluded classes. However, this approach suffers from a serious problem. Since, in this approach, we only remove some of the weights of the classification layer and the rest of the model remains unchanged, it still contains the information required for recognizing the excluded classes. This information can be easily accessed through the features that the model extracts from the images. Therefore, we can use these features for performing classification. In this paper, we use a nearest prototype-based classifier to demonstrate that the model features still contain information regarding the restricted classes. Specifically, we use the model features of the examples from the limited training data to compute the average class prototype for each class and create a nearest class prototype-based classifier using them. Next, for any given test image, we extract its features using the model and then find the class prototype closest to the given test image. This nearest class prototype-based classifier performs close to the original fully-connected classifier on the excluded classes as shown in Tables 1, 6 (Baseline 1). Therefore, even after using this approach, the resulting model still contains information regarding the restricted classes. Another possible approach can be to apply the standard fine-tuning approach to the model using the limited available training data of the remaining classes (Baseline 8). However, fine-tuning on such limited training data is not able to sufficiently remove the restricted class information from the model representations (see Tables 1, 6), and aggressive fine-tuning on the limited training data may result in overfitting.
Considering the problems faced by the naive approaches mentioned above, we propose a novel “Efficient Removal with Preservation” (ERwP) approach to address the RCRMR-LD problem. First, we propose a novel technique to identify the model parameters that are highly relevant to the restricted classes, and to the best of our knowledge, there are no existing prior works for finding such classspecific relevant parameters. Next, we propose a novel technique that optimizes the model on the limited available training data in such a way that the restricted class information is discarded from the restricted class relevant parameters, and these parameters are reused for the remaining classes.
To the best of our knowledge, this is the first work that addresses the RCRMR-LD problem. Therefore, we also propose several baseline approaches for this problem (see Sec. 11.2). However, our proposed approach significantly outperforms all the proposed baseline approaches. Our proposed approach requires very few epochs to address the RCRMR-LD problem and is, therefore, very fast (∼ 200 times faster than the full data retraining model for the ImageNet dataset) and efficient. The model obtained after applying our approach forgets the excluded classes to such an extent that it behaves as though it was never trained on examples from the excluded classes. The performance of our model is very similar to the full data retraining (FDR) model (see Sec. 8.1, Fig. 5). We also propose the performance metrics needed to evaluate the performance of any approach for the RCRMR-LD problem. We perform experiments on several datasets to demonstrate the efficacy of our method.
2 PROBLEM SETTING
In this work, we present the restricted category removal from model representations with limited data (RCRMR-LD) problem setting, in which a deep learning model Mo trained on a specific dataset has to be modified to exclude information regarding a set of restricted/excluded classes from all layers of the deep learning model without affecting its identification power for the remaining classes (see Fig. 1). The classes that need to be excluded are referred to as the restricted/excluded classes. Let {Ce1 , Ce2 , ..., CeNe} be the restricted/excluded classes, where Ne refers to the number of excluded classes. The remaining classes of the dataset are the remaining/non-excluded classes. Let {Cne1 , Cne2 , ..., CneNne} be the non-excluded classes, whereNne refers to the number of remaining/nonexcluded classes. Additionally, we only have access to a limited amount of training data for the restricted classes and the remaining classes, for carrying out this process. Therefore, any approach for addressing this problem can only utilize this limited training data.
3 RCRMR-LD PROBLEM IN REAL WORLD SCENARIOS
A real-world scenario where our proposed RCRMR-LD problem can arise is federated learning (McMahan et al., 2017). In the federated learning setting, there are multiple collaborators that have a part of the training data stored locally, and a model is trained collaboratively using these private data without sharing or collating the data due to privacy concerns. Suppose organization A has a part of the training data, and there are other collaborators that have other parts of the training data for the same classes. Organization A collaboratively trains a model with other collaborators using federated learning. After the model has been trained, a few classes may become restricted in the future due to some ethical or privacy concerns, and these classes should be removed from the model. However, the other collaborators may not be available or may charge a huge amount of money for collaborating again to train a fresh model from scratch. In this case, organization A does not have access to the full training data of the non-excluded/remaining classes that it can use to re-train a model from scratch in order to exclude the restricted classes information. This clearly shows that the RCRMR-LD problem is possible in federated learning.
Another real-world scenario is the incremental learning setting (Rebuffi et al., 2017; Kemker & Kanan, 2018), where the model receives training data in the form of sequentially arriving tasks. Each task contains a new set of classes. During a training session t, the model receives the task t for training and cannot access the full data of the previous tasks. Instead, the model has access to very few exemplars of the classes in the previous tasks. Suppose before training a model on training session t, it is noticed that some classes from a previous task (< t) have to be removed from the model since those classes have become restricted due to privacy or ethical concerns. In this case, only a limited number of exemplars are available for all these previous classes (restricted and remaining). This demonstrates that the RCRMR-LD problem is also possible in the incremental learning setting. We experimentally demonstrate in Sec. 8.3, how our approach can address the RCRMR-LD problem in the incremental learning setting.
4 PROPOSED METHOD
Let $B$ refer to a mini-batch (of size $S$) from the available limited training data; $B$ contains training datapoints from the restricted/excluded classes $\{(x_i^e, y_i^e)\} = \{(x_1^e, y_1^e), \ldots, (x_{S_e}^e, y_{S_e}^e)\}$ and from the remaining/non-excluded classes $\{(x_j^{ne}, y_j^{ne})\} = \{(x_1^{ne}, y_1^{ne}), \ldots, (x_{S_{ne}}^{ne}, y_{S_{ne}}^{ne})\}$. Here, $(x_i^e, y_i^e)$ refers to a training datapoint from the excluded classes, where $x_i^e$ is an image, $y_i^e$ is the corresponding label, and $y_i^e \in \{C_1^e, C_2^e, \ldots, C_{N_e}^e\}$. $(x_j^{ne}, y_j^{ne})$ refers to a training datapoint from the non-excluded classes, where $x_j^{ne}$ is an image, $y_j^{ne}$ is the corresponding label, and $y_j^{ne} \in \{C_1^{ne}, C_2^{ne}, \ldots, C_{N_{ne}}^{ne}\}$. Here, $S_e$ and $S_{ne}$ refer to the number of training examples in the mini-batch from the excluded and non-excluded classes, respectively, such that $S = S_e + S_{ne}$. $N_e$ and $N_{ne}$ refer to the number of excluded and non-excluded classes, respectively. Let $M$ refer to the deep learning model being trained using our approach, and let $M_o$ be the original trained deep learning model.
In a trained model, some of the parameters may be highly relevant to the restricted classes, and the performance of the model on the restricted classes mainly depends on these highly relevant parameters. Therefore, in our approach, we focus on removing the excluded class information from these restricted class relevant parameters. Since the model is trained on all the classes jointly, the parameters are shared across the different classes, which makes identifying these class-specific relevant parameters very difficult. Let us consider a model that is trained on color images of a class. If we now train it on grayscale images of the class, then the model has to learn to identify these new images. In order to do so, the parameters relevant to that class will receive large gradient updates as compared to the other parameters (see Sec. 9.1). We propose a novel approach for identifying the relevant parameters for the restricted classes using this idea. For each restricted class, we choose the training images belonging to that class from the limited available training data. Next, we apply a grayscale data augmentation technique/transformation $f$ to these images so that they become different from the images that the original model was earlier trained on (assuming that the original model has not been trained on grayscale images). We can also use other data augmentation techniques that are not seen during the training process of the original model and that do not change the class of the image (refer to Sec. 11.7 in the appendix). Next, we combine the predictions for each training image into a single average prediction and perform backpropagation. During the backpropagation, we study the gradients of all the parameters in each layer of the model. Accordingly, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters results in the maximum degradation of model performance on that restricted class. We provide a detailed description of the process for identifying the restricted class relevant parameters in Sec. 11.1 of the appendix. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters $\Theta_{rel}^{ex}$ (see Fig. 2). Please note that we use this process only to identify $\Theta_{rel}^{ex}$, and we do not update the model parameters during this step.
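A minimal PyTorch-style sketch of this relevance-scoring step is given below. It uses torchvision's grayscale transform as the unseen augmentation $f$; the choice of cross-entropy on the averaged prediction as the backpropagated scalar is our assumption, since the text only specifies backpropagating a single averaged prediction.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def relevance_scores(model, images, class_idx):
    """Absolute parameter gradients for one restricted class.

    images: (N, 3, H, W) tensor of the limited training images of
    that class. Returns a dict mapping parameter names to the
    absolute gradients they receive on grayscale versions of the
    images; higher values indicate higher relevance to the class.
    """
    model.zero_grad()
    # Unseen augmentation f: grayscale, kept 3-channel.
    gray = TF.rgb_to_grayscale(images, num_output_channels=3)
    # Combine the predictions into a single average prediction.
    avg_logits = model(gray).mean(dim=0, keepdim=True)
    # Assumed scalar for backprop: loss of the averaged prediction.
    target = torch.tensor([class_idx], device=avg_logits.device)
    F.cross_entropy(avg_logits, target).backward()
    scores = {name: p.grad.detach().abs().clone()
              for name, p in model.named_parameters()
              if p.grad is not None}
    model.zero_grad()  # this step never updates the weights
    return scores
```

Per layer, the entries of these score tensors are then sorted, and the top-scoring parameters form the relevant set for that class (see the selection procedure sketched in Sec. 11.1).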
Pruning the relevant parameters for a restricted class can severely impact the performance of the model for that class (see Sec. 9.1). However, this may also degrade the performance of the model on the non-excluded classes because the parameters are shared across multiple classes. Therefore, we cannot address the RCRMR-LD problem by pruning the relevant parameters of the excluded classes. Finetuning these parameters on the limited remaining class data will also not be able to sufficiently remove the restricted class information from the model (see Sec. 11.10 in appendix). Based on this, we propose to address the RCRMR-LD problem by optimizing the relevant parameters of the restricted classes to remove the restricted class information from them and to reuse them for the remaining classes.
After identifying the restricted class relevant parameters, our ERwP approach uses a classification loss based on the cross-entropy loss function to optimize the restricted class relevant parameters of the model on each mini-batch (see Fig. 3). The gradient ascent optimization algorithm can be used to maximize a loss function and thereby encourage the model to perform badly on the given input. Therefore, we use gradient ascent optimization on the classification loss for the limited restricted class training examples to remove the information regarding the restricted classes from $\Theta_{rel}^{ex}$. We achieve this by multiplying the classification loss for the augmented training examples from the excluded classes by a constant factor of $-1$. We also optimize $\Theta_{rel}^{ex}$ using gradient descent optimization on the classification loss for the limited remaining class training examples, in order to reuse these parameters for the remaining classes. We validate this approach through various ablation experiments, as shown in Sec. 9.2. The classification loss for the examples from the
excluded and non-excluded classes and the overall classification loss for each mini-batch are defined as follows.
$\mathcal{L}_c^e = \sum_{i=1}^{S_e} -1 \cdot \ell(y_i^e, y_i^{e*})$   (1)

$\mathcal{L}_c^{ne} = \sum_{j=1}^{S_{ne}} \ell(y_j^{ne}, y_j^{ne*})$   (2)

$\mathcal{L}_c = \frac{1}{S}\,(\mathcal{L}_c^e + \mathcal{L}_c^{ne})$   (3)
where $y_i^{e*}$ and $y_j^{ne*}$ refer to the predicted class labels for $x_i^e$ and $x_j^{ne}$, respectively, and $\ell(\cdot,\cdot)$ refers to the cross-entropy loss function. $\mathcal{L}_c^e$ and $\mathcal{L}_c^{ne}$ refer to the classification loss for the examples from the excluded and non-excluded classes in the mini-batch, respectively. $\mathcal{L}_c$ refers to the overall classification loss for each mini-batch.
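The sketch below shows how Eqs. (1)-(3) could be computed for one mini-batch; the boolean mask `is_excluded`, which marks the datapoints belonging to restricted classes, is a helper we introduce for illustration.

```python
import torch
import torch.nn.functional as F

def classification_loss(logits, labels, is_excluded):
    """Eqs. (1)-(3): per-sample cross-entropy, negated for
    excluded-class examples (gradient ascent) and kept positive
    for the remaining classes (gradient descent)."""
    per_sample = F.cross_entropy(logits, labels, reduction='none')
    signs = torch.where(is_excluded,
                        -torch.ones_like(per_sample),
                        torch.ones_like(per_sample))
    return (signs * per_sample).sum() / labels.size(0)  # 1/S factor
```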
Since all the network parameters were jointly trained on all the classes (restricted and remaining), the restricted class relevant parameters also contain information relevant to the remaining classes. Applying the above process alone will still harm the model's predictive power for the non-excluded classes (as shown in Sec. 9.2, Table 3). This is because the gradient ascent optimization strategy will also erase some of the relevant information regarding the remaining classes. Further, applying $\mathcal{L}_c^{ne}$ to the limited training examples of the remaining classes will lead to overfitting and will not be effective enough to fully preserve the model performance on the remaining classes. In order to ensure that the model's predictive power for the non-excluded classes does not change, we use a knowledge distillation-based regularization loss. Knowledge distillation (Hinton et al., 2014) ensures that the predictive power of the teacher network is replicated in the student network. In this problem setting, we want the final model to replicate the predictive power of the original model for the remaining classes. Therefore, given any training example, we use the knowledge distillation-based regularization loss to ensure that the output logits produced by the model corresponding to only the non-excluded classes remain the same as those produced by the original model. We apply the knowledge distillation loss to the limited training examples from both the excluded and remaining classes, to preserve the non-excluded class logits of the model for any input image. We validate this regularization loss through ablation experiments, as shown in Table 3. We use the original model $M_o$ (before applying ERwP) as the teacher network and the current model $M$ being processed by ERwP as the student network for the knowledge distillation process. Please note that the optimization for this loss is also carried out only for the restricted class relevant parameters of the model. Let $KD$ refer to the knowledge distillation loss function. It computes the Kullback-Leibler (KL) divergence between the soft predictions of the teacher and the student networks and can be defined as follows:
$KD(p_s, p_t) = KL(\sigma(p_s), \sigma(p_t))$   (4)

where $\sigma(\cdot)$ refers to the softmax activation function that converts the logit $a_i$ for each class $i$ into a probability by comparing $a_i$ with the logits of the other classes $a_j$, i.e., $\sigma(a_i) = \frac{\exp(a_i/\kappa)}{\sum_j \exp(a_j/\kappa)}$. $\kappa$ refers to the temperature (Hinton et al., 2014), and $KL$ refers to the KL-divergence function. $p_s$ and $p_t$ refer to the logits produced by the student network and the teacher network, respectively.
The knowledge distillation-based regularization losses in our approach are defined as follows.
$\mathcal{L}_{kd}^e = \sum_{i=1}^{S_e} KD(M(x_i^e)[C^{ne}],\, M_o(x_i^e)[C^{ne}])$   (5)

$\mathcal{L}_{kd}^{ne} = \sum_{j=1}^{S_{ne}} KD(M(x_j^{ne})[C^{ne}],\, M_o(x_j^{ne})[C^{ne}])$   (6)

$\mathcal{L}_{kd} = \frac{1}{S}\,(\mathcal{L}_{kd}^e + \mathcal{L}_{kd}^{ne})$   (7)
where $M(\cdot)[C^{ne}]$ and $M_o(\cdot)[C^{ne}]$ refer to the output logits corresponding to the remaining classes produced by $M$ and $M_o$, respectively; the input can be either $x_i^e$ or $x_j^{ne}$. $\mathcal{L}_{kd}^e$ and $\mathcal{L}_{kd}^{ne}$ refer to the knowledge distillation-based regularization loss for the examples from the excluded and non-excluded classes, respectively. $\mathcal{L}_{kd}$ refers to the overall knowledge distillation-based regularization loss for each mini-batch.
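A sketch of the distillation term of Eqs. (4)-(7) is shown below; `remaining_idx` is an assumed index tensor selecting the non-excluded class logits $[C^{ne}]$, the temperature default follows the $\kappa = 2$ setting reported in Sec. 11.4, and the batch-summed KL is divided by the mini-batch size $S$ as in Eq. (7).

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, remaining_idx, kappa=2.0):
    """Eqs. (4)-(7): KL divergence between temperature-softened
    predictions, restricted to the non-excluded class logits."""
    s = student_logits[:, remaining_idx] / kappa   # M(x)[C^ne]
    t = teacher_logits[:, remaining_idx] / kappa   # M_o(x)[C^ne]
    kl = F.kl_div(F.log_softmax(s, dim=1),
                  F.softmax(t, dim=1),
                  reduction='none').sum(dim=1)     # KL per sample
    return kl.sum() / student_logits.size(0)       # 1/S factor
```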
The total loss $\mathcal{L}_{erwp}$ of our approach for each mini-batch is defined as follows.

$\mathcal{L}_{erwp} = \mathcal{L}_c + \beta\,\mathcal{L}_{kd}$   (8)
where $\beta$ is a hyper-parameter that controls the contribution of the knowledge distillation-based regularization loss. We use this loss to fine-tune the model for only a few epochs.
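Putting the pieces together, one ERwP fine-tuning step on a mini-batch might look as follows. The optimizer choice (SGD) is our assumption; the learning rate and $\beta$ follow the CIFAR-100 settings in Sec. 11.4; the excluded-class augmentation is elided for brevity; and, for simplicity, the sketch hands whole parameter tensors to the optimizer, whereas the paper selects individual parameters within each layer.

```python
import torch

# relevant_params: the subset Theta_ex_rel identified earlier
# (placeholder); lr and beta as reported for CIFAR-100 in Sec. 11.4.
optimizer = torch.optim.SGD(relevant_params, lr=1e-4)

def erwp_step(model, model_o, images, labels, is_excluded,
              remaining_idx, beta=10.0):
    logits = model(images)
    with torch.no_grad():
        teacher_logits = model_o(images)  # frozen original model M_o
    loss = (classification_loss(logits, labels, is_excluded)
            + beta * kd_loss(logits, teacher_logits, remaining_idx))
    model.zero_grad()
    loss.backward()
    optimizer.step()  # only Theta_ex_rel is registered, so only it moves
    return loss.item()
```

Running this loop for roughly 10 epochs over the limited data mirrors the fine-tuning schedule used in our experiments.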
5 RELATED WORK
Pruning involves removing redundant and unimportant weights (Carreira-Perpinán & Idelbayev, 2018; Dong et al., 2017; Guo et al., 2016; Han et al., 2015a;b; Tung & Mori, 2018; Zhang et al., 2018) or filters (He et al., 2019a; 2018; 2019b;c; Li et al., 2016) from a deep learning model without affecting the model performance. In contrast, our approach identifies class-specific important parameters, so existing pruning techniques cannot be directly applied to our problem. In the incremental learning setting (Douillard et al., 2020; Hou et al., 2019; Tao et al., 2020; Yu et al., 2020; Liu et al., 2021), the objective is to preserve the predictive power of the model for previously seen classes while learning a new set of classes. In contrast, our proposed RCRMR-LD problem setting involves removing the information regarding specific classes from the pre-trained model while preserving the capacity of the model to identify the remaining classes. Privacy-preserving deep learning (Nan & Tao, 2020; Louizos et al., 2015; Edwards & Storkey, 2015; Hamm, 2017) involves learning representations that incorporate features from the data relevant to the given task while ignoring sensitive information (such as the identity of a person). In contrast, the objective of the RCRMR-LD problem setting is to achieve class-level privacy, i.e., if a class is declared private/restricted, then all information about this class should be removed from the model trained on it, without affecting its ability to identify the remaining classes. The authors in (Ginart et al., 2019) propose an approach to delete individual data points from trained machine learning models such as clustering models. In contrast, RCRMR-LD involves removing the information of a set of classes from all layers of a deep learning model. Therefore, the approach proposed in (Ginart et al., 2019) cannot be applied to the RCRMR-LD problem setting.
6 BASELINES
We propose 9 baseline models for the RCRMR-LD problem and compare our proposed approach with them. Baseline 1 involves deleting the weights of the fully-connected classification layer corresponding to the excluded classes. Baselines 2, 3, 4, and 5 involve training the model on the limited training data of the remaining classes. Baselines 6, 7, 8, and 9 involve fine-tuning the model on the available limited training data. Please refer to Sec. 11.2 in the appendix for details about the baselines.
7 PERFORMANCE METRICS
In the RCRMR-LD problem setting, we propose three performance metrics to validate the performance of any method: forgetting accuracy (FAe), forgetting prototype accuracy (FPAe), and constraint accuracy (CAne). The forgetting accuracy refers to the fully-connected classification layer accuracy of the model for the excluded classes. The forgetting prototype accuracy refers to the nearest class prototype-based classifier accuracy of the model for the excluded classes. The constraint accuracy refers to the fully-connected classification layer accuracy of the model for the non-excluded classes.
In order to judge any approach on the basis of these metrics, we proceed in the following order. First, we analyze the constraint accuracy (CAne) of the model produced by the given approach to verify whether the approach has preserved the prediction power of the model for the non-excluded classes. The CAne of the model should be close to that of the original model. If this condition is not satisfied, then the approach is not suitable for this problem, and we need not analyze the other metrics. This is because if the constraint accuracy is not maintained, then the overall usability of the model is hurt significantly. Next, we analyze the forgetting accuracy (FAe) of the model to verify whether the excluded class information has been removed from the model at the classifier level. The FAe of the model should be as close to 0% as possible. Finally, we analyze the forgetting prototype accuracy (FPAe) of the model to verify whether the excluded class information has been removed from the model at the feature level. The FPAe of the model should be significantly less than that of the original model. However, the FPAe will not become zero, since any trained model learns to extract meaningful features, which help the nearest class prototype-based classifier achieve some non-negligible accuracy even on the excluded classes. Therefore, for a better analysis of the level of forgetting of the excluded classes at the feature level, we compare the FPAe of the model with that of the FDR (full data retraining) model. The FDR model is a good candidate for this analysis since it has not been trained on the excluded classes (it is trained only on the complete dataset of the remaining classes) and still achieves non-negligible performance on the excluded classes (see Sec. 8.1). However, it should be noted that this comparison is only for analysis; it is not an equal-footing comparison, since the FDR model needs to train on the entire dataset (except the excluded classes).
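For concreteness, the nearest class prototype-based classifier behind FPAe can be sketched as follows; features are assumed to come from the model's penultimate layer, and the function name is ours.

```python
import torch

def prototype_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Nearest class prototype classifier: each class is represented
    by the mean feature of its (limited) training examples, and a
    test image is assigned to the class of the closest prototype."""
    classes = train_labels.unique()
    protos = torch.stack([train_feats[train_labels == c].mean(dim=0)
                          for c in classes])
    dists = torch.cdist(test_feats, protos)   # (N_test, n_classes)
    preds = classes[dists.argmin(dim=1)]
    return (preds == test_labels).float().mean().item()
```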
8 EXPERIMENTS
We have reported the experimental results for the CIFAR-100 and ImageNet-1k datasets in this section. We have provided the results on the CUB-200 dataset in the appendix. Please refer to the appendix for the details regarding the dataset and implementation.
8.1 CIFAR-100 RESULTS
We report the performance of different baselines and our proposed ERwP method on the RCRMR-LD problem using the CIFAR-100 dataset with different architectures in Table 1. We observe that baseline 1 (weight deletion) achieves high constraint accuracy CAne and 0% forgetting accuracy FAe, but its forgetting prototype accuracy FPAe remains the same as that of the original model for all three architectures, i.e., ResNet-20/56/164. Therefore, baseline 1 fails to remove the excluded class information from the model at the feature level. Baseline 2 is not able to preserve the constraint accuracy CAne even though it performs full training on the limited non-excluded class data. Baseline 3 achieves higher CAne than baseline 2, but the constraint accuracy is still too low. Baselines 4 and 5 demonstrate significantly better constraint accuracy than baselines 2 and 3, but their constraint accuracy is still significantly lower than that of the original model (except baseline 5 for ResNet-20). Baseline 5 with ResNet-20 maintains the constraint accuracy and achieves 0% forgetting accuracy FAe, but its FPAe is still significantly high, and it is therefore unable to remove the excluded class information from the model at the feature level. The fine-tuning based baselines 6 and 7 are able to significantly reduce the forgetting accuracy FAe, but their constraint accuracy CAne drops significantly. The fine-tuning based baselines 8 and 9 only fine-tune the model on the limited remaining class data, and as a result, they are not able to sufficiently reduce either the forgetting accuracy FAe or the forgetting prototype accuracy FPAe.
Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to the original model for all three architectures. It achieves close to 0% FAe. Further, it achieves a significantly lower FPAe than the original model. Specifically, the FPAe of our approach is significantly lower than that of the original model by absolute margins of 17.19%, 20.81%, and 20.17% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. The FPAe for the FDR model is 44.20%, 45.40%, and 51.85% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. Therefore, the FPAe of our approach is close to that of the FDR model by absolute margins of 3.86%, 2.44%, and 4.38% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. Therefore, our ERwP approach makes the model behave similar to the FDR model even though it was trained on only limited data from the excluded and remaining classes. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method that is trained on the full training data for 300 epochs.

Table 2: Results of the original model and our ERwP on the ImageNet-1k dataset (FAe and CAne, top-1 and top-5).

Model    Method      Top-1 FAe    Top-1 CAne    Top-5 FAe    Top-5 CAne
Res-18   Original    69.76%       69.76%        89.58%       89.02%
Res-18   ERwP        0.28%        69.13%        1.01%        88.93%
Res-50   Original    76.30%       76.11%        93.04%       92.84%
Res-50   ERwP        0.25%        75.45%        2.55%        92.39%
Mob-V2   Original    72.38%       70.83%        91.28%       90.18%
Mob-V2   ERwP        0.17%        70.81%        0.81%        89.95%
8.2 IMAGENET RESULTS
Table 2 reports the experimental results for different approaches to the RCRMR-LD problem over the ImageNet-1k dataset using the ResNet-18, ResNet-50, and MobileNet V2 architectures. Our proposed ERwP approach achieves a top-1 constraint accuracy CAne that is within absolute margins of 0.63%, 0.66%, and 0.02% of the original model for the ResNet-18, ResNet-50, and MobileNet V2 architectures, respectively. It achieves close to 0% top-1 forgetting accuracy FAe for all three architectures. Therefore, our approach performs well even on the large-scale ImageNet-1k dataset. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method that is trained on the full data for 100 epochs.
8.3 RCRMR-LD PROBLEM IN INCREMENTAL LEARNING
In this section, we experimentally demonstrate how the RCRMR-LD problem in the incremental learning setting is addressed using our proposed approach. We consider an incremental learning setting on the CIFAR-100 dataset in which each task contains 20 classes. We use the BIC (Wu et al., 2019) method for incremental learning on this dataset. The exemplar memory size is fixed at 2000 as per the setting in (Wu et al., 2019). In this setting, there are 5 tasks. Let us assume that the model (M4) has already been trained on 4 tasks (80 classes), and we are in the fifth training session. Suppose, at this stage, it is noticed that all the classes in the first task (20 classes) have become restricted and need to be removed before the model is trained on task 5. However, we only have a limited number of exemplars for the 80 classes seen so far, i.e., 2000/80 = 25 per class. We apply our proposed approach to the model obtained after training session 4, and the results are reported in Table 4. The results indicate that our approach modified the model obtained after session 4 such that the forgetting accuracy of the restricted classes approaches 0% and the constraint accuracy of the remaining classes is not affected. In fact, the modified model behaves as if it was never trained on the classes from task 1. We can now perform the incremental training of the modified model on task 5.
9 ABLATION STUDIES
9.1 RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments to verify our approach of identifying the highly relevant parameters for any restricted class. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture and report the forgetting accuracy FAe for the randomly chosen excluded class. Please note that in this case, only the chosen class of CIFAR-100 is the restricted class and all the remaining
classes constitute the non-excluded classes. In order to show the effectiveness of our approach, we sort the absolute gradients of the parameters in the model (obtained through backpropagation for the excluded class augmented images) and choose a set of high relevance and low relevance parameters. We then prune/zero out these parameters and record the forgetting accuracy. Fig. 4 demonstrates that as we zero out the high relevance parameters, the forgetting accuracy of the excluded class drops by a huge margin. It also shows that as we zero out the low relevance parameters, there is only a minor change in the forgetting accuracy of the excluded class. This validates our approach for identifying the highly relevant parameters for the restricted classes.
9.2 SIGNIFICANCE OF THE COMPONENTS OF THE PROPOSED ERWP APPROACH
We perform ablations on the CIFAR-100 dataset using the ResNet-56 model to study the significance of the $\mathcal{L}_c^e$, $\mathcal{L}_c^{ne}$, and $\mathcal{L}_{kd}$ components of our proposed ERwP approach. Table 3 indicates that optimizing the restricted class relevant parameters using only $\mathcal{L}_c^{ne}$ cannot significantly remove the information regarding the restricted classes from the model. Applying $\mathcal{L}_c^{ne}$ along with $\mathcal{L}_c^e$ significantly reduces the forgetting accuracy FAe and the forgetting prototype accuracy FPAe but also significantly reduces the constraint accuracy CAne. Finally, applying the $\mathcal{L}_{kd}$ loss along with $\mathcal{L}_c^{ne}$ and $\mathcal{L}_c^e$ significantly reduces FAe and FPAe while maintaining the constraint accuracy CAne very close to that of the original model.
9.3 ABLATION ON THE NUMBER OF EXCLUDED CLASSES
We report the experimental results for our approach for different splits of excluded and remaining classes of the CIFAR-100 dataset in Table 5. We observe that our ERwP performs well for all the splits for both the ResNet-20 and ResNet-56 architectures.
9.4 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 5. We observe that as the training progresses, the constraint accuracy stays close to that of the original model, the forgetting accuracy keeps dropping until it reaches 0%, and the forgetting prototype accuracy keeps falling toward that of the FDR model.
10 CONCLUSION
In this paper, we present the “Restricted Category Removal from Model Representations with Limited Data” problem, in which the objective is to remove the information regarding a set of excluded/restricted classes from a trained deep learning model without hurting its predictive power for the remaining classes. We propose several baseline approaches and also the performance metrics for this setting. First, we propose a novel approach to identify the model parameters that are highly relevant to the restricted classes. Next, we propose a novel, efficient approach that optimizes these model parameters in order to remove the restricted class information and reuse these parameters for the remaining classes. We experimentally show that our approach significantly outperforms all the proposed baselines and performs similarly to the full data retraining model.
11 APPENDIX
11.1 PROCESS FOR SELECTING THE RESTRICTED CLASS RELEVANT PARAMETERS
First, we apply a data augmentation technique $f$, not used during training, to the images of the given restricted class. Next, we combine the predictions for these images and perform backpropagation. Finally, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters results in the maximum degradation of model performance on that restricted class. We use a process similar to binary search for automatically selecting the parameters with the highest absolute gradient. We use an automated script that first creates a list of parameters in each layer, sorts them in descending order according to the gradient values, and checks whether zeroing out the weights of the first 5% of parameters from this list leads to near-zero accuracy for that class. If not, we double the number of parameters chosen earlier and repeat this process. If the accuracy is near zero, we repeat the process with half the number of parameters chosen earlier. Please note that this process is only for identifying the parameters relevant to the restricted classes, and their weights are restored after this process. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters.
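The sketch below illustrates this search for a single restricted class over a single flat list of gradient-sorted parameter entries (the selection is performed per layer in practice). The `zero_out` helper, the near-zero accuracy threshold, and the exact binary-search refinement are our assumptions around the doubling/halving procedure described above.

```python
import copy
import torch

def zero_out(model, entries):
    # Prune: zero the given (parameter name, flat index) entries.
    params = dict(model.named_parameters())
    with torch.no_grad():
        for name, idx in entries:
            params[name].view(-1)[idx] = 0.0

def select_relevant(model, sorted_entries, eval_fn, frac=0.05, tol=0.01):
    """Find (approximately) the smallest number k of top-|gradient|
    entries whose pruning drives the restricted-class accuracy near
    zero. sorted_entries: (name, flat index) pairs sorted by absolute
    gradient, descending; eval_fn(model) returns the accuracy of the
    model on the restricted class."""
    n_total = len(sorted_entries)
    lo, hi = 1, n_total
    k = max(1, int(frac * n_total))   # start with the first 5%
    best = n_total
    while lo <= hi:
        probe = copy.deepcopy(model)  # original weights stay intact
        zero_out(probe, sorted_entries[:k])
        if eval_fn(probe) < tol:      # near-zero accuracy: try fewer
            best, hi = k, k - 1
        else:                         # still accurate: need more
            lo = k + 1
        if lo <= hi:
            k = max(lo, min(hi, (lo + hi) // 2))
    return sorted_entries[:best]
```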
11.2 BASELINES
We propose several baseline models for the RCRMR-LD problem and compare our proposed approach with them. The baseline approaches are defined as follows:
Original model: It refers to the original model that is trained on the complete training set containing all the training examples from both the excluded and non-excluded classes. It represents the model that has not been modified by any technique to remove the excluded class information.
Baseline 1 - Weight Deletion (WD): It refers to the original model with a modified fully-connected classification layer. Specifically, we remove the weights corresponding to the excluded classes in the fully-connected classification layer so that it cannot classify the excluded classes (see the sketch after this list of baselines).
Baseline 2 - Training from Scratch on Limited Non-Restricted Class data (TSLNRC): In this baseline, we train a new model from scratch using the limited training examples of only the non-excluded classes. It uses the same training schedule as the original model and only uses the classification loss for training the model.
Baseline 3 - Training from Scratch on Limited Non-Restricted Class data with KD (TSLNRC-KD): This baseline is the same as baseline 2, but in addition to the classification loss, it also uses a knowledge distillation loss to ensure that the non-excluded class logits of the model (student) match those of the original model (teacher).
Baseline 4 - Training of Original model on Limited Non-Restricted Class data (TOLNRC): This baseline is the same as baseline 2, but the model is initialized with the weights of the original model instead of randomly initializing it.
Baseline 5 - Training of Original model on Limited Non-Restricted Class data with KD (TOLNRC-KD): This baseline is the same as baseline 4, but in addition to the classification loss, it also uses a knowledge distillation loss.
Baseline 6 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class (FOLMRCSC): In this baseline approach, we first replace all the excluded class labels in the limited training data with a new single excluded class label and then fine-tune the original model for a few epochs on the limited training data of both the excluded and remaining classes. In the case of the examples from the excluded classes, the model is trained to predict the new single excluded class. In the case of the examples from the remaining classes, the model is trained to predict the corresponding non-excluded classes.
Baseline 7 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class with KD (FOLMRCSC-KD): This baseline is the same as baseline 6, but in
addition to the classification loss, it also uses a knowledge distillation loss to ensure that the non-excluded class logits of the model (student) match those of the original model (teacher).
Baseline 8 - Fine-tuning of Original model on Limited Non-Restricted Class data (FOLNRC): In this baseline approach, we fine-tune the original model for a few epochs on the limited training data of the non-excluded/remaining classes. The model is trained to predict the corresponding non-excluded classes of the training examples.
Baseline 9 - Fine-tuning of Original model on Limited Non-Restricted Class data with KD (FOLNRC-KD): This baseline is the same as baseline 8, but in addition to the classification loss, it also uses a knowledge distillation loss.
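As a concrete illustration of Baseline 1 (weight deletion) above, here is a minimal sketch assuming the classifier is a standard torch.nn.Linear layer; the helper name is ours.

```python
import torch

def delete_excluded_weights(fc, excluded):
    """Baseline 1 (WD): drop the classifier rows (and biases) of the
    excluded classes so the model can no longer predict them."""
    keep = [i for i in range(fc.out_features) if i not in set(excluded)]
    new_fc = torch.nn.Linear(fc.in_features, len(keep),
                             bias=fc.bias is not None)
    with torch.no_grad():
        new_fc.weight.copy_(fc.weight[keep])
        if fc.bias is not None:
            new_fc.bias.copy_(fc.bias[keep])
    return new_fc
```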
11.3 DATASETS
For the RCRMR-LD problem setting, we modify the CIFAR-100 (Krizhevsky et al., 2009), CUB (Wah et al., 2011) and ImageNet-1k (Russakovsky et al., 2015) datasets. In order to simulate the RCRMR-LD problem setting with limited training data, we choose the last 20 classes of the CIFAR-100 dataset as the excluded classes and take only 10% of the training images of each class. Similarly, we choose the last 20 classes of the CUB dataset as the excluded classes with only 3 training images per class. For ImageNet-1k, we choose the last 100 classes as the excluded classes with 5% of the training images to simulate the limited data available for this problem setting.
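A sketch of how the CIFAR-100 split used here could be constructed is shown below; random subsampling of the 10% of images per class is our assumption, as is the helper name.

```python
import numpy as np

def limited_split(labels, n_classes=100, n_excluded=20, frac=0.10, seed=0):
    """Return (kept training indices, excluded-class ids) for the
    limited-data RCRMR-LD setting on CIFAR-100: the last 20 classes
    are restricted, and only 10% of the training images of every
    class (restricted and remaining) are kept."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    excluded = list(range(n_classes - n_excluded, n_classes))
    kept = []
    for c in range(n_classes):
        idx = np.where(labels == c)[0]
        n_keep = max(1, int(frac * len(idx)))
        kept.extend(rng.choice(idx, size=n_keep, replace=False))
    return np.array(sorted(kept)), excluded
```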
11.4 IMPLEMENTATION DETAILS
In this section, we provide all the details required to reproduce our experimental results. We use the ResNet-20 (He et al., 2016), ResNet-56, and ResNet-164 architectures for the experiments on the CIFAR-100 dataset. We use the standard data augmentation methods of random cropping to a size of 32 × 32 (zero-padded on each side with four pixels before taking a random crop) and random horizontal flipping, which is a standard practice for training a model on CIFAR-100. In order to obtain the original and FDR models for the CIFAR-100 dataset, we train the network for 300 epochs with a mini-batch size of 128 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 5 after every 50 epochs. For the CIFAR-100 experiments with ERwP using the ResNet-20, ResNet-56, and ResNet-164 architectures, we take learning rate = 1e-4, β = 10, and optimize the network for 10 epochs. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method. For the experiments on the ImageNet dataset, we use the ResNet-18, ResNet-50, and MobileNet-V2 architectures. We use the standard data augmentation methods of random cropping to a size of 224 × 224 and random horizontal flipping, which is a standard practice for training a model on ImageNet-1k. In order to obtain the original and FDR models for the ImageNet dataset, we train the network for 100 epochs with a mini-batch size of 256 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 10 after every 30 epochs. For evaluation, the validation images are subjected to center cropping of size 224 × 224. For the ImageNet-1k experiments (5% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 9e-5 using β = 200. For the ERwP experiments using the ResNet-18 architecture, we optimize the network for 10 epochs using β = 200 with an initial learning rate of 1.1e-4 and a learning rate of 1.1e-5 from the third epoch onward. In the case of the ERwP experiments with the MobileNet-V2 architecture, we optimize the network for 10 epochs using β = 400 with an initial learning rate of 1.5e-4 and a learning rate of 1.5e-5 from the third epoch onward. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method. For the experiments on the CUB-200 dataset, we use the ResNet-50 architecture pre-trained on the ImageNet dataset. In order to obtain the original and FDR models for the CUB-200 dataset, we train the network for 50 epochs with a mini-batch size of 64 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-3. We choose the initial learning rate as 1e-2, and we decrease it by a factor of 10 after epochs 30 and 40. For the CUB-200 experiments (3 images per class, i.e., 10% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 1e-4 using β = 10. Since the available limited training data is only 10% of the entire CUB-200 dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method.
In our proposed approach, we use κ = 2 for all the experiments. We use a popular PyTorch implementation [1] for performing knowledge distillation. We run all the experiments 3 times (using different random seeds) and report the average accuracy. We perform all the experiments using the PyTorch framework version 1.6.0 (Paszke et al., 2017) and Python 3.0. We use 4 GeForce GTX 1080 Ti graphics processing units for our experiments.
11.5 RCRMR-LD PROBLEM IN CUB-200 CLASSIFICATION
Table 6 reports the experimental results for different approaches to the RCRMR-LD problem over the CUB dataset using the ResNet-50 architecture. Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to that of the original model even though we use only 3 images per class for optimizing the model. It achieves close to 0% forgetting accuracy FAe and an FPAe that is significantly lower than that of the original model, by an absolute margin of 35.80%. Similar to the CIFAR-100 experiments, our ERwP approach outperforms all the baseline approaches. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CUB dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method that is trained on the full training data for 50 epochs.
11.6 ABLATION EXPERIMENTS FOR β AND κ
We perform ablation experiments to identify the most suitable values for the hyper-parameters β and κ for our proposed ERwP. The ablation results in Tables 7, 8, validate our choice of hyper-parameter values considering the forgetting accuracy and the constraint accuracy of the resulting model.
11.7 EFFECT OF DIFFERENT DATA AUGMENTATIONS ON THE IDENTIFICATION OF CLASS RELEVANT MODEL PARAMETERS
We perform experiments to verify our approach of identifying the highly relevant parameters for any restricted class using various augmentation techniques (grayscale, vertical flip, rotation, and random affine augmentations). We chose the same restricted class of CIFAR-100 and used the ResNet-56 network for all the experiments. The results in Fig. 6 indicate that for all the compared data augmentation approaches, pruning/zeroing out the high relevance parameters obtained using our approach results in a huge drop in the forgetting accuracy of the excluded class. Further, zeroing out the low relevance parameters has a minor impact on the forgetting accuracy of the excluded class.

Table 7: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes using our proposed ERwP with different values of β.

Table 8: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes using our proposed ERwP with different values of κ.

[1] https://github.com/peterliht/knowledge-distillation-pytorch/blob/master/model/net.py
11.8 ABLATION EXPERIMENTS ON THE RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments with ERwP to check whether only 25% or 50% of the restricted class relevant parameters of each layer identified using our proposed procedure can be used for ERwP. We run each of these experiments for the same number of epochs on the CIFAR-100 dataset with the ResNet-56 network. However, we observed that the final FPAe falls from 68.65% to only 60.35% and 53.7%, respectively, for 25% and 50% of the restricted class relevant parameters of each layer, as compared to 47.84% when using all the restricted class relevant parameters per layer identified using our approach. The good performance of our approach is more evident in light of the performance of the FDR model, which achieves an FPAe of 45.40%. We provide this result as a reference to demonstrate that the 47.84% FPAe is due to the generalization power of the model and not due to the restricted class information in the model. This shows that our approach effectively identifies the class-relevant parameters of the model for a given class.
11.9 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 7 for the CIFAR-100 dataset with 20 excluded classes using the ResNet-20 and ResNet-56 architectures. For both architectures, we observe that as the training progresses, ERwP maintains the constraint accuracy close to that of the original model and forces the forgetting accuracy to drop to 0%. ERwP also forces the forgetting prototype accuracy to keep dropping and makes it similar to the FDR model.
11.10 FINETUNING RESTRICTED CLASS RELEVANT PARAMETERS ON REMAINING CLASSES
In our experimental results, we demonstrated how fine-tuning the model on the limited training data of the non-excluded classes cannot sufficiently remove the excluded class information from the model. We also perform an ablation experiment to demonstrate that fine-tuning only the restricted class relevant parameters using the limited training data of the non-excluded classes is also not effective in sufficiently removing the excluded class information from the model. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture. We observe that the constraint accuracy CAne, the forgetting accuracy FAe, and the forgetting prototype accuracy FPAe remain almost the same even in this case. Therefore, fine-tuning only on the remaining class data cannot sufficiently remove the excluded class information from the model representations.
11.11 EFFECT OF USING THE PROPOSED ERWP APPROACH WHEN THE ENTIRE DATASET IS AVAILABLE
We perform ablation experiments to demonstrate the performance of our proposed ERwP approach when the entire training data is available. We perform these experiments on the CIFAR-100 dataset using ResNet-20 and ResNet-56. We observe experimentally that for both the ResNet-20 and ResNet-56 experiments using ERwP, the forgetting accuracy FAe is 0% and the constraint accuracy CAne matches that of the original model. Further, the gap between the forgetting prototype accuracy FPAe of ERwP and the FDR model reduces from 3.86% (for limited data) to 2.79% for ResNet-20. Similarly, the gap reduces from 2.44% (for limited data) to 1.65% for ResNet-56. However, ERwP requires only 2-3 epochs of optimization (∼100-150× faster than the FDR model) to achieve this performance when trained on the entire dataset. This makes it significantly faster than any approach that trains on the entire dataset.
11.12 QUALITATIVE ANALYSIS
In order to analyze the effect of removing the excluded class information from the model using our proposed ERwP approach, we study the class activation map visualizations (Selvaraju et al., 2017) of the model before and after applying ERwP. We observe in Fig. 8 that for the images from the excluded classes, the model's region of attention gets scattered after applying ERwP, unlike the images from the remaining classes.
Summary Of The Paper
In this paper, the authors present a new method to remove information about specific classes from a trained model without reducing the performance on the remaining classes. After the information is removed, the model should not be able to identify the class anymore. Instead of retraining the complete model from scratch without the restricted classes, the presented method only needs a few examples of the restricted classes and the remaining classes. In terms of speed, the presented method is ~200 times faster on ImageNet than training a new model without the restricted classes. Furthermore, they present a method for identifying model parameters that are mainly relevant to the restricted classes. The evaluation of the model is performed on the CIFAR-100, ImageNet-1k, and CUB-200 datasets. For a detailed comparison, eight baseline methods were designed and evaluated. An ablation study is performed on the class relevant parameters and the number of classes that are excluded. The presented method achieves accuracy close to the original model on the remaining classes. Also, the forgetting prototype accuracy is close to the model trained only on the remaining classes.
Review
Introduction: The paper is well written, and the evaluation is very detailed. It is an interesting idea to remove class information from the model with a limited amount of data. However, from the description in the paper, it is not clear why this is a real-world problem. It would be beneficial to rate the importance of this application if the authors had provided sources for such cases or a more detailed description of a specific scenario.
Method: The re-training procedure of the model with only a limited amount of training data is very detailed. The description of the identification of the relevant parameters for the restricted classes is missing some details. For example, it is not defined what other transformation besides the grayscale transformation is used. If other transformations are optional, it would be good to know what type of transformations are used in the experiments. Furthermore, it is not clear how the parameters with the highest gradients are selected. Is a fixed threshold used? What is the minimum number of parameters of each layer that are selected? Is this a fixed number for each class? Does it depend on the number of excluded classes?
Evaluation: The evaluation is very detailed, with eight baseline methods to show the performance of the presented method. However, the results of the FDR model are not shown in Table 1, only mentioned in the text. Is there a reason for this? Adding the results of the FDR to Table 1 would be beneficial. It would be very interesting to see how the parameter selection influences the accuracy of the model. Unfortunately, this is not part of the evaluation.
In this work, we present the restricted category removal from model representations with limited data (RCRMR-LD) problem setting, in which a deep learning model Mo trained on a specific dataset has to be modified to exclude information regarding a set of restricted/excluded classes from all layers of the deep learning model without affecting its identification power for the remaining classes (see Fig. 1). The classes that need to be excluded are referred to as the restricted/excluded classes. Let {Ce1 , Ce2 , ..., CeNe} be the restricted/excluded classes, where Ne refers to the number of excluded classes. The remaining classes of the dataset are the remaining/non-excluded classes. Let {Cne1 , Cne2 , ..., CneNne} be the non-excluded classes, whereNne refers to the number of remaining/nonexcluded classes. Additionally, we only have access to a limited amount of training data for the restricted classes and the remaining classes, for carrying out this process. Therefore, any approach for addressing this problem can only utilize this limited training data.
3 RCRMR-LD PROBLEM IN REAL WORLD SCENARIOS
A real-world scenario where our proposed RCRMR-LD problem can arise is federated learning (McMahan et al., 2017). In the federated learning setting, there are multiple collaborators that have a part of the training data stored locally, and a model is trained collaboratively using these private data without sharing or collating the data due to privacy concerns. Suppose organization A has a part of the training data, and there are other collaborators that have other parts of the training data for the same classes. Organization A collaboratively trains a model with other collaborators using federated learning. After the model has been trained, a few classes may become restricted in the future due to some ethical or privacy concerns, and these classes should be removed from the model. However, the other collaborators may not be available or may charge a huge amount of money for collaborating again to train a fresh model from scratch. In this case, organization A does not have access to the full training data of the non-excluded/remaining classes that it can use to re-train a model from scratch in order to exclude the restricted classes information. This clearly shows that the RCRMR-LD problem is possible in federated learning.
Another real-world scenario is the incremental learning setting (Rebuffi et al., 2017; Kemker & Kanan, 2018), where the model receives training data in the form of sequentially arriving tasks. Each task contains a new set of classes. During a training session t, the model receives the task t for training and cannot access the full data of the previous tasks. Instead, the model has access to very few exemplars of the classes in the previous tasks. Suppose before training a model on training session t, it is noticed that some classes from a previous task (< t) have to be removed from the model since those classes have become restricted due to privacy or ethical concerns. In this case, only a limited number of exemplars are available for all these previous classes (restricted and remaining). This demonstrates that the RCRMR-LD problem is also possible in the incremental learning setting. We experimentally demonstrate in Sec. 8.3, how our approach can address the RCRMR-LD problem in the incremental learning setting.
4 PROPOSED METHOD
Let, B refer to a mini-batch (of size S) from the available limited training data, and B contains training datapoints from the restricted/excluded classes ({(xei , yei )|(xe1, ye1), ..., (xeSe , y e Se
)}) and from the remaining/non-excluded classes ({(xnej , ynej )|(xne1 , yne1 ), ..., (xneSne , y ne Sne
)}). Here, (xei , yei ) refers to a training datapoint from the excluded classes where xei is an image, y e i is the corresponding label and yei ∈ {Ce1 , Ce2 , ..., CeNe}. (x ne j , y ne j ) refers to a training datapoint from the non-excluded classes where xnej is an image, y ne j is the corresponding label and y ne j ∈ {Cne1 , Cne2 , ..., CneNne}. Here, Se and Sne refer to the number of training examples in the mini-batch from the excluded and non-excluded classes, respectively, such that S = Se + Sne. Ne and Nne refer to the number of excluded and non-excluded classes, respectively. Let M refer to the deep learning model being trained using our approach and Mo is the original trained deep learning model.
In a trained model, some of the parameters may be highly relevant to the restricted classes, and the performance of the model on the restricted classes is mainly dependent on such highly relevant parameters. Therefore, in our approach, we focus on removing the excluded class information from these restricted class relevant parameters. Since the model is trained on all the classes jointly, the parameters are shared across the different classes. Therefore identifying these class-specific relevant parameters is very difficult. Let us consider a model that is trained on color images of a class. If we now train it on grayscale images of the class, then the model has to learn to identify these new images. In order to do so, the parameters relevant to that class will receive large gradient updates as compared to the other parameters (see Sec. 9.1). We propose a novel approach for identifying the relevant parameters for the restricted classes using this idea. For each restricted class, we choose the training images belonging to that class from the limited available training data. Next, we apply a grayscale data augmentation technique/transformation f to these images so that these images become different from the images that the original model was earlier trained on (assuming that the original model has not been trained on grayscale images). We can also use other data augmentation techniques that are not seen during the training process of the original model and that do not change the class of the image (refer to Sec. 11.7 in the appendix). Next, we combine the predictions for each training image into a single average prediction and perform backpropagation. During the backpropagation, we study the gradients for all the parameters in each layer of the model. Accordingly, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters will result in the maximum degradation of model performance on that restricted class. We provide a detailed description of the process for identifying the restricted class relevant parameters in Sec. 11.1 of the appendix. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters Θexrel (see Fig. 2). Please note that we use this process only to identify Θexrel, and we do not update the model parameters during this step.
Pruning the relevant parameters for a restricted class can severely impact the performance of the model for that class (see Sec. 9.1). However, this may also degrade the performance of the model on the non-excluded classes because the parameters are shared across multiple classes. Therefore, we cannot address the RCRMR-LD problem by pruning the relevant parameters of the excluded classes. Finetuning these parameters on the limited remaining class data will also not be able to sufficiently remove the restricted class information from the model (see Sec. 11.10 in appendix). Based on this, we propose to address the RCRMR-LD problem by optimizing the relevant parameters of the restricted classes to remove the restricted class information from them and to reuse them for the remaining classes.
After identifying the restricted class relevant parameters, our ERwP approach uses a classification loss based on the cross-entropy loss function to optimize the restricted class relevant parameters of the model on each mini-batch (see Fig. 3). Since gradient ascent maximizes a loss function and thereby encourages the model to perform badly on the given input, we apply gradient ascent on the classification loss for the limited restricted class training examples to remove the information regarding the restricted classes from $\Theta_{exrel}$. In practice, we achieve this by multiplying the classification loss for the augmented training examples from the excluded classes by -1. We also optimize $\Theta_{exrel}$ using gradient descent on the classification loss for the limited remaining class training examples, in order to reuse these parameters for the remaining classes. We validate this approach through various ablation experiments, as shown in Sec. 9.2. The classification loss for the examples from the
excluded and non-excluded classes and the overall classification loss for each mini-batch are defined as follows.
$$\mathcal{L}^e_c = \sum_{i=1}^{S_e} -1 \cdot \ell(y^e_i, y^{e*}_i) \quad (1)$$

$$\mathcal{L}^{ne}_c = \sum_{j=1}^{S_{ne}} \ell(y^{ne}_j, y^{ne*}_j) \quad (2)$$

$$\mathcal{L}_c = \frac{1}{S}\left(\mathcal{L}^e_c + \mathcal{L}^{ne}_c\right) \quad (3)$$

where $y^{e*}_i$ and $y^{ne*}_j$ refer to the predicted class labels for $x^e_i$ and $x^{ne}_j$, respectively, and $\ell(\cdot, \cdot)$ refers to the cross-entropy loss function. $\mathcal{L}^e_c$ and $\mathcal{L}^{ne}_c$ refer to the classification losses for the examples from the excluded and non-excluded classes in the mini-batch, respectively, and $\mathcal{L}_c$ refers to the overall classification loss for each mini-batch.
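To make the sign-flipped objective concrete, the following is a minimal PyTorch sketch of Eqs. (1)-(3); the function and tensor names are our own illustrative choices, not the authors' released code.

```python
import torch.nn.functional as F

def classification_loss(model, x_e, y_e, x_ne, y_ne):
    """Mini-batch classification loss of Eqs. (1)-(3).

    (x_e, y_e): limited augmented examples from the excluded classes.
    (x_ne, y_ne): limited examples from the non-excluded classes.
    """
    # Gradient ascent on the excluded classes: multiply the loss by -1.
    loss_e = -F.cross_entropy(model(x_e), y_e, reduction="sum")    # Eq. (1)
    # Ordinary gradient descent on the remaining classes.
    loss_ne = F.cross_entropy(model(x_ne), y_ne, reduction="sum")  # Eq. (2)
    S = x_e.size(0) + x_ne.size(0)
    return (loss_e + loss_ne) / S                                  # Eq. (3)
```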
Since all the network parameters were jointly trained on all the classes (restricted and remaining), the restricted class relevant parameters also contain information relevant to the remaining classes. Applying the above process alone will still harm the model's predictive power for the non-excluded classes (as shown in Sec. 9.2, Table 3). This is because the gradient ascent optimization strategy will also erase some of the relevant information regarding the remaining classes. Further, applying $\mathcal{L}^{ne}_c$ to the limited training examples of the remaining classes will lead to overfitting and will not be effective enough to fully preserve the model performance on the remaining classes. In order to ensure that the model's predictive power for the non-excluded classes does not change, we use a knowledge distillation-based regularization loss. Knowledge distillation (Hinton et al., 2014) ensures that the predictive power of the teacher network is replicated in the student network. In this problem setting, we want the final model to replicate the predictive power of the original model for the remaining classes. Therefore, given any training example, we use the knowledge distillation-based regularization loss to ensure that the output logits produced by the model corresponding to only the non-excluded classes remain the same as those produced by the original model. We apply the knowledge distillation loss to the limited training examples from both the excluded and remaining classes, to preserve the non-excluded class logits of the model for any input image. We validate this regularization loss through ablation experiments, as shown in Table 3. We use the original model $M_o$ (before applying ERwP) as the teacher network and the current model $M$ being processed by ERwP as the student network for the knowledge distillation process. Please note that the optimization for this loss is also carried out only for the restricted class relevant parameters of the model. Let $KD$ refer to the knowledge distillation loss function. It computes the Kullback-Leibler (KL) divergence between the soft predictions of the teacher and the student networks and can be defined as follows:
$$KD(p_s, p_t) = KL(\sigma(p_s), \sigma(p_t)) \quad (4)$$

where $\sigma(\cdot)$ refers to the softmax activation function that converts the logit $a_i$ for each class $i$ into a probability by comparing $a_i$ with the logits of the other classes $a_j$, i.e., $\sigma(a_i) = \frac{\exp(a_i/\kappa)}{\sum_j \exp(a_j/\kappa)}$. $\kappa$ refers to the temperature (Hinton et al., 2014) and $KL$ refers to the KL-divergence function. $p_s$ and $p_t$ refer to the logits produced by the student network and the teacher network, respectively.
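As an illustration, a minimal PyTorch sketch of Eq. (4) follows, assuming the standard temperature-scaled formulation; whether an additional rescaling by $\kappa^2$ (as in Hinton et al., 2014) is applied is not stated in the text, so we omit it here.

```python
import torch.nn.functional as F

def kd_loss(p_s, p_t, kappa=2.0):
    """Temperature-scaled KL divergence of Eq. (4).

    p_s, p_t: logits of the student (current) and teacher (original) model.
    """
    log_q_s = F.log_softmax(p_s / kappa, dim=1)
    q_t = F.softmax(p_t / kappa, dim=1)
    # 'batchmean' averages the KL divergence over the mini-batch.
    return F.kl_div(log_q_s, q_t, reduction="batchmean")
```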
The knowledge distillation-based regularization losses in our approach are defined as follows.
$$\mathcal{L}^e_{kd} = \sum_{i=1}^{S_e} KD(M(x^e_i)[C^{ne}], M_o(x^e_i)[C^{ne}]) \quad (5)$$

$$\mathcal{L}^{ne}_{kd} = \sum_{j=1}^{S_{ne}} KD(M(x^{ne}_j)[C^{ne}], M_o(x^{ne}_j)[C^{ne}]) \quad (6)$$

$$\mathcal{L}_{kd} = \frac{1}{S}\left(\mathcal{L}^e_{kd} + \mathcal{L}^{ne}_{kd}\right) \quad (7)$$

where $M(\cdot)[C^{ne}]$ and $M_o(\cdot)[C^{ne}]$ refer to the output logits corresponding to the remaining classes produced by $M$ and $M_o$, respectively, for an input that can be either $x^e_i$ or $x^{ne}_j$. $\mathcal{L}^e_{kd}$ and $\mathcal{L}^{ne}_{kd}$ refer to the knowledge distillation-based regularization losses for the examples from the excluded and non-excluded classes, respectively, and $\mathcal{L}_{kd}$ refers to the overall knowledge distillation-based regularization loss for each mini-batch.
The total loss $\mathcal{L}_{erwp}$ of our approach for each mini-batch is defined as follows:

$$\mathcal{L}_{erwp} = \mathcal{L}_c + \beta \mathcal{L}_{kd} \quad (8)$$

where $\beta$ is a hyper-parameter that controls the contribution of the knowledge distillation-based regularization loss. We use this loss to fine-tune the model for very few epochs.
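To tie Eqs. (1)-(8) together, the following is a minimal PyTorch sketch of one ERwP mini-batch update under our reading of the method; names such as erwp_step and relevant_mask are illustrative assumptions rather than the authors' implementation, and kd_loss is the sketch given after Eq. (4).

```python
import torch
import torch.nn.functional as F

def erwp_step(model, model_o, optimizer, relevant_mask, batch, C_ne, beta):
    """One ERwP update (Eq. 8); only Θexrel is allowed to change.

    relevant_mask: per-parameter binary masks (1 for Θexrel entries, else 0).
    C_ne: tensor of non-excluded class indices. model_o is the frozen
    original model used as the distillation teacher.
    """
    (x_e, y_e), (x_ne, y_ne) = batch
    x = torch.cat([x_e, x_ne]); S_e, S = x_e.size(0), x.size(0)
    logits = model(x)
    with torch.no_grad():
        logits_o = model_o(x)
    # Classification loss: ascent on excluded, descent on non-excluded.
    L_c = (-F.cross_entropy(logits[:S_e], y_e, reduction="sum")
           + F.cross_entropy(logits[S_e:], y_ne, reduction="sum")) / S
    # Distillation on the non-excluded logits only (Eqs. 5-7),
    # averaged over the mini-batch.
    L_kd = kd_loss(logits[:, C_ne], logits_o[:, C_ne])
    loss = L_c + beta * L_kd                                      # Eq. (8)
    optimizer.zero_grad()
    loss.backward()
    # Zero the gradients of all parameters outside Θexrel.
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(relevant_mask[name])
    optimizer.step()
    return loss.item()
```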
5 RELATED WORK
Pruning involves removing redundant and unimportant weights (Carreira-Perpinán & Idelbayev, 2018; Dong et al., 2017; Guo et al., 2016; Han et al., 2015a;b; Tung & Mori, 2018; Zhang et al., 2018) or filters (He et al., 2019a; 2018; 2019b;c; Li et al., 2016) from a deep learning model without affecting the model performance. In contrast, our approach identifies class-specific important parameters, and therefore, these pruning techniques cannot be applied in our setting. In the incremental learning setting (Douillard et al., 2020; Hou et al., 2019; Tao et al., 2020; Yu et al., 2020; Liu et al., 2021), the objective is to preserve the predictive power of the model for previously seen classes while learning a new set of classes. In contrast, our proposed RCRMR-LD problem setting involves removing the information regarding specific classes from the pre-trained model while preserving the capacity of the model to identify the remaining classes. Privacy-preserving deep learning (Nan & Tao, 2020; Louizos et al., 2015; Edwards & Storkey, 2015; Hamm, 2017) involves learning representations that incorporate features from the data relevant to the given task and ignore sensitive information (such as the identity of a person). In contrast, the objective of the RCRMR-LD problem setting is to achieve class-level privacy, i.e., if a class is declared as private/restricted, then all information about this class should be removed from the model trained on it, without affecting its ability to identify the remaining classes. The authors in (Ginart et al., 2019) propose an approach to delete individual data points from trained machine learning models, such as a clustering model. In contrast, RCRMR-LD involves removing the information of a set of classes from all layers of a deep learning model. Therefore, the approach proposed in (Ginart et al., 2019) cannot be applied to the RCRMR-LD problem setting.
6 BASELINES
We propose 9 baseline models for the RCRMR-LD problem and compare our proposed approach with them. The baseline 1 involves deleting the weights of the fully-connected classification layer corresponding to the excluded classes. Baselines 2, 3, 4, 5 involve training the model on the limited training data of the remaining classes. Baselines 6, 7, 8, 9 involve fine-tuning the model on the available limited training data. Please refer to Sec. 11.2 in the appendix for details about the baselines.
7 PERFORMANCE METRICS
In the RCRMR-LD problem setting, we propose three performance metrics to validate the performance of any method: forgetting accuracy (FAe), forgetting prototype accuracy (FPAe), and constraint accuracy (CAne). The forgetting accuracy FAe refers to the fully-connected classification layer accuracy of the model for the excluded classes. The forgetting prototype accuracy FPAe refers to the nearest class prototype-based classifier accuracy of the model for the excluded classes. The constraint accuracy CAne refers to the fully-connected classification layer accuracy of the model for the non-excluded classes.
In order to judge any approach on the basis of these metrics, we proceed in the following order. First, we analyze the constraint accuracy (CAne) of the model produced by the given approach to verify whether the approach has preserved the prediction power of the model for the non-excluded classes. The CAne of the model should be close to that of the original model. If this condition is not satisfied, then the approach is not suitable for this problem, and we need not analyze the other metrics. This is because if the constraint accuracy is not maintained, then the overall usability of the model is hurt significantly. Next, we analyze the forgetting accuracy (FAe) of the model to verify whether the excluded class information has been removed from the model at the classifier level. The FAe of the model should be as close to 0% as possible. Finally, we analyze the forgetting prototype accuracy (FPAe) of the model to verify whether the excluded class information has been removed from the model at the feature level. The FPAe of the model should be significantly less than that of the original model. However, the FPAe will not become zero, since any trained model learns to extract meaningful features, which help the nearest class prototype-based classifier achieve some non-negligible accuracy even on the excluded classes. Therefore, for a better analysis of the level of forgetting of the excluded classes at the feature level, we compare the FPAe of the model with the FPAe of the full data retraining (FDR) model. The
FDR model is a good candidate for this analysis since it has not been trained on the excluded classes (only trained on the complete dataset of the remaining classes), and it still achieves a non-negligible performance of the excluded classes (see Sec 8.1). However, it should be noted that this comparison is only for analysis and the comparison is not fair since the FDR model needs to train on the entire dataset (except the excluded classes).
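For concreteness, a minimal sketch of a nearest class prototype-based classifier, as used for FPAe, follows; the paper does not spell out the classifier, so using per-class mean features as prototypes is our assumption, not the authors' exact procedure.

```python
import torch

@torch.no_grad()
def prototype_accuracy(feat_fn, loader, prototypes, proto_labels):
    """Nearest class prototype-based classifier accuracy (as in FPAe).

    feat_fn: maps a batch of images to feature vectors.
    prototypes: [C, D] per-class mean feature vectors.
    proto_labels: [C] class label of each prototype.
    """
    correct, total = 0, 0
    for x, y in loader:
        f = feat_fn(x)                          # [B, D] features
        d = torch.cdist(f, prototypes)          # [B, C] distances
        pred = proto_labels[d.argmin(dim=1)]    # nearest prototype label
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```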
8 EXPERIMENTS
We have reported the experimental results for the CIFAR-100 and ImageNet-1k datasets in this section. We have provided the results on the CUB-200 dataset in the appendix. Please refer to the appendix for the details regarding the dataset and implementation.
8.1 CIFAR-100 RESULTS
We report the performance of different baselines and our proposed ERwP method on the RCRMR-LD problem using the CIFAR-100 dataset with different architectures in Table 1. We observe that the baseline 1 (weight deletion) achieves high constraint accuracy CAne and 0% forgetting accuracy FAe. But its forgetting prototype accuracy FPAe remains the same as the original model for all the three architectures, i.e., ResNet-20/56/164. Therefore, baseline 1 fails to remove the excluded class information from the model at the feature level. Baseline 2 is not able to preserve the constraint accuracy CAne even though it performs full training on the limited excluded class data. Baseline 3 achieves higher CAne than baseline 2, but the constraint accuracy is still too low. Baselines 4 and 5 demonstrate significantly better constraint accuracy than baseline 2 and 3, but their constraint accuracy is still significantly lower than the original model (except baseline 5 for ResNet-20). The baseline 5 with ResNet-20 maintains the constraint accuracy and achieves 0% forgetting accuracy FAe but its FPAe is still significantly high and, therefore, is unable to remove the excluded class information from the model at the feature level. The fine-tuning based baselines 6 and 7 are able to significantly reduce the forgetting accuracy FAe but their constraint accuracy CAne drops significantly. The fine-tuning based baselines 8 and 9 only finetune the model on the limited remaining class data and as a result they are not able to sufficiently reduce either the forgetting accuracy FAe or the forgetting prototype accuracy FPAe.
Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to the original model for all three architectures. It achieves close to 0% FAe. Further, it achieves a significantly lower FPAe than the original model. Specifically, the FPAe of our approach is significantly lower than that of the original model by absolute margins of 17.19%, 20.81%, and 20.17% for the ResNet-20, ResNet-56, and ResNet-164 architectures, respectively. The FPAe for the FDR model is 44.20%, 45.40% and 51.85% for the ResNet-20, ResNet-56 and ResNet-164 architectures, respectively. Therefore, the FPAe of our approach is within absolute margins of 3.86%, 2.44% and 4.38% of that of the FDR model for the ResNet-20, ResNet-56 and ResNet-164 architectures, respectively. Our ERwP approach thus makes the model behave similarly to the FDR model even though it was trained on only limited data from the excluded and remaining classes. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method, which is trained on the full training data for 300 epochs.

Table 2: Results on the ImageNet-1k dataset (forgetting accuracy FAe and constraint accuracy CAne, Top-1 and Top-5).

Model    Method      Top-1 FAe   Top-1 CAne   Top-5 FAe   Top-5 CAne
Res-18   Original    69.76%      69.76%       89.58%      89.02%
Res-18   ERwP        0.28%       69.13%       1.01%       88.93%
Res-50   Original    76.30%      76.11%       93.04%      92.84%
Res-50   ERwP        0.25%       75.45%       2.55%       92.39%
Mob-V2   Original    72.38%      70.83%       91.28%      90.18%
Mob-V2   ERwP        0.17%       70.81%       0.81%       89.95%
8.2 IMAGENET RESULTS
Table 2 reports the experimental results for different approaches to the RCRMR-LD problem over the ImageNet-1k dataset using the ResNet-18, ResNet-50 and MobileNet V2 architectures. Our proposed ERwP approach achieves a top-1 constraint accuracy CAne that is very close to that of the original model, within absolute margins of 0.63%, 0.66% and 0.02% for the ResNet-18, ResNet-50 and MobileNet V2 architectures, respectively. It achieves close to 0% top-1 forgetting accuracy FAe for all three architectures. Therefore, our approach performs well even on the large-scale ImageNet-1k dataset. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method that is trained on the full data for 100 epochs.
8.3 RCRMR-LD PROBLEM IN INCREMENTAL LEARNING
In this section, we experimentally demonstrate how the RCRMR-LD problem in the incremental learning setting is addressed using our proposed approach. We consider an incremental learning setting on the CIFAR-100 dataset in which each task contains 20 classes. We use the BIC (Wu et al., 2019) method for incremental learning on this dataset. The exemplar memory size is fixed at 2000, as per the setting in (Wu et al., 2019). In this setting, there are 5 tasks. Let us assume that the model (M4) has already been trained on 4 tasks (80 classes), and we are in the fifth training session. Suppose, at this stage, it is noticed that all the classes in the first task (20 classes) have become restricted and need to be removed before the model is trained on task 5. However, we only have a limited number of exemplars for the 80 classes seen till now, i.e., 2000/80 = 25 per class. We apply our proposed approach to the model obtained after training session 4, and the results are reported in Table 4. The results indicate that our approach modifies the model obtained after session 4 such that the forgetting accuracy of the restricted classes approaches 0% and the constraint accuracy of the remaining classes is not affected. In fact, the modified model behaves as if it was never trained on the classes from task 1. We can now perform the incremental training of the modified model on task 5.
9 ABLATION STUDIES
9.1 RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments to verify our approach of identifying the highly relevant parameters for any restricted class. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture and report the forgetting accuracy FAe for the randomly chosen excluded class. Please note that in this case, only the chosen class of CIFAR-100 is the restricted class and all the remaining
classes constitute the non-excluded classes. In order to show the effectiveness of our approach, we sort the absolute gradients of the parameters in the model (obtained through backpropagation for the excluded class augmented images) and choose a set of high relevance and low relevance parameters. We then prune/zero out these parameters and record the forgetting accuracy. Fig. 4 demonstrates that as we zero out the high relevance parameters, the forgetting accuracy of the excluded class drops by a huge margin. It also shows that as we zero out the low relevance parameters, there is only a minor change in the forgetting accuracy of the excluded class. This validates our approach for identifying the high relevant parameters for the restricted classes.
9.2 SIGNIFICANCE OF THE COMPONENTS OF THE PROPOSED ERWP APPROACH
We perform ablations on the CIFAR-100 dataset using the ResNet-56 model to study the significance of the Lec, Lnec and Lkd components of our proposed ERwP approach. Table 3 indicates that optimizing the restricted class relevant parameters using only Lnec cannot significantly remove the information regarding the restricted classes from the model. Applying Lnec along with Lec significantly reduces the forgetting accuracy FAe and forgetting prototype accuracy FPAe but also significantly reduces the constraint accuracy CAne. Finally, applying the Lkd loss along with Lnec and Lec significantly reduces FAe and FPAe while maintaining the constraint accuracy CAne very close to that of the original model.
9.3 ABLATION ON THE NUMBER OF EXCLUDED CLASSES
We report the experimental results for our approach for different splits of excluded and remaining classes of the CIFAR-100 dataset in Table 5. We observe that our ERwP performs well for all the splits for both the ResNet-20 and ResNet-56 architectures.
9.4 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 5. We observe that as the training progresses the constraint accuracy is maintained close to that of the original model, the forgetting accuracy keeps dropping till it reaches 0% and the forgetting prototype accuracy keeps falling and comes closer to that of the FDR model.
10 CONCLUSION
In this paper, we present a “Restricted Category Removal from Model Representations with Limited Data” problem in which the objective is to remove the information regarding a set of excluded/restricted classes from a trained deep learning model without hurting its predictive power for the remaining classes. We propose several baseline approaches as well as performance metrics for this setting. First, we propose a novel approach to identify the model parameters that are highly relevant to the restricted classes. Next, we propose a novel, efficient approach that optimizes these model parameters in order to remove the restricted class information and reuse these parameters for the remaining classes. We experimentally show that our approach significantly outperforms all the proposed baselines and performs similarly to the full data retraining model.
11 APPENDIX
11.1 PROCESS FOR SELECTING THE RESTRICTED CLASS RELEVANT PARAMETERS
First, we apply a data augmentation technique f , not used during training, to the images of the given restricted class. Next, we combine the predictions for these images and perform backpropagation. Finally, we select the parameters with the highest absolute gradient as the relevant parameters for the corresponding restricted class. Specifically, for a given restricted class, we choose the minimum number of such parameters from each network layer such that pruning these parameters will result in the maximum degradation of model performance on that restricted class. We use a process similar to the binary search for automatically selecting the parameters with the highest absolute gradient. We use an automated script that first creates a list of parameters in each layer, sorts them in descending order according to the gradient values, and checks if zeroing out the weights of the first 5% parameters from this list leads to near zero accuracy for that class. If not, then we select double the number of parameters chosen earlier and repeat this process. If the accuracy is near zero, we repeat the process with half the number of parameters chosen earlier. Please note, this process is just for identifying the parameters relevant to the restricted classes, and their weights are restored after this process. The combined set of the relevant parameters for all the excluded classes is referred to as the restricted/excluded class relevant parameters.
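The doubling/halving search described above can be sketched as follows; accuracy_if_pruned and the near-zero threshold are illustrative assumptions rather than the authors' exact implementation.

```python
import torch

def select_relevant(grad, accuracy_if_pruned, start_frac=0.05, tol=0.01):
    """Find the smallest top-|gradient| set whose pruning drives the
    restricted class accuracy to (near) zero, for one network layer.

    grad: gradients of the layer's parameters from the backward pass on
    the augmented restricted class images.
    accuracy_if_pruned(idx) -> restricted class accuracy after zeroing
    the parameters at the given flat indices (weights restored after).
    """
    order = grad.abs().flatten().argsort(descending=True)
    n = order.numel()
    k = max(1, int(start_frac * n))
    # Double until pruning the top-k parameters yields near-zero accuracy.
    while k < n and accuracy_if_pruned(order[:k]) > tol:
        k = min(n, 2 * k)
    # Halve while a smaller set still suffices, to keep the set minimal.
    while k > 1 and accuracy_if_pruned(order[:k // 2]) <= tol:
        k //= 2
    return order[:k]
```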
11.2 BASELINES
We propose several baseline models for the RCRMR-LD problem and compare our proposed approach with them. The baseline approaches are defined as follows:
Original model: It refers to the original model that is trained on the complete training set containing all the training examples from both the excluded and non-excluded classes. It represents the model that has not been modified by any technique to remove the excluded class information.
Baseline 1 - Weight Deletion (WD): It refers to the original model with a modified fully-connected classification layer. Specifically, we remove the weights corresponding to the excluded classes in the fully-connected classification layer so that it cannot classify the excluded classes.
Baseline 2 - Training from Scratch on Limited Non-Restricted Class data (TSLNRC): In this baseline, we train a new model from scratch using the limited training examples of only the nonexcluded classes. It uses the complete training schedule as the original model and only uses the classification loss for training the model.
Baseline 3 - Training from Scratch on Limited Non-Restricted Class data with KD (TSLNRCKD): This baseline is the same as baseline 2, but in addition to the classification loss, it also uses a knowledge distillation loss to ensure that the non-excluded class logits of the model (student) match that of the original model (teacher).
Baseline 4 - Training of Original model on Limited Non-Restricted Class data (TOLNRC): This baseline is the same as baseline 2, but the model is initialized with the weights of the original model instead of randomly initializing it.
Baseline 5 - Training of Original model on Limited Non-Restricted Class data with KD (TOLNRC-KD): This baseline is the same as baseline 4, but in addition to the classification loss, it also uses a knowledge distillation loss.
Baseline 6 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class (FOLMRCSC): In this baseline approach, we first replace all the excluded class labels in the limited training data with a new single excluded class label and then fine-tune the original model for a few epochs on the limited training data of both the excluded and remaining classes. In the case of the examples from the excluded classes, the model is trained to predict the new single excluded class. In the case of the examples from the remaining classes, the model is trained to predict the corresponding non-excluded classes.
Baseline 7 - Fine-tuning of Original model on Limited data after Mapping Restricted Classes to a Single Class with KD (FOLMRCSC-KD): This baseline is the same as baseline 6, but in
addition to the classification loss, it also uses a knowledge distillation loss to ensure that the nonexcluded class logits of the model (student) match that of the original model (teacher).
Baseline 8 - Fine-tuning of Original model on Limited Non-Restricted Class data (FOLNRC): In this baseline approach, we fine-tune the original model for a few epochs on the limited training data of the non-excluded/remaining classes. The model is trained to predict the corresponding non-excluded classes of the training examples.
Baseline 9 - Fine-tuning of Original model on Limited Non-Restricted Class data with KD (FOLNRC-KD): This baseline is the same as baseline 8, but in addition to the classification loss, it also uses a knowledge distillation loss.
11.3 DATASETS
For the RCRMR-LD problem setting, we modify the CIFAR-100 (Krizhevsky et al., 2009), CUB (Wah et al., 2011) and ImageNet-1k (Russakovsky et al., 2015) datasets. In order to simulate the RCRMR-LD problem setting with limited training data, we choose the last 20 classes of the CIFAR-100 dataset as the excluded classes and take only 10% of the training images of each class. Similarly, we choose the last 20 classes of the CUB dataset as the excluded classes, with only 3 training images per class. For ImageNet-1k, we choose the last 100 classes as the excluded classes, with 5% of the training images, to simulate the limited data available for this problem setting.
11.4 IMPLEMENTATION DETAILS
In this section, we provide all the details required to reproduce our experimental results. We use the ResNet-20 (He et al., 2016), ResNet-56 and ResNet-164 architectures for the experiments on the CIFAR-100 dataset. We use the standard data augmentation methods of random cropping to a size of 32 × 32 (zero-padded on each side with four pixels before taking a random crop) and random horizontal flipping, which is standard practice for training a model on CIFAR-100. In order to obtain the original and FDR models for the CIFAR-100 dataset, we train the network for 300 epochs with a mini-batch size of 128 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 5 after every 50 epochs. For the CIFAR-100 experiments with ERwP using the ResNet-20, ResNet-56, and ResNet-164 architectures, we take learning rate 1e-4, β = 10, and optimize the network for 10 epochs. Since the available limited training data is only 10% of the entire CIFAR-100 dataset, our ERwP approach is approximately 30 × 10 = 300× faster than the FDR method.

For the experiments on the ImageNet dataset, we use the ResNet-18, ResNet-50, and MobileNet-V2 architectures. We use the standard data augmentation methods of random cropping to a size of 224 × 224 and random horizontal flipping, which is standard practice for training a model on ImageNet-1k. In order to obtain the original and FDR models for the ImageNet dataset, we train the network for 100 epochs with a mini-batch size of 256 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-4. We choose the initial learning rate as 0.1, and we decrease it by a factor of 10 after every 30 epochs. For evaluation, the validation images are subjected to center cropping of size 224 × 224. For the ImageNet-1k experiments (5% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 9e-5 using β = 200. For the ERwP experiments using the ResNet-18 architecture, we optimize the network for 10 epochs using β = 200 with an initial learning rate of 1.1e-4 and a learning rate of 1.1e-5 from the third epoch onward. In the case of the ERwP experiments with the MobileNet-V2 architecture, we optimize the network for 10 epochs using β = 400 with an initial learning rate of 1.5e-4 and a learning rate of 1.5e-5 from the third epoch onward. Since the available limited training data is only 5% of the entire ImageNet-1k dataset, our ERwP approach is approximately 20 × 10 = 200× faster than the FDR method.

For the experiments on the CUB-200 dataset, we use the ResNet-50 architecture pre-trained on the ImageNet dataset. In order to obtain the original and FDR models for the CUB-200 dataset, we train the network for 50 epochs with a mini-batch size of 64 using the stochastic gradient descent optimizer with momentum 0.9 and weight decay 1e-3. We choose the initial learning rate as 1e-2, and we decrease it by a factor of 10 after epochs 30 and 40. For the CUB-200 experiments (3 images per class, i.e., 10% training data) with ERwP using the ResNet-50 architecture, we optimize the network for 10 epochs with a learning rate of 1e-4 using β = 10. Since the available limited training data is only 10% of the entire CUB-200 dataset, our ERwP approach is approximately 5 × 10 = 50× faster than the FDR method.
In our proposed approach, we use κ = 2 for all the experiments. We use a popular PyTorch implementation1 for performing knowledge distillation. We run all the experiments 3 times (using different random seeds) and report the average accuracy. We perform all the experiments using the PyTorch framework version 1.6.0 (Paszke et al., 2017) and Python 3.0. We use 4 GeForce GTX 1080 Ti graphics processing units for our experiments.
11.5 RCRMR-LD PROBLEM IN CUB-200 CLASSIFICATION
Table 6 reports the experimental results for different approaches to the RCRMR-LD problem over the CUB dataset using the ResNet-50 architecture. Our proposed ERwP approach achieves a constraint accuracy CAne that is very close to that of the original model even though we use only 3 images per class for optimizing the model. It achieves close to 0% forgetting accuracy FAe and achieves a FPAe that is significantly lower than that of the original model by an absolute margin of 35.80%. Similar to the CIFAR-100 experiments, our ERwP approach outperforms all the baseline approaches. Further, our ERwP requires only 10 epochs to remove the excluded class information from the model. Since the available limited training data is only 10% of the entire CUB dataset, therefore, our ERwP approach is approximately 5 ∗ 10 = 50× faster than the FDR method that is trained on the full training data for 50 epochs.
11.6 ABLATION EXPERIMENTS FOR β AND κ
We perform ablation experiments to identify the most suitable values for the hyper-parameters β and κ of our proposed ERwP. The ablation results in Tables 7 and 8 validate our choice of hyper-parameter values, considering the forgetting accuracy and the constraint accuracy of the resulting model.
11.7 EFFECT OF DIFFERENT DATA AUGMENTATIONS ON THE IDENTIFICATION OF CLASS RELEVANT MODEL PARAMETERS
We perform experiments to verify our approach for identifying the highly relevant parameters of any restricted class using various augmentation techniques (grayscale, vertical flip, rotation, random affine augmentations). We chose the same restricted class of CIFAR-100 and used the ResNet-56 network for all the experiments. The results in Fig. 6 indicate that for all the compared data augmentation approaches, pruning/zeroing out the high relevance parameters obtained using our approach results in a huge drop in the forgetting accuracy of the excluded class. Further, zeroing out the low relevance parameters has only a minor impact on the forgetting accuracy of the excluded class.

1 https://github.com/peterliht/knowledge-distillation-pytorch/blob/master/model/net.py

Table 7: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes, using our proposed ERwP with different values of β.

Table 8: Experimental results on the CIFAR-100 dataset with the ResNet-20 architecture for the RCRMR-LD problem with 20 excluded classes, using our proposed ERwP with different values of κ.
11.8 ABLATION EXPERIMENTS ON THE RESTRICTED CLASS RELEVANT PARAMETERS
We perform ablation experiments with ERwP to check whether using only 25% or 50% of the restricted class relevant parameters of each layer, identified using our proposed procedure, suffices for ERwP. We run each of these experiments for the same number of epochs on the CIFAR-100 dataset with the ResNet-56 network. However, we observe that the final FPAe falls from 68.65% to only 60.35% and 53.7%, respectively, for 25% and 50% of the restricted class relevant parameters of each layer, compared to 47.84% when using all the restricted class relevant parameters per layer identified by our approach. The good performance of our approach is more evident in light of the performance of the FDR model, which achieves an FPAe of 45.40%. We provide this result as a reference to demonstrate that the 47.84% FPAe is due to the generalization power of the model and not due to restricted class information remaining in the model. This shows that our approach effectively identifies the class-relevant parameters of the model for a given class.
11.9 PERFORMANCE OF ERWP OVER TRAINING EPOCHS
We analyze the change in the performance of the model after every epoch of our proposed ERwP approach in Fig. 7 for the CIFAR-100 dataset with 20 excluded classes using the ResNet-20 and ResNet-56 architectures. For both architectures, we observe that as the training progresses, ERwP maintains the constraint accuracy close to that of the original model and forces the forgetting accuracy to drop to 0%. ERwP also forces the forgetting prototype accuracy to keep dropping and makes it similar to the FDR model.
11.10 FINETUNING RESTRICTED CLASS RELEVANT PARAMETERS ON REMAINING CLASSES
In our experimental results, we demonstrated how fine-tuning the model on the limited training data of the non-excluded classes cannot sufficiently remove the excluded class information from the model. We also perform an ablation experiment to demonstrate that fine-tuning only the restricted class relevant parameters using the limited training data of the non-excluded classes is also not effective in sufficiently removing the excluded class information from the model. We perform these experiments on the CIFAR-100 dataset with the ResNet-56 architecture. We observe that the constraint accuracy CAne, forgetting accuracy FAe and forgetting prototype accuracy FPAe remain almost the same even in this case. Therefore, fine-tuning only on the remaining class data cannot sufficiently remove the excluded class information from the model representations.
11.11 EFFECT OF USING THE PROPOSED ERWP APPROACH WHEN THE ENTIRE DATASET IS AVAILABLE
We perform ablation experiments to demonstrate the performance of our proposed ERwP approach when the entire training data is available. We perform these experiments on the CIFAR-100 dataset using ResNet-20 and ResNet-56. We observe experimentally that for both the ResNet-20 and ResNet-56 experiments using ERwP, the forgetting accuracy FAe is 0% and the constraint accuracy CAne matches that of the original model. Further, the gap between the forgetting prototype accuracy FPAe of ERwP and the FDR model reduces from 3.86% (for limited data) to 2.79% for ResNet-20. Similarly, the gap reduces from 2.44% (for limited data) to 1.65% for ResNet-56. However, ERwP requires only 2-3 epochs of optimization (∼100-150× faster than the FDR model) to achieve this performance when trained on the entire dataset. This makes it significantly faster than any approach that trains on the entire dataset.
11.12 QUALITATIVE ANALYSIS
In order to analyze the effect of removing the excluded class information from the model using our proposed ERwP approach, we study the class activation map visualizations (Selvaraju et al., 2017) of the model before and after applying ERwP. We observe in Fig. 8 that for the images from the excluded classes, the model’s region of attention gets scattered after applying ERwP, unlike the images from the remaining classes. | 1. What is the focus and contribution of the paper regarding the removal of restricted categories from model representations?
2. What are the strengths of the proposed approach, particularly in terms of its practicality and efficiency?
3. What are the weaknesses of the paper, especially regarding the transformation used and the heuristic nature of identifying relevant parameters?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the paper's experiments, comparisons with other works, and references? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a novel and practical problem called RCRMR-LD, aiming to remove restricted categories from model representations with limited data. They first give some direct solutions and analyze their weaknesses. Then, they propose their own solution to discard the restricted class information from the restricted class relevant parameters. Experiments verify that this approach not only performs similarly to FDR but is also faster than it.
Review
Pros:
The problem RCRMR-LD seems interesting and practical, as it addresses the specific class-level restriction by removing the corresponding model representations. This setting also saves time and computational resources for large-scale datasets.
Experiments in this paper are solid and convincing enough. They design 5 basic baselines and perform the corresponding ablation study. Considering RCRMR-LD is a new problem, comparing with related works, if any exist, would make the paper stronger.
################################################
Cons:
From my point of view, the transformation f plays a key role in identifying the parameters that are highly relevant to the restricted classes. However, they seem to only try the grayscale transformation and do not give more discussion about f. If the model is just trained on grayscale images, will this method fail? For natural language tasks, what transformation are you going to use? I suggest that the authors make more discussion and comparison of the various transformations.
From Table 1, I find all the FPAe of ERwP are relatively high, indicating that the feature representations of the model still contain much restricted category information. Although they indeed remove the restricted categories at the class level, attackers can still use some model inversion techniques (e.g., [1]) to restore the restricted class data with a few owned samples, leading to privacy leakage.
Identifying the parameters that are relevant to the restricted classes through ERwP is still heuristic. I admit that ERwP seems to make sense, but some verification of this claim needs to be included.
Except for "Related work", I do not find any references in this paper. At least in the "Introduction", you should cite some related works to support your claims.
Colloquial expressions and grammar issues are common; thus, the writing needs further improvement.
#########################################################
Typo:
"Baseline 4 - Training of Original model on Limited Non-Restricted Class data with (TOLNRC):" -- missing words?
"
N
e
and
N
r
refer to the number of excluded classes, respectively." -- missing words?
and so on...
############################################################
Questions during rebuttal period:
Please address and clarify the cons above.
##############################################################
References:
[1] Zhang Y, Jia R, Pei H, et al. The secret revealer: Generative model-inversion attacks against deep neural networks. CVPR, 2020. |
ICLR | Title
Self-Guided Diffusion Models
Abstract
Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, guidance requires a large amount of image-annotation pairs for training and is thus dependent on their availability, correctness and unbiasedness. In this paper, we eliminate the need for such annotation by instead leveraging the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments on single-label and multi-label image datasets demonstrate that self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels, especially on unbalanced data. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. Self-guided diffusion is simple, flexible and expected to profit from deployment at scale.
1 INTRODUCTION
The image fidelity of diffusion models is spectacularly enhanced by conditioning on class labels (Dhariwal & Nichol, 2021). Classifier guidance goes a step further and offers control over the alignment with the class label, by using the classifier gradient to guide the image generation (Dhariwal & Nichol, 2021). Classifier-free guidance (Ho & Salimans, 2021) replaces the dedicated classifier with a diffusion model that is trained by randomly setting the condition to the special non-label class. This has proven a fruitful research line for several other condition modalities, such as text (Saharia et al., 2022; Ramesh et al., 2021), image layout (Rombach et al., 2022), visual neighbors (Ashual et al., 2022), and image features (Giannone et al., 2022). However, all these conditioning and guidance methods require ground-truth annotations. This is an unrealistic and overly costly assumption in many domains. For example, medical images require domain experts to annotate very high-resolution data, which is infeasible to do exhaustively (Panteli et al., 2021). In this paper, we propose to remove the necessity of any ground-truth annotation for guided diffusion models.
We are inspired by progress in self-supervised learning (Chen et al., 2020; Caron et al., 2021), which encodes data, and especially images, into semantically meaningful latent vectors without using any label information. It usually does so by solving a pretext task (Zhang et al., 2017; Gidaris et al., 2018; Asano et al., 2020; He et al., 2020) at the image level to remove the necessity of labels. This annotation-free paradigm enables representation learning to upscale to larger and more diverse (image) datasets (Gao et al., 2021). Recently, holistic image-level self-supervision has been extended to more expressive dense representations, including bounding boxes, e.g., (Siméoni et al., 2021; Melas-Kyriazi et al., 2022) and pixel-precise segmentation masks, e.g., (Hamilton et al., 2022; Ziegler & Asano, 2022). Some self-supervised learning methods even outperform supervised alternatives (He et al., 2020; Caron et al., 2021). We hypothesize that for diffusion models, self-supervision may also provide a flexible and competitive, possibly even stronger guidance signal than ground-truth labeled guidance.
In this paper, we propose self-guided diffusion, a framework for image generation using guided diffusion without the need for any annotated image-label pairs. The framework encompasses a feature extraction function and a self-annotation function that are compatible with recent self-supervised learning advances. Furthermore, we leverage the flexibility of self-supervised learning
to generalize the guidance signal from the holistic image level to (unsupervised) local bounding boxes and segmentation masks for more fine-grained guidance. We demonstrate the potential of our proposal on single-label and multi-label image datasets, where self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation.
2 APPROACH
Before detailing our self-guided diffusion framework, we provide a brief background on diffusion models and the classifier-free guidance technique.
2.1 BACKGROUND
Diffusion models. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) gradually add noise to an image x0 until the original signal is fully diminished. By learning to reverse this process one can turn random noise xT into images. This diffusion process is modeled as a Gaussian process with Markovian structure:
$$q(x_t|x_{t-1}) := \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I), \qquad q(x_t|x_0) := \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1-\bar{\alpha}_t) I), \quad (1)$$

where $\beta_1, \ldots, \beta_T$ is a fixed variance schedule, on which we define $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^t \alpha_s$. All latent variables have the same dimensionality as the image $x_0$ and differ by the proportion of the retained signal and added noise.

Learning the reverse process reduces to learning a denoiser $x_t \sim q(x_t|x_0)$ that recovers the original image as $(x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(x_t, t))/\sqrt{\bar{\alpha}_t} \approx x_0$. Ho et al. (2020) optimize the network parameters $\theta$ by minimizing the error of the noise prediction:

$$L(\theta) = \mathbb{E}_{\epsilon, x, t}\left[ \|\epsilon_\theta(x_t, t) - \epsilon\|_2^2 \right], \quad (2)$$

in which $\epsilon \sim \mathcal{N}(0, I)$, $x \in D$ is a sample from the training dataset $D$, and the noise prediction function $\epsilon_\theta(\cdot)$ is encouraged to be as close as possible to $\epsilon$.
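For concreteness, a minimal PyTorch sketch of the training objective in Eq. (2) follows; names such as eps_model and alpha_bar are our own illustrative choices.

```python
import torch

def ddpm_loss(eps_model, x0, alpha_bar):
    """Noise-prediction objective of Eq. (2).

    alpha_bar: [T] tensor of cumulative products ᾱ_1, ..., ᾱ_T.
    """
    B, T = x0.size(0), alpha_bar.numel()
    t = torch.randint(0, T, (B,), device=x0.device)
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps   # sample q(x_t | x_0)
    return ((eps_model(x_t, t) - eps) ** 2).mean()
```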
The standard sampling (Ho et al., 2020) requires many neural function evaluations to get good quality samples. Instead, the faster Denoising Diffusion Implicit Models (DDIM) sampler (Song et al., 2021a) has a non-Markovian sampling process:
$$x_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \left( \frac{x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}} \right) + \sqrt{1 - \bar{\alpha}_{t-1} - \sigma_t^2} \cdot \epsilon_\theta(x_t, t) + \sigma_t \epsilon, \quad (3)$$

where $\epsilon \sim \mathcal{N}(0, I)$ is standard Gaussian noise independent of $x_t$.
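A minimal sketch of one DDIM update (Eq. 3) follows, assuming scalar timestep indexing and leaving batch broadcasting details aside; the names are illustrative, not from the paper's released code.

```python
import torch

def ddim_step(eps_model, x_t, t, t_prev, alpha_bar, sigma_t=0.0):
    """One DDIM update (Eq. 3); sigma_t = 0 gives deterministic sampling."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    # Predicted clean image, then the direction pointing back to x_{t-1}.
    x0_pred = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    x_prev = (a_prev.sqrt() * x0_pred
              + (1.0 - a_prev - sigma_t ** 2) ** 0.5 * eps)
    if sigma_t > 0:
        x_prev = x_prev + sigma_t * torch.randn_like(x_t)
    return x_prev
```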
Classifier-free guidance. To trade off mode coverage and sample fidelity in a conditional diffusion model, Dhariwal & Nichol (2021) propose to guide the image generation process using the gradients of a classifier, with the additional cost of having to train the classifier on noisy images. Motivated by this drawback, Ho & Salimans (2021) introduce label-conditioned guidance that does not require a classifier. They obtain a combination of a conditional and unconditional network in a single model, by randomly dropping the guidance signal c during training. After training, it empowers the model with progressive control over the degree of alignment between the guidance signal and the sample by varying the guidance strength w:
$$\tilde{\epsilon}_\theta(x_t, t; c, w) = (1-w)\, \epsilon_\theta(x_t, t) + w\, \epsilon_\theta(x_t, t; c). \quad (4)$$
A larger w leads to greater alignment with the guidance signal, and vice versa. Classifier-free guidance (Ho & Salimans, 2021) provides progressive control over the specific guidance direction at the expense of labor-consuming data annotation. In this paper, we propose to remove the necessity of data annotation using a self-guided principle based on self-supervised learning.
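In code, the guidance combination of Eq. (4) reduces to a weighted sum of two forward passes; this sketch assumes the conditional model accepts the guidance signal as an extra argument.

```python
def guided_eps(eps_model, x_t, t, k, w):
    """Weighted combination of Eq. (4); also applies to Eqs. (5)/(6).

    k: guidance signal (ground-truth label or self-annotation); w: strength.
    """
    eps_uncond = eps_model(x_t, t)        # condition dropped
    eps_cond = eps_model(x_t, t, k)       # conditioned prediction
    return (1.0 - w) * eps_uncond + w * eps_cond
```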
2.2 SELF-GUIDED DIFFUSION
The equations describing diffusion by classifier-free guidance implicitly assume dataset D and its images each come with a single manually annotated class label. We prefer to make the label requirement explicit. We denote the human annotation process as the function ξ(x;D, C) : D → C, where C defines the annotation taxonomy, and plug this into Equation (4):
$$\tilde{\epsilon}_\theta(x_t, t; \xi(x; D, C), w) = (1-w)\, \epsilon_\theta(x_t, t) + w\, \epsilon_\theta(x_t, t; \xi(x; D, C)). \quad (5)$$
We propose to replace the supervised labeling process ξ with a self-supervised process that requires no human annotation:
$$\tilde{\epsilon}_\theta(x_t, t; f_\psi(g_\phi(x; D); D), w) = (1-w)\, \epsilon_\theta(x_t, t) + w\, \epsilon_\theta(x_t, t; f_\psi(g_\phi(x; D); D)), \quad (6)$$
where $g$ is a self-supervised feature extraction function parameterized by $\phi$ that maps the input data to feature space $\mathcal{H}$, $g_\phi: x \mapsto g_\phi(x), \forall x \in D$, and $f$ is a self-annotation function parameterized by $\psi$ that maps the raw feature representation to the ultimate guidance signal $k$, $f_\psi: g_\phi(\cdot; D) \mapsto k$. The guidance signal $k$ can be any form of annotation, e.g., a label, box, or pixel-level mask, that can be paired with an image, and is derived as $k = f_\psi(g_\phi(x; D); D)$. The choice of the self-annotation function $f$ can be non-parametric, by heuristically searching over dataset $D$ based on the extracted feature $g_\phi(\cdot; D)$, or parametric, by fine-tuning on the feature map $g_\phi(\cdot; D)$.
For the noise prediction function $\epsilon_\theta(\cdot)$, we adopt the traditional UNet network architecture (Ronneberger et al., 2015) due to its superior image generation performance, following (Ho et al., 2020; Song et al., 2021b; Ramesh et al., 2022; Saharia et al., 2022).
Stemming from this general framework, we present three methods working at different spatial granularities, all without relying on any ground-truth labels. Specifically, we cover image-level, box-level and pixel-level guidance by setting the feature extraction function $g_\phi(\cdot)$, self-annotation function $f_\psi(\cdot)$, and guidance signal $k$ to appropriate forms.
Self-labeled guidance. To achieve self-labeled guidance, we need a self-annotation function $f$ that produces a representative guidance signal $k \in \mathbb{R}^K$. Firstly, we need an embedding function $g_\phi(x), x \in D$, which provides semantically meaningful image-level guidance for the model. We obtain $g_\phi(\cdot)$ in a self-supervised manner by mapping from image space, $g_\phi(\cdot): \mathbb{R}^{W \times H \times 3} \to \mathbb{R}^C$, where $W$ and $H$ are the image width and height and $C$ is the feature dimension. We may use any type of feature for the feature embedding function $g$, which we will vary and validate in the experiments. As the image-level feature $g_\phi(\cdot; D)$ is not compact enough for guidance, we further apply a non-parametric clustering algorithm, e.g., k-means, as our self-annotation function $f$. For all features $g_\phi(\cdot)$, we obtain the self-labeled guidance via the self-annotation function $f_\psi(\cdot): \mathbb{R}^C \to \mathbb{R}^K$. Motivated by Rolfe (2017), we use a one-hot embedding $k \in \mathbb{R}^K$ for each image to achieve a compact guidance.
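As a sketch of this pipeline, the following computes self-labels by clustering image-level features with scikit-learn's k-means; in practice a more scalable k-means implementation would likely be needed for millions of images, and the loader format is an assumption on our part.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

@torch.no_grad()
def self_labels(feature_fn, loader, num_clusters=5000):
    """Self-labeled guidance: k-means over image-level features g_phi(x)."""
    # Assumes the loader yields (image, label) pairs; labels are unused.
    feats = torch.cat([feature_fn(x) for x, _ in loader])       # [N, C]
    km = KMeans(n_clusters=num_clusters, n_init=10).fit(feats.cpu().numpy())
    labels = torch.as_tensor(km.labels_, dtype=torch.long)
    return F.one_hot(labels, num_clusters).float()              # one-hot k
```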
We inject the guidance information into the noise prediction function $\epsilon_\theta$ by concatenating it with the timestep embedding $t$ and feeding the concatenated information $\text{concat}[t, k]$ into every block of the UNet. Thus, the noise prediction function is rewritten as:

$$\epsilon_\theta(x_t, t; k) = \epsilon_\theta(x_t, \text{concat}[t, k]), \quad (7)$$
where $k = f_\psi(g_\phi(x; D); D)$ is the self-annotated image-level guidance signal. For simplicity, we omit the self-annotation function $f_\psi(\cdot)$ here and in the later text. Self-labeled guidance focuses on image-level global guidance. Next, we consider a more fine-grained spatial guidance.
Self-boxed guidance. Bounding boxes specify the location of an object in an image (Ren et al., 2015; Carion et al., 2020) and complement the content information of class labels. Our self-boxed guidance approach aims to attain this signal via self-supervised models. We represent the bounding box as a feature map ($W \times H$) rather than coordinates ($[X, Y, W, H]$). We propose a self-annotation function $f$ that obtains a bounding box $k_s \in \mathbb{R}^{W \times H}$ by mapping from feature space $\mathcal{H}$ to the bounding box space via $f_\psi(\cdot; D): \mathbb{R}^{W \times H \times C} \to \mathbb{R}^{W \times H}$, and inject the guidance signal by concatenating it in the channel dimension: $x_t := \text{concat}[x_t, k_s]$. Usually, in self-supervised learning, the derived bounding box is class-agnostic (Vo et al., 2020; 2021). To inject a self-supervised pseudo label that further enhances the guidance signal, we again resort to clustering to obtain $k$ and concatenate it with the time embedding, $t := \text{concat}[t, k]$. To incorporate such guidance, we reformulate the noise prediction function $\epsilon_\theta$ as:
$$\epsilon_\theta(x_t, t; k_s, k) = \epsilon_\theta(\text{concat}[x_t, k_s], \text{concat}[t, k]), \quad (8)$$
in which $k_s$ is the self-supervised box guidance obtained by the self-annotation function $f_\psi$, and $k$ is the self-supervised image-level guidance from clustering. The design of $f_\psi$ is flexible as long as it obtains self-supervised bounding boxes via $f_\psi(\cdot; D): \mathbb{R}^{W \times H \times C} \to \mathbb{R}^{W \times H}$. Self-boxed guidance guides the diffusion model by boxes, which specify the area in which the object will be generated. Sometimes we may need an even finer granularity, e.g., pixels, which we detail next.
Self-segmented guidance. Compared to a bounding box, a segmentation mask is a more fine-grained signal. Additionally, a multichannel mask is more expressive than a binary foreground-background mask. Therefore, we propose a self-annotation function $f$ that acts as a plug-in built on the feature $g_\phi(\cdot; D)$ to extract the segmentation mask $k_s$ via the mapping $f_\psi(\cdot; D): \mathbb{R}^{W \times H \times C} \to \mathbb{R}^{W \times H \times K}$, where $K$ is the number of segmentation clusters.

To inject the self-segmented guidance into the noise prediction function $\epsilon_\theta$, we consider two pathways. We first concatenate the segmentation mask to $x_t$ in the channel dimension, $x_t := \text{concat}[x_t, k_s]$, to retain the spatial inductive bias of the guidance signal. Secondly, we also incorporate the image-level guidance to further amplify the guidance signal along the channel dimension. As the segmentation mask from the self-annotation function $f_\psi$ already contains image-level information, we do not apply the image-level clustering as before in our self-labeled guidance. Instead, we directly derive the image-level guidance from the self-annotation result $f_\psi(\cdot)$ via spatial maximum pooling, $\mathbb{R}^{W \times H \times K} \to \mathbb{R}^K$, and feed the image-level guidance $\hat{k}$ into the noise prediction function by concatenating it with the timestep embedding, $t := \text{concat}[t, \hat{k}]$. The concatenated result is sent to every block of the UNet. In the end, the overall noise prediction function for self-segmented guidance is formulated as:
$$\epsilon_\theta(x_t, t; k_s, \hat{k}) = \epsilon_\theta(\text{concat}[x_t, k_s], \text{concat}[t, \hat{k}]), \quad (9)$$
in which $k_s$ is the spatial mask guidance obtained from the self-annotation function $f$, and $\hat{k}$ is a multi-hot image-level guidance derived from the self-supervised mask $k_s$.
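The two injection pathways can be sketched as follows; tensor shapes and names are our own illustrative choices, under the assumption of channels-first PyTorch layout.

```python
import torch

def segmented_condition(x_t, k_s, t_emb):
    """Build the inputs of Eq. (9) from a self-supervised mask.

    k_s: [B, K, W, H] one-hot segmentation mask from f_psi.
    t_emb: [B, E] timestep embedding.
    """
    x_in = torch.cat([x_t, k_s], dim=1)      # spatial guidance, channel concat
    k_hat = k_s.amax(dim=(2, 3))             # spatial max-pool -> [B, K] multi-hot
    t_in = torch.cat([t_emb, k_hat], dim=1)  # amplify along the channel dimension
    return x_in, t_in
```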
We have described three variants of self-guidance by setting the feature extraction function $g_\phi(\cdot)$, the self-annotation function $f_\psi(\cdot)$, and the guidance signal $k$ to appropriate forms. In the end, we arrive at three noise prediction functions $\epsilon_\theta$, which we utilize for diffusion model training and sampling, following the standard guided diffusion approach (Ho & Salimans, 2021) as detailed in Section 2.1.
3 EXPERIMENTS
In this section, we aim to answer the overarching question: Can we substitute ground-truth annotations with self-annotations? First, we consider the image-label setting, in which we examine what kind of self-labeling is required to improve image fidelity. Next, we look at image-bounding box pairs. Finally, we examine whether it is possible to gain fine-grained control with self-labeled image-segmentation pairs. We first present the general settings relevant for all experiments.
Evaluation metric. We evaluate both diversity and fidelity of the generated images by the Fréchet Inception Distance (FID) (Heusel et al., 2017), as it is the de facto metric for the evaluation of generative methods, e.g., (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019; Saharia et al., 2022). It provides a symmetric measure of the distance between two distributions in the feature space of Inception-V3 (Szegedy et al., 2016). We use FID as our main metric for the sampling quality.
Baselines & implementation details. Throughout our experiments, we always use the diffusion model following (Ho et al., 2020). We train with timestep T=1000, guidance drop probability p=0.1 and the linear variance schedule. For sampling, we set the guidance strength w=2 and deploy a clipping operation in every sampling timestep. As the baseline for a guided diffusion model with ground-truth labels we follow classifier-free guidance (Ho & Salimans, 2021). We use DDIM (Song et al., 2021a) samplers with 250 steps, σt=0. All hyperparameter of our self-guided diffusion and the
baselines are the same, allowing us to compare methods under a fixed compute budget. For details of the learning rate, optimizer, and hyperparameters, we refer to Appendix C. All code will be released.
3.1 SELF-LABELED GUIDANCE
We use ImageNet32/64 (Deng et al., 2009) to validate the efficacy of self-labeled guidance. For better evaluation of sampling quality, we also adopt the Inception Score (IS) (Salimans et al., 2016), following the common practice on this dataset (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019). IS measures how well a model fits into the full ImageNet class distribution.
Choice of feature extraction function g. We first measure the influence of the feature extraction function g used before clustering. We consider two supervised feature backbones: ResNet50 (He et al., 2016) and ViT-B/16 (Dosovitskiy et al., 2021), and four self-supervised backbones: SimCLR (Chen et al., 2020), MAE (He et al., 2022), MSN (Assran et al., 2022) and DINO (Caron et al., 2021). To assure a fair comparison we use 10k clusters for all architectures. From the results in Table 1, we make the following observations. First, features from the supervised ResNet50 and ViT-B/16 lead to satisfactory FID performance, at the expense of relatively limited diversity (low IS). However, they still require label annotation, which we strive to avoid in our work. Second, among the self-supervised feature extraction functions, the MSN- and DINO-pretrained ViT backbones have the best trade-off in terms of both FID and IS. They even improve over the label-supervised backbones. This implies the label assignment for guidance is not unique: pseudo-labels obtained on top of self-supervised features can provide a guidance signal as influential as human-annotated labels. Also, the diversity of the label-supervised ViT-B/16 is much lower than the self-supervised ViT-B/16, with an IS of 7.81 vs. 10.41, suggesting that self-supervised guidance leads to a less biased representation than supervised guidance. From now on we pick the DINO ViT-B/16 architecture as our self-supervised feature extraction function g.
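A sketch of this feature-extraction-plus-clustering pipeline, assuming the publicly released DINO checkpoint via torch.hub and scikit-learn for k-means; preprocessing details are omitted, and at ImageNet scale a mini-batch variant of k-means is advisable:

```python
import torch
from sklearn.cluster import KMeans

# Feature extraction function g: CLS features of the DINO-pretrained ViT-B/16.
model = torch.hub.load('facebookresearch/dino:main', 'dino_vitb16')
model.eval()

@torch.no_grad()
def extract_features(images):
    """images: (B, 3, 224, 224), ImageNet-normalized. Returns (B, 768)."""
    return model(images)

def fit_clusters(all_feats, n_clusters=5000):
    """Self-annotation function f: k-means over the dataset features,
    followed by one-hot encoding of the cluster index as guidance k."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(all_feats.numpy())
    labels = torch.from_numpy(km.labels_).long()
    return torch.nn.functional.one_hot(labels, n_clusters).float()
```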
Effect of number of clusters. Next, we ablate the influence of the number of clusters on the overall sampling quality. We consider 1, 10, 100, 500, 1,000, 5,000 and 10,000 clusters on the extracted CLS token from the DINO ViT-B/16 feature. For an efficient, yet uniform, comparison we only run 20 epochs on ImageNet32. To put our sampling results in perspective, we also provide FID and IS results for DDPM and classifier-free guidance. From the results in Figure 1, we first observe that as the cluster number ranges from 1 to 5,000, our model's performance monotonically improves and always surpasses the DDPM model. Beyond 1,000 clusters, we are competitive with classifier-free guidance using ground-truth labels. At 5,000 clusters, there is a sweet spot where we outperform classifier-free guidance with an FID of 16.4 vs. 17.9 and an IS of 9.94 vs. 10.35; see also the generated images in Figure 1. FID starts to deteriorate from 5,000 to 10,000 clusters. We conclude that self-labeled guidance outperforms DDPM without any guidance beyond a single cluster, is competitive with classifier-free guidance beyond 1,000 clusters, and is even able to outperform guidance by ground-truth labels at 5,000 clusters. From now on we use 5,000 clusters for self-labeled guidance on ImageNet.
Varying guidance strength w. Next we consider the influence of the guidance strength w on our sampling results. As the validation set of ImageNet32 is strictly balanced, we also consider an unbalanced setting that is more similar to real-world deployment. Under both settings we compare the FID between our self-labeled guidance and ground-truth guidance. We train both models for 100 epochs. For the standard ImageNet32 validation setting in Figure 2a, our method achieves a 17.8% improvement at the respective optimal guidance strengths of the two methods. Self-labeled guidance is especially effective for lower values of w. We observe similar trends for the unbalanced setting in Figure 2b, albeit with slightly higher overall FID for both methods. The improvement increases to 18.7%. We conjecture this is due to the unbalanced nature of the k-means algorithm (Last et al., 2017): clustering based on the statistics of the overall dataset can lead to more robust performance in the unbalanced setting.
Self-labeled comparisons on ImageNet32/64. We compare our self-labeled guidance with ground-truth label guidance, which utilizes the technique of classifier-free guidance (Ho & Salimans, 2021). We train all experiments for 100 epochs, which takes about 6 days to converge on four RTX A5000 GPUs. All hyperparameters are the same between the two methods to make the comparison as fair as possible. Results on ImageNet32 and ImageNet64 are in Table 2. Similar to Dhariwal & Nichol (2021), we observe that any guidance setting improves considerably over the unconditional, no-guidance model. Surprisingly, our self-labeled model even outperforms the ground-truth labels by a large margin, improving FID by 2.9 and 4.7 points respectively. We hypothesize that the ground-truth taxonomy might be suboptimal for learning generative models and that the self-supervised clusters offer a better guidance signal due to better alignment with the visual similarity of the images. This suggests that the label-conditioned guidance from Ho & Salimans (2021) can be completely replaced by guidance from self-supervision, which would enable guided diffusion models to learn from even larger (unlabeled) datasets than feasible today.
3.2 SELF-BOXED GUIDANCE
We report on Pascal VOC and COCO_20K to validate the efficacy of self-boxed guidance. To obtain class-agnostic object bounding boxes, we use LOST (Siméoni et al., 2021) as our self-annotation function f. We report train FID for Pascal VOC and train/validation FID for COCO_20K. For the image-level clustering that provides the guidance signal, we empirically found k=100 works best on both datasets, as they are relatively small in both images and labels compared to ImageNet. We train our diffusion model for 800 epochs with input image size 64×64. See Appendix C for more details.
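For illustration, a sketch of how a single class-agnostic box is rasterized into the W×H spatial guidance ks that is concatenated with xt; the coordinate convention is an assumption:

```python
import torch

def box_to_mask(box, W=64, H=64):
    """Rasterize one class-agnostic LOST box (x1, y1, x2, y2), in pixel
    coordinates of the W x H model input, into a binary map k_s. This map
    is the spatial guidance concatenated with x_t along channels."""
    x1, y1, x2, y2 = (int(v) for v in box)
    k_s = torch.zeros(H, W)
    k_s[y1:y2, x1:x2] = 1.0
    return k_s
```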
Self-boxed comparisons on Pascal VOC and COCO_20K. For the ground-truth label guidance baseline, we condition on a class embedding. Since there are now multiple objects per image, we represent the ground-truth classes with a multi-hot embedding. Aside from the class embedding, which is multi-hot in our method, all other settings remain the same for a fair comparison. The results in Table 3 confirm that the multi-hot class embedding is indeed effective for multi-label datasets, improving over the no-guidance model by a large margin. This improvement comes at the cost of manually annotating multiple classes per image. Self-boxed guidance further improves upon this result, reducing the FID by an additional 5.1 and 3.3 points respectively without using any ground-truth annotation. In Figure 3, we show that our method generates diverse and semantically well-aligned images.
3.3 SELF-SEGMENTED GUIDANCE
Finally, we validate the efficacy of self-segmented guidance on Pascal VOC and COCO-Stuff. For COCO-Stuff we follow the split from (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang
et al., 2022), with a train set of 49,629 images and a validation set of 2,175 images. Classes are merged into 27 (15 stuff and 12 things) categories. For self-segmented guidance we apply STEGO (Hamilton et al., 2022) as our self-annotation function f . We set the cluster number to 27 for COCO-Stuff, and 21 for Pascal VOC, following STEGO. We train all models on images of size 64×64, for 800 epochs on Pascal VOC, and for 400 epochs on COCO-Stuff. We report the train FID for Pascal VOC and both train and validation FID for COCO-Stuff. More details on the dataset and experimental setup are provided in Appendix C.
Self-segmented comparisons on Pascal VOC and COCO-Stuff. We compare against both the ground-truth label guidance baseline from the previous section and a model trained with ground-truth semantic mask guidance. The results in Table 4 demonstrate that our self-segmented guidance still outperforms the ground-truth label guidance baseline on both datasets. The comparison between ground-truth labels and segmentation masks reveals an improvement in image quality when using the more fine-grained segmentation mask as the conditioning signal. However, segmentation masks are among the most costly types of image annotation, as every pixel must be labeled. Our self-segmented approach avoids the need for annotations while narrowing the performance gap, and, more importantly, offers fine-grained control over the image layout. We demonstrate this controllability with examples in Figure 4. These examples further highlight a robustness to noise in the segmentation masks, which our method acquires naturally by training with noisy segmentations.
4 RELATED WORK
Conditional generative models. Earlier works on generative adversarial networks (GANs) already observed improvements in image quality by conditioning on ground-truth labels (Mirza & Osindero, 2014; Brock et al., 2019; Casanova et al., 2021). Recently, conditional diffusion models have reported similar improvements, while also offering a great amount of controllability via classifier-free guidance by training on images paired with textual descriptions (Ramesh et al., 2021; 2022; Saharia et al., 2022), semantic segmentations (Wang et al., 2022), or other modalities (Bordes et al., 2022; Yang et al., 2022; Song et al., 2022). Our work also aims to realize the benefits of conditioning and guidance, but instead of relying on additional human-generated supervision signals, we leverage the strength of pretrained self-supervised visual encoders.
Zhou et al. (2022) train a GAN for text-to-image generation without any image-text pairs, by leveraging the CLIP model (Radford et al., 2021) that was pretrained on a large collection of paired data. In this work, we do not assume any paired data for the generative models and rely purely on images. Additionally, image layouts are difficult to express by text, so our self-boxed and self-segmented methods are complementary to text conditioning. Instance-Conditioned GAN (Casanova et al., 2021), Retrieval-augmented Diffusion (Blattmann et al., 2022) and KNN-Diffusion (Ashual et al., 2022) are three recent methods that utilize nearest neighbors as guidance signals in generative models. Like our work, these methods rely on conditional guidance from an unsupervised source; we differ from them by further providing more diverse spatial guidance, including (self-supervised) bounding boxes and segmentation masks.
Self-supervised learning in generative models. Self-supervised learning (Caron et al., 2020; Chen et al., 2020; Asano et al., 2020; Caron et al., 2021) has shown great potential for representation learning in many downstream tasks. As a consequence, it is also commonly explored in GANs for evaluation and analysis (Morozov et al., 2020), conditioning (Casanova et al., 2021; Mangla et al., 2022), stabilizing training (Chen et al., 2019), reducing labeling costs (Lučić et al., 2019) and avoiding mode collapse (Armandpour et al., 2021). Our work focuses on translating the benefits of self-supervised methods to the generative domain and providing flexible guidance signals to diffusion models at various image granularities. To analyze the feature representation of self-supervised models, Bordes et al. (2022) condition their diffusion model on self-supervised features for better visualization in data space. We instead condition on a compact clustering of the self-supervised features, and further introduce the flexibility of self-supervised learning into diffusion models for multi-granular image generation.
5 CONCLUSION
We have explored the potential of self-supervision signals for diffusion models and propose a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our framework provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments indicate that self-supervision signals are an adequate replacement for existing guidance methods that rely on annotated image-label pairs during training. Furthermore, both the self-boxed and self-segmented approaches demonstrate that we can acquire fine-grained control over the image content without any ground-truth bounding boxes or segmentation masks. Due to limited computational resources, we restricted our experiments to images of a maximum size of 64×64. For future work, it will be of interest to verify our findings on larger image resolutions. Ultimately, our goal is to enable the benefits of self-guided diffusion for unlabeled and more diverse datasets at scale, for which we believe this work is a promising first step.
Contents

1 Introduction
2 Approach
  2.1 Background
  2.2 Self-Guided Diffusion
3 Experiments
  3.1 Self-labeled Guidance
  3.2 Self-boxed Guidance
  3.3 Self-segmented Guidance
4 Related Work
5 Conclusion
A Main framework
B More quantitative results
  B.1 Correlation between NMI and FID in different feature backbones
  B.2 Precision and Recall in ImageNet32/64 dataset
  B.3 Cluster number ablation in self-boxed guidance
  B.4 Trend visualization of training loss and validation FID
C More experimental details
  C.1 UNet structure
  C.2 Training Parameter
  C.3 Dataset preparation
  C.4 LOST, STEGO algorithms
D More qualitative results
A MAIN FRAMEWORK
We illustrate the pipeline of our framework in Figure 5.
B MORE QUANTITATIVE RESULTS
B.1 CORRELATION BETWEEN NMI AND FID IN DIFFERENT FEATURE BACKBONES.
To assess the correlation between cluster quality and sample fidelity, we consider the Normalized Mutual Information (NMI), a mutual-information-derived metric commonly adopted to assess clustering quality against provided ground-truth labels. In Figure 6 we plot the relation between NMI and FID. For the label-supervised feature functions, NMI is unrelated to FID, but for the self-supervised functions, NMI and FID are negatively correlated, suggesting that NMI, a metric commonly applied to assess the quality of self-supervised methods, is also predictive of a backbone's usefulness in our setting.
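NMI can be computed directly from the cluster assignments and the ground-truth labels, e.g. with scikit-learn (toy arrays shown here in place of the real per-image vectors):

```python
from sklearn.metrics import normalized_mutual_info_score

# gt_labels: ImageNet classes; cluster_ids: k-means assignments per image.
gt_labels = [0, 0, 1, 1, 2]
cluster_ids = [5, 5, 2, 2, 9]
nmi = normalized_mutual_info_score(gt_labels, cluster_ids)  # 1.0 for this toy case
```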
B.2 PRECISION AND RECALL IN IMAGENET32/64 DATASET
We show additional precision and recall results on ImageNet in Table 5. Following the precision and recall evaluation code from ICGAN (Casanova et al., 2021), our self-labeled guidance also outperforms ground-truth labels in precision and remains competitive in recall.
B.3 CLUSTER NUMBER ABLATION IN SELF-BOXED GUIDANCE
In Table 6, we empirically evaluate the performance when we alter the cluster number in our self-boxed guidance. We find that performance increases from k=21 to k=100 and saturates at k=100.
B.4 TREND VISUALIZATION OF TRAINING LOSS AND VALIDATION FID
We visualize the trend of training loss and validation FID in Figure 7.
C MORE EXPERIMENTAL DETAILS
Training details. For our best results, we train 100 epochs on four A5000 GPUs (24G) for ImageNet. We train 800/800/400 epochs on one A6000 GPU (48G) for Pascal VOC, COCO_20K and COCO-Stuff, respectively. All qualitative results in this paper are produced in the same settings as mentioned above. We train and evaluate Pascal VOC, COCO_20K and COCO-Stuff at image size 64, and visualize the results by bilinear upsampling to 256, following (Liu et al., 2022).
Sampling details. We sample the guidance signal from the distribution of the training set in all our experiments. Each timestep nominally requires two network forward evaluations (NFEs), one conditional and one unconditional; we reduce this by concatenating the conditional and unconditional signals along the batch dimension, so that we only need one forward evaluation per timestep.
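A sketch of this batching trick; representing the null guidance as an all-zero vector is an implementation assumption:

```python
import torch

def guided_eps_single_pass(eps_model, x_t, t, k, w):
    """Evaluate the conditional and unconditional branches of Eq. (4) in a
    single forward pass by stacking them along the batch dimension."""
    k_null = torch.zeros_like(k)               # null guidance (assumption)
    x_in = torch.cat([x_t, x_t], dim=0)
    t_in = torch.cat([t, t], dim=0)
    k_in = torch.cat([k, k_null], dim=0)
    eps_cond, eps_uncond = eps_model(x_in, t_in, k_in).chunk(2, dim=0)
    return (1.0 - w) * eps_uncond + w * eps_cond
```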
Evaluation details. We use the common packages Clean-FID (Parmar et al., 2022) for FID calculation and torch-fidelity (Obukhov et al., 2020) for IS calculation. For IS, we use the standard 10-split setting. We only report IS on ImageNet, as it may not be an appropriate metric for non-object-centric datasets (Barratt & Sharma, 2018). For checkpoint selection, we evaluate every 10 epochs and pick the checkpoint with the minimal FID between the generated sample set and the train set.
C.1 UNET STRUCTURE
Guidance signal injection. We describe the details of guidance signal injection in Figure 8. The injection of self-labeled guidance and of self-boxed/segmented guidance is slightly different. The part common to all variants is the concatenation of the guidance embedding with the timestep embedding; the concatenated feature is sent to every block of the UNet. For the self-boxed/segmented guidance, we not only conduct the information fusion as above, but also incorporate the spatial inductive bias by concatenating the spatial guidance with the noisy input; the concatenated result is fed into the UNet.
Timestep embedding. We embed the raw timestep information by two-layer MLP: FC(512, 128)→SiLU→FC(128, 128).
Guidance embedding. The guidance is in the form of a one/multi-hot embedding in RK; we feed it into a two-layer MLP: FC(K, 256)→SiLU→FC(256, 256), and then feed the resulting guidance signal into the UNet as shown in Figure 8.
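Both embedding MLPs, as specified above, in PyTorch; the 512-dimensional input to the timestep MLP is assumed to be a sinusoidal encoding of t:

```python
import torch.nn as nn

# Timestep embedding MLP: FC(512, 128) -> SiLU -> FC(128, 128), applied to
# a 512-dimensional encoding of t (the input dimension follows the text).
timestep_mlp = nn.Sequential(nn.Linear(512, 128), nn.SiLU(), nn.Linear(128, 128))

# Guidance embedding MLP: FC(K, 256) -> SiLU -> FC(256, 256) on the
# one/multi-hot guidance vector k; K is the number of clusters.
K = 5000
guidance_mlp = nn.Sequential(nn.Linear(K, 256), nn.SiLU(), nn.Linear(256, 256))
```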
Cross-attention. When training on non-object-centric datasets, we additionally tokenize the guidance signal into several tokens, following Imagen (Saharia et al., 2022). We concatenate these tokens with the image tokens (a feature map can be reshaped into tokens via RW×H×C → RC×WH), and the cross-attention (Rombach et al., 2022; Blattmann et al., 2022) is conducted as CA(m, concat[k, m]). Due to the quadratic complexity of the transformer (Katharopoulos et al., 2020; Lu et al., 2021), we only apply cross-attention to lower-resolution feature maps.
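A sketch of this guided cross-attention with a standard multi-head attention module; the reshaping follows the token mapping stated above (here realized as a (B, H·W, C) token layout):

```python
import torch
import torch.nn as nn

def guided_cross_attention(attn, m, k_tokens):
    """attn: nn.MultiheadAttention(embed_dim=C, num_heads=..., batch_first=True).
    m: low-resolution feature map (B, C, H, W); k_tokens: guidance tokens
    (B, N, C). Computes CA(m, concat[k, m])."""
    B, C, H, W = m.shape
    m_tokens = m.flatten(2).transpose(1, 2)       # (B, H*W, C)
    ctx = torch.cat([k_tokens, m_tokens], dim=1)  # (B, N + H*W, C)
    out, _ = attn(m_tokens, ctx, ctx)             # query, key, value
    return out.transpose(1, 2).reshape(B, C, H, W)
```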
C.2 TRAINING PARAMETER
3×32×32 model, 4 GPUs, ImageNet32

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256

Optimizer: AdamW
Learning rate: 3e-4
Batch size: 128
EMA: 0.9999
Dropout: 0.0
Training hardware: 4 × A5000 (24G)
Training epochs: 100
Weight decay: 0.01
3×64×64 model, 4 GPUs, ImageNet64

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256

Optimizer: AdamW
Learning rate: 1e-4
Batch size: 48
EMA: 0.9999
Dropout: 0.0
Training hardware: 4 × A5000 (24G)
Training epochs: 100
Weight decay: 0.01
3×64×64 model, 1 GPU, Pascal VOC, COCO_20K, COCO-Stuff

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Context token number: 8
Context dim: 32
Diffusion noise schedule: linear
Sampling timesteps: 256

Optimizer: AdamW
Learning rate: 1e-4
Batch size: 80
EMA: 0.9999
Dropout: 0.0
Training hardware: 1 × A6000 (45G)
Training epochs: 800/800/400
Weight decay: 0.01
C.3 DATASET PREPARATION
The preparation of the unbalanced dataset. There are 50,000 images in the validation set of ImageNet with 1,000 classes (50 instances each). We index the classes from 0 to 999; for each class ci, the number of retained instances is ⌊i × 50/1000⌋ = ⌊i/20⌋.
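A sketch of this construction, using the floor formula as reconstructed above; val_images_of_class is a hypothetical accessor returning the 50 validation images of a class:

```python
def unbalanced_subset(val_images_of_class, num_classes=1000, per_class=50):
    """Class i keeps floor(i * per_class / num_classes) of its instances,
    i.e. floor(i / 20) for the ImageNet validation set."""
    subset = []
    for i in range(num_classes):
        n_i = (i * per_class) // num_classes
        subset.extend(val_images_of_class(i)[:n_i])
    return subset
```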
Pascal VOC. We use the standard split from (Siméoni et al., 2021). It has 12,031 training images. As there is no validation set for the Pascal VOC dataset, we only evaluate FID on the train set. We sample 10,000 images and use 10,000 randomly-cropped 64-sized train images as the reference set for FID evaluation.
COCO_20K. We follow the split from (Siméoni et al., 2021; Vo et al., 2020; Lin et al., 2014). COCO_20K is a subset of the COCO2014 trainval dataset, consisting of 19,817 randomly chosen images, used in unsupervised object discovery (Siméoni et al., 2021; Vo et al., 2020). We sample 10,000 images and use 10,000 randomly-cropped 64-sized train images as the reference set for FID evaluation.
COCO-Stuff. It has a train set of 49,629 images and a validation set of 2,175 images, where the original classes are merged into 27 (15 stuff and 12 things) high-level categories. We use the dataset split following (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang et al., 2022). We sample 10,000 images and use 10,000 train/validation images as the reference set for FID evaluation.
C.4 LOST, STEGO ALGORITHMS
LOST algorithm details. We pad the original image so that it can be patchified and fed into the ViT architecture (Dosovitskiy et al., 2021), and feed the padded image into LOST using the official source code¹. LOST can also be utilized in a two-stage approach to discover multiple objects; due to its complexity, we opt for single-object discovery in this paper.
STEGO algorithm details. We follow the official source code² and apply padding so that the original image can be fed into the ViT architecture to extract the self-segmented guidance signal.
For the COCO-Stuff dataset, we directly use the official pretrained weights. For Pascal VOC, we train STEGO ourselves using the official hyperparameters.
In STEGO's k-NN pre-processing, the number of neighbors is 7. The segmentation head of STEGO is a two-layer MLP (with ReLU activation) that outputs a 70-dimensional feature. The learning rate is 5e-4 and the batch size is 64.
D MORE QUALITATIVE RESULTS
¹ https://github.com/valeoai/LOST
² https://github.com/mhamilton723/STEGO
Guidance signal from training set:

1. What is the focus and contribution of the paper regarding diffusion models and self-supervision?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any concerns or questions about the application scenarios, clustering method, and generation process?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper introduces the idea of self-supervision into the guidance of diffusion models. Specifically, whereas previous work required datasets paired with annotations to train the model or an auxiliary network, this paper uses a self-supervised approach to obtain the labels required for the guidance signal during training and sampling, eliminating the drawbacks of using annotated data. The authors provide guidance at different levels of granularity: image level, object level, and pixel level. Quantitative and qualitative experiments show the effectiveness of their approach.
Strengths And Weaknesses
Strengths
The overall idea of this paper is interesting; the use of self-supervised labeling eliminates the reliance on the availability, correctness, and unbiasedness of image annotations.
The writing is logical and easy to understand.
Weaknesses
The motivation of this paper comes from making the supervised guidance of [1] and [2] self-supervised, and it mainly builds on the ideas of [2]. However, the biggest problem is that the application scenarios of this paper's method are not as flexible as those of [1] and [2], even though you consider self-annotations to be a substitute for ground-truth annotations. The methods in [1] and [2] are able to specify specific, meaningful categories to guide the generation, while your method, due to its use of clustering, cannot specify the generated category and can only generate images according to the clustering results of the guide images. Moreover, in the experiments, the ImageNet dataset has only 1,000 classes while you use 5,000 clusters, which makes it unclear whether the clustering and generation results are meaningful.
Following from the above, I think the performance strongly depends on the clustering results and the choice of the function f. However, you have not provided an analysis of how the clustering affects generation, nor the clustering experimental results.
Some of the details are not clear. For the self-boxed guidance, I have not figured out how the bounding box is generated and why its size is still W×H, how it is associated with the position and size of the object, and whether it is possible to visualize the resulting W×H matrix.
The image resolutions of the datasets used for the experiments are 32×32 and 64×64, which are too small to be practical. Can the method compete with [1] and [2] at larger resolutions?
[1] Diffusion Models Beat GANs on Image Synthesis
[2] Classifier-Free Diffusion Guidance
Clarity, Quality, Novelty And Reproducibility
The idea is interesting, but the experiments should be further improved.
ICLR | Title
Self-Guided Diffusion Models
Abstract
Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, guidance requires a large amount of image-annotation pairs for training and is thus dependent on their availability, correctness and unbiasedness. In this paper, we eliminate the need for such annotation by instead leveraging the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments on single-label and multi-label image datasets demonstrate that self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels, especially on unbalanced data. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. Self-guided diffusion is simple, flexible and expected to profit from deployment at scale.
1 INTRODUCTION
The image fidelity of diffusion models is spectacularly enhanced by conditioning on class labels (Dhariwal & Nichol, 2021). Classifier guidance goes a step further and offers control over the alignment with the class label, by using the classifier gradient to guide the image generation (Dhariwal & Nichol, 2021). Classifier-free guidance (Ho & Salimans, 2021) replaces the dedicated classifier with a diffusion model that is trained by randomly setting the condition to the special non-label class. This has proven a fruitful research line for several other condition modalities, such as text (Saharia et al., 2022; Ramesh et al., 2021), image layout (Rombach et al., 2022), visual neighbors (Ashual et al., 2022), and image features (Giannone et al., 2022). However, all these conditioning and guidance methods require ground-truth annotations. This is an unrealistic and too costly assumption in many domains. For example, medical images require domain experts to annotate very high-resolution data, which is infeasible to do exhaustively (Panteli et al., 2021). In this paper, we propose to remove the necessity of any ground-truth annotation for guidance diffusion models.
We are inspired by progress in self-supervised learning (Chen et al., 2020; Caron et al., 2021), which encodes data, and especially images, into semantically meaningful latent vectors without using any label information. It usually does so by solving a pretext task (Zhang et al., 2017; Gidaris et al., 2018; Asano et al., 2020; He et al., 2020) on image-level to remove the necessity of labels. This annotationfree paradigm enables the representation learning to upscale to larger and more diverse (image) datasets (Gao et al., 2021). Recently, the holistic image-level self-supervision has been extended to more expressive dense representations, including bounding boxes, e.g., (Siméoni et al., 2021; Melas-Kyriazi et al., 2022) and pixel-precise segmentation masks, e.g., (Hamilton et al., 2022; Ziegler & Asano, 2022). Some self-supervised learning methods even outperform supervised alternatives (He et al., 2020; Caron et al., 2021). We hypothesize that for diffusion models, self-supervision may also provide a flexible and competitive, possibly even stronger guidance signal than ground-truth labeled guidance.
In this paper, we propose self-guided diffusion, a framework for image generation using guided diffusion without the need for any annotated image-label pairs. The framework encompasses a feature extraction function and a self-annotation function, that are compatible with recent selfsupervised learning advances. Furthermore, we leverage the flexibility of self-supervised learning
to generalize the guidance signal from the holistic image level to (unsupervised) local bounding boxes and segmentation masks for more fine-grained guidance. We demonstrate the potential of our proposal on single-label and multi-label image datasets, where self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation.
2 APPROACH
Before detailing our self-guided diffusion framework, we provide a brief background on diffusion models and the classifier-free guidance technique.
2.1 BACKGROUND
Diffusion models. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) gradually add noise to an image x0 until the original signal is fully diminished. By learning to reverse this process one can turn random noise xT into images. This diffusion process is modeled as a Gaussian process with Markovian structure:
q(xt|xt−1) := N (xt; √ 1− βtxt−1, βtI), q(xt|x0) := N (xt; √ αtx0, (1− αt)I, (1)
where β1, . . . , βT is a fixed variance schedule on which we define αt := 1− βt and αt := ∏t s=1 αs. All latent variables have the same dimensionality as the image x0 and differ by the proportion of the retained signal and added noise.
Learning the reverse process reduces to learning a denoiser xt ∼ q(xt|x0) that recovers the original image as (xt − (1− αt) θ(xt, t))/ √ αt ≈ x0. Ho et al. (2020) optimize the network parameters θ by minimizing the error of the noise prediction:
L(θ) = E ,x,t [ || θ(xt, t)− ||22 ] , (2)
in which ∼ N (0, I), x ∈ D is a sample from the training dataset D and the noise prediction function θ(·) are encouraged to be as close as possible to .
The standard sampling (Ho et al., 2020) requires many neural function evaluations to get good quality samples. Instead, the faster Denoising Diffusion Implicit Models (DDIM) sampler (Song et al., 2021a) has a non-Markovian sampling process:
xt−1 = √ αt−1
( xt −
√ 1− αt θ(xt, t)√
αt
) + √ 1− αt−1 − σ2t · θ(xt, t) + σt , (3)
where ∼ N (0, I) is standard Gaussian noise independent of xt.
Classifier-free guidance. To trade off mode coverage and sample fidelity in a conditional diffusion model, Dhariwal & Nichol (2021) propose to guide the image generation process using the gradients of a classifier, with the additional cost of having to train the classifier on noisy images. Motivated by this drawback, Ho & Salimans (2021) introduce label-conditioned guidance that does not require a classifier. They obtain a combination of a conditional and unconditional network in a single model, by randomly dropping the guidance signal c during training. After training, it empowers the model with progressive control over the degree of alignment between the guidance signal and the sample by varying the guidance strength w:
̃θ(xt, t; c, w) = (1− w) θ(xt, t) + w θ(xt, t; c). (4)
A larger w leads to greater alignment with the guidance signal, and vice versa. Classifier-free guidance (Ho & Salimans, 2021) provides progressive control over the specific guidance direction at the expense of labor-consuming data annotation. In this paper, we propose to remove the necessity of data annotation using a self-guided principle based on self-supervised learning.
2.2 SELF-GUIDED DIFFUSION
The equations describing diffusion by classifier-free guidance implicitly assume dataset D and its images each come with a single manually annotated class label. We prefer to make the label requirement explicit. We denote the human annotation process as the function ξ(x;D, C) : D → C, where C defines the annotation taxonomy, and plug this into Equation (4):
̃θ(xt, t; ξ(x; D, C), w) = (1− w) θ(xt, t) + w θ(xt, t; ξ(x; D, C)). (5)
We propose to replace the supervised labeling process ξ with a self-supervised process that requires no human annotation:
̃θ(xt, t; fψ(gφ(x; D); D), w) = (1− w) θ(xt, t) + w θ(xt, t; fψ(gφ(x; D); D), (6)
where g is a self-supervised feature extraction function parameterized by φ that maps the input data to feature space H, g : x → gφ(x),∀x ∈ D, and f is a self-annotation function parameterized by ψ to map the raw feature representation to the ultimate guidance signal k, fψ : gφ(·;D) → k. The guidance signal k can be any form of annotation, e.g., label, box, pixel, that can be paired with an image, which we derive by k=fψ(gφ(x; D); D). The choice of the self-annotation function f can be non-parametric by heuristically searching over dataset D based on the extracted feature gφ(·; D), or parametric by fine-tuning on the feature map gφ(·; D).
For the noise prediction function θ(·), we adopt the traditional UNet network architecture (Ronneberger et al., 2015) due to its superior image generation performance, following (Ho et al., 2020; Song et al., 2021b; Ramesh et al., 2022; Saharia et al., 2022).
Stemming from this general framework, we present three methods working at different spatial granularities, all without relying on any ground-truth labels. Specifically, we cover image-level, box-level and pixel-level guidance by setting the feature extraction function gφ(·), self-annotation function fψ(·), and guidance signal k to an approximate form.
Self-labeled guidance. To achieve self-labeled guidance, we need a self-annotation function f that produces a representative guidance signal k ∈ RK . Firstly, we need an embedding function gφ(x),x ∈ D which provides semantically meaningful image-level guidance for the model. We obtain gφ(·) in a self-supervised manner by mapping from image space, gφ(·) : RW×H×3 → RC , where W and H are image width and height and C is the feature dimension. We may use any type of feature for the feature embedding function g, which we will vary and validate in the experiments. As the image-level feature gφ(·; D) is not compact enough for guidance, we further conduct a non-parametric clustering algorithm, e.g., k-means, as our self-annotation function f . For all features gφ(·), we obtain the self-labeled guidance via self-annotation function fψ(·) : RC → RK . Motivated by Rolfe (2017), we use a one-hot embedding k ∈ RK for each image to achieve a compact guidance.
We inject the guidance information into the noise prediction function θ by concatenating it with timestep embedding t and feed the concatenated information concat[t,k] into every block of the UNet. Thus, the noise prediction function θ is rewritten as:
θ(xt, t; k) = θ(xt,concat[t, k]), (7)
where k=fψ(gφ(x; D); D) is the self-annotated image-level guidance signal. For simplicity, we ignore the self-annotation function fψ(·) here and in the later text. Self-labeled guidance focuses on the image-level global guidance. Next we consider a more fine-grained spatial guidance.
Self-boxed guidance. Bounding boxes specify the location of an object in an image (Ren et al., 2015; Carion et al., 2020) and complements the content information of class labels. Our selfboxed guidance approach aims to attain this signal via self-supervised models. We represent the bounding box as a feature map (W ×H) rather than coordinates ([X,Y,W,H]). We propose the selfannotation function f that obtains a bounding box ks ∈ RW×H by mapping from feature space H to the bounding box space via fψ(·; D) : RW×H×C → RW×H , and inject the guidance signal by concatenating in the channel dimension: xt := concat[xt, ks] Usually in self-supervised learning, the derived bounding box is class-agnostic (Vo et al., 2020; 2021). To inject a self-supervised pseudo label to further enhance the guidance signal, we again resort to clustering to obtain k and concatenate
it with the time embedding t := concat[t,k]. To incorporate such guidance, we reformulate the noise prediction function θ as:
θ(xt, t; ks,k) = θ(concat[xt,ks],concat[t,k]), (8)
in which ks is the self-supervised box guidance obtained by self-annotation functions fψ, k is the self-supervised image-level guidance from clustering. The design of fψ is flexible as long as it obtains self-supervised bounding boxes by fψ(·; D) : RW×H×C → RW×H . Self-boxed guidance guides the diffusion model by boxes, which specifies the box area in which the object will be generated. Sometimes, we may need an even finer granularity, e.g., pixels, which we detail next.
Self-segmented guidance. Compared to a bounding box, a segmentation mask is a more finegrained signal. Additionally, a multichannel mask is more expressive than a binary foregroundbackground mask. Therefore, we propose a self-annotation function f that acts as a plug-in built on feature gφ(·; D) to extract the segmentation mask ks via function mapping fψ(·; D) : RW×H×C → RW×H×K , where K is the number of segmentation clusters.
To inject the self-segmented guidance into the noise prediction function θ, we consider two pathways for injection of such guidance. We first concatenate the segmentation mask to xt in the channel dimension, xt := concat[xt, ks], to retain the spatial inductive bias of the guidance signal. Secondly, we also incorporate the image-level guidance to further amplify the guidance signal along the channel dimension. As the segmentation mask from the self-annotation function fψ already contains image-level information, we do not apply the image-level clustering as before in our selflabeled guidance. Instead, we directly derive the image-level guidance from the self-annotation result fψ(·) via spatial maximum pooling: RW×H×K → RK , and feed the image-level guidance k̂ into the noise prediction function via concatenating it with the timestep embedding t:=concat[t, k̂]. The concatenated results will be sent to every block of the UNet. In the end, the overall noise prediction function for self-segmented guidance is formulated as:
θ(xt, t; ks, k̂) = θ(concat[xt,ks], concat[t, k̂]), (9)
in which ks is the spatial mask guidance obtained from self-annotation function f , k̂ is a multi-hot image-level guidance derived from the self-supervised learning mask ks.
We have described three variants of self-guidances by setting the feature extraction function gφ(·), self-annotation function fψ(·), guidance signal k to an approximate form. In the end, we arrive at three noise prediction functions θ, which we utilize for diffusion model training and sampling, following the standard guided (Ho & Salimans, 2021) diffusion approach as detailed in Section 2.1.
3 EXPERIMENTS
In this section, we aim to answer the overarching question: Can we substitute ground-truth annotations with self-annotations? First, we consider the image-label setting, in which we examine what kind of self-labeling is required to improve image fidelity. Next, we look at image-bounding box pairs. Finally, we examine whether it is possible to gain fine-grained control with self-labeled image-segmentation pairs. We first present the general settings relevant for all experiments.
Evaluation metric. We evaluate both diversity and fidelity of the generated images by the Fréchet Inception Distance (FID) (Heusel et al., 2017), as it is the de facto metric for the evaluation of generative methods, e.g., (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019; Saharia et al., 2022). It provides a symmetric measure of the distance between two distributions in the feature space of Inception-V3 (Szegedy et al., 2016). We use FID as our main metric for the sampling quality.
Baselines & implementation details. Throughout our experiments, we always use the diffusion model following (Ho et al., 2020). We train with timestep T=1000, guidance drop probability p=0.1 and the linear variance schedule. For sampling, we set the guidance strength w=2 and deploy a clipping operation in every sampling timestep. As the baseline for a guided diffusion model with ground-truth labels we follow classifier-free guidance (Ho & Salimans, 2021). We use DDIM (Song et al., 2021a) samplers with 250 steps, σt=0. All hyperparameter of our self-guided diffusion and the
baselines are the same, allowing us to compare methods under a fixed compute budget. For details of the learning rate, optimizer, and hyperparameters, we refer to Appendix C. All code will be released.
3.1 SELF-LABELED GUIDANCE
We use ImageNet32/64 (Deng et al., 2009) to validate the efficacy of self-labeled guidance. For better evaluation of sampling quality, we also adopt the Inception Score (IS) (Salimans et al., 2016), following the common practice on this dataset (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019). IS measures how well a model fits into the full ImageNet class distribution.
Choice of feature extraction function g. We first measure the influence of the feature extraction function g used before clustering. We consider two supervised feature backbones: ResNet50 (He et al., 2016) and ViTB/16 (Dosovitskiy et al., 2021), and four selfsupervised backbones: SimCLR (Chen et al., 2020), MAE (He et al., 2022), MSN (Assran et al., 2022) and DINO (Caron et al., 2021). To assure a fair comparison we use 10k clusters for all architectures. From the results in Table 1, we make the following observations. First, features from the supervised ResNet50, and ViT-B/16 lead to a satisfactory FID performance, at the expense of relatively limited diversity (low IS). However, they still require label
annotation, which we strive to avoid in our work. Second, among the self-supervised feature extraction functions, the MSN- and DINO-pretrained ViT backbones have the best trade-off in terms of both FID and IS. They even improve over the label-supervised backbones. This implies the label assignment for the guidance is not unique, pretext labels on top of self-supervised learning can still provide influential guidance signal in comparison with human annotated labels. Also, the diversity of label-supervised ViT-B/16 is much lower than self-supervised ViT-B/16 with an IS of 7.81 vs. 10.41, suggesting that self-supervised guidance leads to more unbiased representation in comparison to supervised guidance. From now on we pick the DINO ViT-B/16 architecture as our self-supervised feature extraction function g.
Effect of number of clusters. Next, we ablate the influence of the number of clusters on the overall sampling quality. We consider 1, 10, 100, 500, 1,000, 5,000 and 10,000 clusters on the extracted CLS token from the DINO ViT-B/16 feature. For efficient, yet uniform, comparison we only run 20 epochs on ImageNet32. To put our sampling results in perspective, we also provide FID and IS results for DDPM and classifier-free guidance. From the result in Figure 1 we first observe that when the cluster number is ranging from 1 to 5,000, our model’s performance monotonously increases and
always surpasses the DDPM model. Beyond 1,000 clusters, we are competitive with classifier-free guidance using ground-truth labels. For 5,000 clusters, there is a sweet spot where we outperform classifier-free guidance with an FID of 16.4 vs. 17.9 and an IS of 9.94 vs. 10.35, see also the generated images in Figure 1. The performance of FID starts to deteriorate from 5,000 to 10,000 clusters. We conclude that self-labeled guidance outperforms DDPM without any guidance beyond a single cluster, is competitive with classifier-free guidance beyond 1,000 clusters, and is even able to outperform guidance by ground-truth labels for 5,000 clusters. From now on we use 5,000 clusters for self-labeled guidance on ImageNet.
Varying guidance strength w. Next we consider the influence of the guidance strength w on our sampling results. As the validation set of ImageNet32 is strictly balanced, we also consider an unbalanced setting which is more similar to real-world deployment. Under both settings we compare the FID between our self-labeled guidance and ground-truth guidance. We train both models for 100 epochs. For the standard ImageNet32 validation setting in Figure 2a, our method achieves a 17.8% improvement for respective optimal guidance strength of the two methods. Self-labeled guidance is especially effective for lower values of w. We observe similar trends for the unbalanced setting in Figure 2b, be it that the overall FID results are slightly higher for both methods. The improvement increases to 18.7%. We conjecture this is due to the unbalanced nature of the k-means algorithm (Last et al., 2017), and clustering based on the statistics of the overall dataset can potentially lead to more robust performance in unbalanced setting.
Self-labeled comparisons on ImageNet32/64. We compare our self-labeled guidance with groundtruth labels guidance, which utilizes the technique of classifier-free guidance (Ho & Salimans, 2021). We train all experiments for 100 epochs which take about 6 days to converge on four RTX A5000 GPUs. All hyperparameters are the same between the two methods to make the comparison as fair as possible. Results on ImageNet32 and ImageNet64 are in Table 2. Similar to Dhariwal & Nichol (2021), we observe that any guidance setting improves considerably over the unconditional & no-guidance model. Surprisingly, our self-labeled model even outperforms the ground-truth labels by a large gap in terms of FID of 2.9 and 4.7 points respectively. We hypothesize that the ground-truth taxonomy might be suboptimal for learning generative models and the self-supervised clusters offer a better guidance signal due to better alignment with the visual similarity of the images. This suggests that the label-conditioned guidance from Ho & Salimans (2021) can be completely replaced by guidance from self-supervision, which would enable guided diffusion models to learn from even larger (unlabeled) datasets than feasible today.
3.2 SELF-BOXED GUIDANCE
We report on Pascal VOC and COCO_20K to validate the efficacy of self-boxed guidance. To obtain class-agnostic object bounding boxes, we use LOST (Siméoni et al., 2021) as our self-annotation function f . We report train FID for Pascal VOC and train/validation FID for COCO_20K. For
image-level clustering to attain the guidance signal, we empirically found k=100 works best on both datasets as those datasets are relatively small-scale in images and labels compared to ImageNet. We train our diffusion model for 800 epochs with input image size 64×64. See Appendix C for more details.
Self-boxed comparisons on Pascal VOC and COCO_20K. For the ground-truth labels guidance baseline, we condition on a class embedding. Since there are now multiple objects per image, we represent the ground-truth class with a multi-hot embedding. Aside from the class embedding which is multi-hot in our method, all other settings remain the same for a fair comparison. The results in Table 3, confirm that the multi-hot class embedding is indeed effective for multi-label datasets, improving over the no-guidance model by a large margin. This improvement comes at the cost of manually annotating multiple classes per image. Self-boxed guidance further improves upon this result, by reducing the FID by an additional 5.1 and 3.3 points respectively without using any groundtruth annotation. In Figure 3, we show our method generates diverse and semantically well-aligned images.
3.3 SELF-SEGMENTED GUIDANCE
Finally, we validate the efficacy of self-segmented guidance on Pascal VOC and COCO-Stuff. For COCO-Stuff we follow the split from (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang
et al., 2022), with a train set of 49,629 images and a validation set of 2,175 images. Classes are merged into 27 (15 stuff and 12 things) categories. For self-segmented guidance we apply STEGO (Hamilton et al., 2022) as our self-annotation function f . We set the cluster number to 27 for COCO-Stuff, and 21 for Pascal VOC, following STEGO. We train all models on images of size 64×64, for 800 epochs on Pascal VOC, and for 400 epochs on COCO-Stuff. We report the train FID for Pascal VOC and both train and validation FID for COCO-Stuff. More details on the dataset and experimental setup are provided in Appendix C.
Self-segmented comparisons on Pascal VOC and COCO-Stuff. We compare against both the ground-truth labels guidance baseline from the previous section and a model trained with ground-truth semantic masks guidance. The results in Table 4 demonstrate that our self-segmented guidance still outperforms the ground-truth labels guidance baseline on both datasets. The comparison between ground-truth labels and segmentation masks reveals an improvement in image quality when using the more fine-grained segmentation mask as the condition signal. But these segmentation masks are one of the most costly types of image annotations that require every pixel to be labeled. Our self-segmented approach avoids the necessity for annotations while narrowing the performance gap, and more importantly offering fine-grained control over the image layout. We demonstrate this controllability with examples in Figure 4. These examples further highlight a robustness against noise in the segmentation masks, which our method acquires naturally due to training with noisy segmentations.
4 RELATED WORK
Conditional generative models. Earlier works on generative adversarial networks (GANs) have already observed improvements in image quality by conditioning on ground-truth labels (Mirza & Osindero, 2014; Brock et al., 2019; Casanova et al., 2021). Recently, conditional diffusion models have reported similar improvements, while also offering a great amount of controllability via classifierfree guidance by training on images paired with textual descriptions (Ramesh et al., 2021; 2022; Saharia et al., 2022), semantic segmentations (Wang et al., 2022), or other modalities (Bordes et al., 2022; Yang et al., 2022; Song et al., 2022). Our work also aims to realize the benefits of conditioning and guidance, but instead of relying on additional human-generated supervision signals, we leverage the strength of pretrained self-supervised visual encoders.
Zhou et al. (2022) train a GAN for text-to-image generation without any image-text pairs, by leveraging the CLIP (Radford et al., 2021) model that was pretrained on a large collection of paired data. In this work, we do not assume any paired data for the generative models and rely purely on images. Additionally, image layouts are difficult to be expressed by text, thus our self-boxed and selfsegmented methods are complementary to text conditioning. Instance-Conditioned GAN (Casanova et al., 2021), Retrieval-augmented Diffusion (Blattmann et al., 2022) and KNN-diffusion (Ashual et al., 2022) are three recent methods that utilize nearest neighbors as guidance signals in generative models. Similar to our work, these methods rely on conditional guidance from an unsupervised source, we differ from them by further attempting to provide more diverse spatial guidance, including (self-supervised) bounding boxes and segmentation masks.
Self-supervised learning in generative models. Self-supervised learning (Caron et al., 2020; Chen et al., 2020; Asano et al., 2020; Caron et al., 2021) has shown great potential for representation learning in many downstream tasks. As a consequence, it is also commonly explored in GAN for evaluation and analysis (Morozov et al., 2020), conditioning (Casanova et al., 2021; Mangla et al., 2022), stabilizing training (Chen et al., 2019), reducing labeling costs (Lučić et al., 2019) and avoiding mode collapse (Armandpour et al., 2021). Our work focuses on translating the benefits of self-supervised methods to the generative domain and providing flexible guidance signals to diffusion models at various image granularities. In order to analyze the feature representation from self-supervised models, Bordes et al. (2022) condition on self-supervised features in their diffusion model for better visualization in data space. We instead condition on the compact clustering after the self-supervised feature, and further introduce the elasticity of self-supervised learning into diffusion models for multi-granular image generation.
5 CONCLUSION
We have explored the potential of self-supervision signals for diffusion models and propose a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our framework provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments indicate that self-supervision signals are an adequate replacement for existing guidance methods that generate images by relying on annotated image-label pairs during training. Furthermore, both self-boxed and self-segmented approaches demonstrate that we can acquire fine-grained control over the image content, without any ground-truth bounding boxes or segmentation masks. Due to limited computational resources, we restricted our experiments to images of a maximum size of 64×64. For future work, it will be of interest to verify our findings on larger image resolutions. Ultimately, our goal is to enable the benefits of self-guided diffusion for unlabeled and more diverse datasets at scale, wherein we believe this work is a promising first step.
1 Introduction 1
2 Approach 2
2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.2 Self-Guided Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3 Experiments 4
3.1 Self-labeled Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.2 Self-boxed Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.3 Self-segmented Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4 Related Work 8
5 Conclusion 9
A Main framework 15
B More quantitative results 16
B.1 Correlation between NMI and FID in different feature backbones. . . . . . . . . . 16
B.2 Precision and Recall in ImageNet32/64 dataset . . . . . . . . . . . . . . . . . . . 16
B.3 Cluster number ablation in self-boxed guidance . . . . . . . . . . . . . . . . . . . 16
B.4 Trend visualization of training loss and validation FID . . . . . . . . . . . . . . . 17
C More experimental details 17
C.1 UNet structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
C.2 Training Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
C.3 Dataset preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
C.4 LOST, STEGO algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
D More qualitative results 19
A MAIN FRAMEWORK
We illustrate the pipeline of our framework in Figure 5.
B MORE QUANTITATIVE RESULTS
B.1 CORRELATION BETWEEN NMI AND FID IN DIFFERENT FEATURE BACKBONES.
To assess the correlation between cluster quality and sample fidelity, we consider the Normalized Mutual Information (NMI), which is commonly adopted as a mutual information-derived metric to assess clustering quality based on provided ground truth labels. In Figure 6 we plot the connection between NMI and FID. For the label-supervised functions, the NMI is unrelated to the FID, but for the self-supervised functions, NMI and FID are negatively correlated, suggesting that NMI — a metric commonly applied to assess the quality of self-supervised methods — is also predictive of the model’s usefulness in our setting.
B.2 PRECISION AND RECALL IN IMAGENET32/64 DATASET
We show the extra results of ImageNet on precision and recall in Table 5. We follow the evaluation code of precision and recall from ICGAN (Casanova et al., 2021), our self-labeled guidance also outperforms ground-truth labels in precision and remains competitive in recall.
B.3 CLUSTER NUMBER ABLATION IN SELF-BOXED GUIDANCE
In table 6, we empirically evaluate the performance when we alter the cluster number in our self-boxed guidance. We find the performance will increase from k = 21 to k = 100, and saturated at k = 100.
B.4 TREND VISUALIZATION OF TRAINING LOSS AND VALIDATION FID
We visualize the trend of training loss and validation FID in Figure 7.
C MORE EXPERIMENTAL DETAILS
Training details. For our best results, we train 100 epochs on 4 GPUs of A5000 (24G) in ImageNet. We train 800/800/400 epochs on 1GPU of A6000 (48G) in Pascal VOC, COCO_20K, COCO-Stuff, respectively. All qualitative results in this paper are trained in the same setting as mentioned above. We train and evaluate the Pascal VOC, COCO_20K, COCO-Stuff in image size 64, and visualize them by bilinear up sampling to 256, following (Liu et al., 2022).
Sampling details. In all experiments we sample the guidance signal from the distribution of the training set. Classifier-free guidance nominally requires two neural function evaluations (NFEs) per timestep; we reduce this to a single NFE per timestep by concatenating the conditional and unconditional signals along the batch dimension.
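For illustration, a minimal sketch of this batched evaluation (the model signature, the null-condition embedding, and tensor shapes are assumptions, not the exact implementation):

```python
import torch

def guided_eps(model, x_t, t, cond, null_cond, w):
    # One forward pass on a doubled batch instead of two passes per timestep.
    x_in = torch.cat([x_t, x_t], dim=0)
    t_in = torch.cat([t, t], dim=0)
    c_in = torch.cat([cond, null_cond.expand_as(cond)], dim=0)
    eps_cond, eps_uncond = model(x_in, t_in, c_in).chunk(2, dim=0)
    return (1 - w) * eps_uncond + w * eps_cond   # Eq. (4)
```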
Evaluation details. We use the common packages Clean-FID (Parmar et al., 2022) and torch-fidelity (Obukhov et al., 2020) for FID and IS calculation, respectively. For IS, we use the standard 10-split setting; we only report IS on ImageNet, as it may not be an appropriate metric for non-object-centric datasets (Barratt & Sharma, 2018). For checkpoint selection, we evaluate every 10 epochs and pick the checkpoint with the minimal FID between the generated sample set and the train set.
C.1 UNET STRUCTURE
Guidance signal injection. We describe the details of guidance signal injection in Figure 8. The injection of self-labeled guidance and of self-boxed/segmented guidance differs slightly. The part common to both is the concatenation of the guidance embedding with the timestep embedding; the concatenated feature is sent to every block of the UNet. For self-boxed/segmented guidance, we not only perform this fusion but also incorporate a spatial inductive bias by concatenating the spatial guidance with the noisy input; the concatenated result is fed into the UNet.
Timestep embedding. We embed the raw timestep information with a two-layer MLP: FC(512, 128) → SiLU → FC(128, 128).
Guidance embedding. The guidance is a one-/multi-hot embedding in R^K; we feed it into a two-layer MLP: FC(K, 256) → SiLU → FC(256, 256), and then feed the resulting guidance signal into the UNet as shown in Figure 8.
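A minimal PyTorch sketch of the two MLPs as described, assuming the raw timestep is first expanded to a 512-dimensional (e.g., sinusoidal) feature and K is the number of guidance classes/clusters:

```python
import torch.nn as nn

timestep_mlp = nn.Sequential(            # 512-d timestep feature -> 128-d embedding
    nn.Linear(512, 128), nn.SiLU(), nn.Linear(128, 128))

K = 5000                                 # number of guidance clusters (assumed value)
guidance_mlp = nn.Sequential(            # one/multi-hot guidance in R^K -> 256-d embedding
    nn.Linear(K, 256), nn.SiLU(), nn.Linear(256, 256))
```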
Cross-attention. When training on non-object-centric datasets, we additionally tokenize the guidance signal into several tokens, following Imagen (Saharia et al., 2022). We concatenate these tokens with the image tokens (a typical feature map can be transposed into tokens via R^{W×H×C} → R^{C×WH}), and cross-attention (Rombach et al., 2022; Blattmann et al., 2022) is conducted as CA(m, concat[k, m]). Due to the quadratic complexity of transformers (Katharopoulos et al., 2020; Lu et al., 2021), we only apply cross-attention on lower-resolution feature maps.
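As an illustration of CA(m, concat[k, m]), a sketch using nn.MultiheadAttention as a stand-in for the actual attention blocks (all shapes are assumptions):

```python
import torch
import torch.nn as nn

B, C, H, W, n_ctx = 2, 32, 8, 8, 8                 # n_ctx: context token number (Appendix C.2)
fmap = torch.randn(B, C, H, W)                      # lower-resolution feature map
k_tokens = torch.randn(B, n_ctx, C)                 # tokenized guidance signal (assumed)

m = fmap.flatten(2).transpose(1, 2)                 # R^{BxCxHxW} -> R^{BxHWxC} image tokens
kv = torch.cat([k_tokens, m], dim=1)                # concat[k, m]
attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
out, _ = attn(query=m, key=kv, value=kv)            # CA(m, concat[k, m])
```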
C.2 TRAINING PARAMETERS
3×32×32 model, 4 GPUs, ImageNet32

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Optimizer: AdamW
Learning rate: 3e-4
Batch size: 128
EMA: 0.9999
Dropout: 0.0
Weight decay: 0.01
Training hardware: 4 × A5000 (24G)
Training epochs: 100
3×64×64 model, 4 GPUs, ImageNet64

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Optimizer: AdamW
Learning rate: 1e-4
Batch size: 48
EMA: 0.9999
Dropout: 0.0
Weight decay: 0.01
Training hardware: 4 × A5000 (24G)
Training epochs: 100
3×64×64 model, 1 GPU, Pascal VOC, COCO_20K, COCO-Stuff

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Context token number: 8
Context dim: 32
Optimizer: AdamW
Learning rate: 1e-4
Batch size: 80
EMA: 0.9999
Dropout: 0.0
Weight decay: 0.01
Training hardware: 1 × A6000 (45G)
Training epochs: 800/800/400
C.3 DATASET PREPARATION
The preparation of the unbalanced dataset. There are 50,000 images in the validation set of ImageNet, covering 1,000 classes (50 instances each). We index the classes from 0 to 999; for each class c_i, the number of instances kept is ⌊i × 50/1000⌋ = ⌊i/20⌋.
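A minimal check of the per-class counts under this rule:

```python
counts = [(i * 50) // 1000 for i in range(1000)]   # instances kept for class i
assert counts[0] == 0 and counts[999] == 49
print(sum(counts))                                  # total images in the unbalanced set: 24500
```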
Pascal VOC. We use the standard split from Siméoni et al. (2021), which has 12,031 training images. As there is no validation set for the Pascal VOC dataset, we only evaluate FID on the train set. We sample 10,000 images and use 10,000 randomly-cropped 64×64 train images as the reference set for FID evaluation.
COCO_20K. We follow the split from Siméoni et al. (2021); Vo et al. (2020); Lin et al. (2014). COCO_20K is a subset of the COCO2014 trainval dataset, consisting of 19,817 randomly chosen images used in unsupervised object discovery (Siméoni et al., 2021; Vo et al., 2020). We sample 10,000 images and use 10,000 randomly-cropped 64×64 train images as the reference set for FID evaluation.
COCO-Stuff. It has a train set of 49,629 images and a validation set of 2,175 images, where the original classes are merged into 27 (15 stuff and 12 things) high-level categories. We use the dataset split following Hamilton et al. (2022); Ji et al. (2019); Cho et al. (2021); Zhang et al. (2022). We sample 10,000 images and use 10,000 train/validation images as the reference set for FID evaluation.
C.4 LOST, STEGO ALGORITHMS
LOST algorithm details. We pad the original image so that it can be patchified and fed into the ViT architecture (Dosovitskiy et al., 2021), and feed the padded image into the LOST architecture using the official source code.¹ LOST can also be utilized in a two-stage approach to provide multiple objects; due to its complexity, we opt for single-object discovery only in this paper.
STEGO algorithm details. We follow the official source code² and apply padding so that the original image can be fed into the ViT architecture to extract the self-segmented guidance signal.
For the COCO-Stuff dataset, we directly use the official pretrained weights. For Pascal VOC, we train STEGO ourselves using the official hyperparameters.
In STEGO's pre-processing, the number of neighbors for the k-NN is 7. The segmentation head of STEGO is composed of a two-layer MLP (with ReLU activation) and outputs a 70-dimensional feature. The learning rate is 5e-4 and the batch size is 64.
D MORE QUALITATIVE RESULTS
¹ https://github.com/valeoai/LOST
² https://github.com/mhamilton723/STEGO
Guidance signal from training set: (qualitative samples shown in the corresponding figures).

1. What is the focus and contribution of the paper on self-supervised diffusion models?
2. What are the strengths of the proposed approach, particularly in its performance compared to other methods?
3. What are the weaknesses of the paper regarding the experimental results and limitations of the method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper proposes self-supervised diffusion models. The primary purpose is to eliminate the need for explicit ground-truth guidance in conditional diffusion models. The proposed method leverages clustering to obtain a "pseudo label" of an image, then generates images that align with the pseudo label using diffusion models with classifier-free guidance. These labels can be categorical labels of images, object bounding boxes, or pixel-level segmentation masks of an image. The authors compared their self-guided diffusion models with both unconditional diffusion models and ground-truth-guided diffusion models under different tasks. They also analyzed the hyper-parameters of their model, including the number of clusters and the guidance strength.
Strengths And Weaknesses
Strength: The proposed self-guided diffusion model outperforms the unconditional diffusion model and the real-label-guided diffusion model, showing great potential in this direction.
Weakness:
My biggest concern about this paper is that the experimental results are not convincing enough. The experiments are only conducted on images of 32x32 and 64x64 resolutions, which are not large enough. I suggest the authors conduct experiments on larger datasets with images of larger resolutions (say 256x256). I also hope to see results on the commonly used CIFAR10 and CIFAR100 datasets.
There are two primary advantages of conditional image generation. The first is that it divides the original generation space into several subspaces. By narrowing the domain or the space, the generation task becomes more specified, and the image quality can be significantly better. The second is that it provides more controllability to the model. This paper improves the quality of generated images by accomplishing the first one. However, it is still unclear to me how the method actually improves controllability. The generated labels or masks do not seem to carry enough semantic meaning to guide the generation of specific visual content.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well-written. The formulations are clear.
Quality: This work proposes a new model, but more work is needed to validate its effectiveness.
Originality: The proposed model is somewhat novel. |
ICLR
Title
Self-Guided Diffusion Models
Abstract
Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, guidance requires a large amount of image-annotation pairs for training and is thus dependent on their availability, correctness and unbiasedness. In this paper, we eliminate the need for such annotation by instead leveraging the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments on single-label and multi-label image datasets demonstrate that self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels, especially on unbalanced data. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. Self-guided diffusion is simple, flexible and expected to profit from deployment at scale.
1 INTRODUCTION
The image fidelity of diffusion models is spectacularly enhanced by conditioning on class labels (Dhariwal & Nichol, 2021). Classifier guidance goes a step further and offers control over the alignment with the class label, by using the classifier gradient to guide the image generation (Dhariwal & Nichol, 2021). Classifier-free guidance (Ho & Salimans, 2021) replaces the dedicated classifier with a diffusion model that is trained by randomly setting the condition to the special non-label class. This has proven a fruitful research line for several other condition modalities, such as text (Saharia et al., 2022; Ramesh et al., 2021), image layout (Rombach et al., 2022), visual neighbors (Ashual et al., 2022), and image features (Giannone et al., 2022). However, all these conditioning and guidance methods require ground-truth annotations. This is an unrealistic and overly costly assumption in many domains. For example, medical images require domain experts to annotate very high-resolution data, which is infeasible to do exhaustively (Panteli et al., 2021). In this paper, we propose to remove the necessity of any ground-truth annotation for guided diffusion models.
We are inspired by progress in self-supervised learning (Chen et al., 2020; Caron et al., 2021), which encodes data, and especially images, into semantically meaningful latent vectors without using any label information. It usually does so by solving a pretext task (Zhang et al., 2017; Gidaris et al., 2018; Asano et al., 2020; He et al., 2020) at the image level to remove the necessity of labels. This annotation-free paradigm enables representation learning to scale to larger and more diverse (image) datasets (Gao et al., 2021). Recently, holistic image-level self-supervision has been extended to more expressive dense representations, including bounding boxes, e.g., (Siméoni et al., 2021; Melas-Kyriazi et al., 2022) and pixel-precise segmentation masks, e.g., (Hamilton et al., 2022; Ziegler & Asano, 2022). Some self-supervised learning methods even outperform supervised alternatives (He et al., 2020; Caron et al., 2021). We hypothesize that for diffusion models, self-supervision may also provide a flexible and competitive, possibly even stronger guidance signal than ground-truth labeled guidance.
In this paper, we propose self-guided diffusion, a framework for image generation using guided diffusion without the need for any annotated image-label pairs. The framework encompasses a feature extraction function and a self-annotation function that are compatible with recent self-supervised learning advances. Furthermore, we leverage the flexibility of self-supervised learning
to generalize the guidance signal from the holistic image level to (unsupervised) local bounding boxes and segmentation masks for more fine-grained guidance. We demonstrate the potential of our proposal on single-label and multi-label image datasets, where self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation.
2 APPROACH
Before detailing our self-guided diffusion framework, we provide a brief background on diffusion models and the classifier-free guidance technique.
2.1 BACKGROUND
Diffusion models. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) gradually add noise to an image x0 until the original signal is fully diminished. By learning to reverse this process one can turn random noise xT into images. This diffusion process is modeled as a Gaussian process with Markovian structure:
q(x_t | x_{t−1}) := N(x_t; √(1−β_t) x_{t−1}, β_t I),    q(x_t | x_0) := N(x_t; √ᾱ_t x_0, (1−ᾱ_t) I),    (1)

where β_1, . . . , β_T is a fixed variance schedule on which we define α_t := 1 − β_t and ᾱ_t := ∏_{s=1}^{t} α_s. All latent variables have the same dimensionality as the image x_0 and differ by the proportion of the retained signal and added noise.
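A minimal sketch of drawing x_t ∼ q(x_t | x_0) from Eq. (1); the paper specifies a linear variance schedule but not its endpoints, so the common DDPM values below are assumptions:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # assumed linear schedule endpoints
alphas_bar = torch.cumprod(1.0 - betas, dim=0)      # \bar{alpha}_t

def q_sample(x0, t, noise):
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise
```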
Learning the reverse process reduces to learning a denoiser ε_θ that, given x_t ∼ q(x_t | x_0), recovers the original image as (x_t − √(1−ᾱ_t) ε_θ(x_t, t)) / √ᾱ_t ≈ x_0. Ho et al. (2020) optimize the network parameters θ by minimizing the error of the noise prediction:

L(θ) = E_{ε,x,t} [ ‖ε_θ(x_t, t) − ε‖²₂ ],    (2)

in which ε ∼ N(0, I), x ∈ D is a sample from the training dataset D, and the noise prediction ε_θ(·) is encouraged to be as close as possible to ε.
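Correspondingly, a sketch of one training step for Eq. (2), reusing T and q_sample from the sketch above (the model signature is an assumption):

```python
import torch
import torch.nn.functional as F

def loss_step(model, x0):
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    return F.mse_loss(model(x_t, t), noise)   # ||eps_theta(x_t, t) - eps||^2
```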
The standard sampling (Ho et al., 2020) requires many neural function evaluations to get good quality samples. Instead, the faster Denoising Diffusion Implicit Models (DDIM) sampler (Song et al., 2021a) has a non-Markovian sampling process:

x_{t−1} = √ᾱ_{t−1} · (x_t − √(1−ᾱ_t) ε_θ(x_t, t)) / √ᾱ_t + √(1−ᾱ_{t−1} − σ_t²) · ε_θ(x_t, t) + σ_t ε,    (3)

where ε ∼ N(0, I) is standard Gaussian noise independent of x_t.
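A sketch of one DDIM update per Eq. (3); with σ_t = 0, as used in the experiments, the step becomes deterministic:

```python
import torch

def ddim_step(x_t, eps, a_t, a_prev, sigma_t=0.0):
    # a_t, a_prev: \bar{alpha}_t and \bar{alpha}_{t-1}; eps: predicted noise.
    x0_pred = (x_t - (1.0 - a_t) ** 0.5 * eps) / a_t ** 0.5
    direction = (1.0 - a_prev - sigma_t ** 2) ** 0.5 * eps
    return a_prev ** 0.5 * x0_pred + direction + sigma_t * torch.randn_like(x_t)
```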
Classifier-free guidance. To trade off mode coverage and sample fidelity in a conditional diffusion model, Dhariwal & Nichol (2021) propose to guide the image generation process using the gradients of a classifier, with the additional cost of having to train the classifier on noisy images. Motivated by this drawback, Ho & Salimans (2021) introduce label-conditioned guidance that does not require a classifier. They obtain a combination of a conditional and unconditional network in a single model, by randomly dropping the guidance signal c during training. After training, it empowers the model with progressive control over the degree of alignment between the guidance signal and the sample by varying the guidance strength w:

ε̃_θ(x_t, t; c, w) = (1−w) ε_θ(x_t, t) + w ε_θ(x_t, t; c).    (4)
A larger w leads to greater alignment with the guidance signal, and vice versa. Classifier-free guidance (Ho & Salimans, 2021) provides progressive control over the specific guidance direction at the expense of labor-consuming data annotation. In this paper, we propose to remove the necessity of data annotation using a self-guided principle based on self-supervised learning.
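For concreteness, a minimal sketch of the random condition dropping used to train the joint conditional/unconditional model (the integer-id encoding and the reserved null id are assumptions):

```python
import torch

NULL_ID = 5000  # e.g., one index past the K condition ids (assumed encoding of the non-label class)

def drop_condition(c, p=0.1):
    # Replace the condition with the null id for a random fraction p of the batch.
    mask = torch.rand(c.shape[0], device=c.device) < p
    return torch.where(mask, torch.full_like(c, NULL_ID), c)
```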
2.2 SELF-GUIDED DIFFUSION
The equations describing diffusion by classifier-free guidance implicitly assume dataset D and its images each come with a single manually annotated class label. We prefer to make the label requirement explicit. We denote the human annotation process as the function ξ(x; D, C) : D → C, where C defines the annotation taxonomy, and plug this into Equation (4):

ε̃_θ(x_t, t; ξ(x; D, C), w) = (1−w) ε_θ(x_t, t) + w ε_θ(x_t, t; ξ(x; D, C)).    (5)
We propose to replace the supervised labeling process ξ with a self-supervised process that requires no human annotation:

ε̃_θ(x_t, t; f_ψ(g_φ(x; D); D), w) = (1−w) ε_θ(x_t, t) + w ε_θ(x_t, t; f_ψ(g_φ(x; D); D)),    (6)
where g is a self-supervised feature extraction function parameterized by φ that maps the input data to feature space H, g : x → gφ(x),∀x ∈ D, and f is a self-annotation function parameterized by ψ to map the raw feature representation to the ultimate guidance signal k, fψ : gφ(·;D) → k. The guidance signal k can be any form of annotation, e.g., label, box, pixel, that can be paired with an image, which we derive by k=fψ(gφ(x; D); D). The choice of the self-annotation function f can be non-parametric by heuristically searching over dataset D based on the extracted feature gφ(·; D), or parametric by fine-tuning on the feature map gφ(·; D).
For the noise prediction function ε_θ(·), we adopt the traditional UNet network architecture (Ronneberger et al., 2015) due to its superior image generation performance, following (Ho et al., 2020; Song et al., 2021b; Ramesh et al., 2022; Saharia et al., 2022).
Stemming from this general framework, we present three methods working at different spatial granularities, all without relying on any ground-truth labels. Specifically, we cover image-level, box-level and pixel-level guidance by setting the feature extraction function g_φ(·), the self-annotation function f_ψ(·), and the guidance signal k to appropriate forms.
Self-labeled guidance. To achieve self-labeled guidance, we need a self-annotation function f that produces a representative guidance signal k ∈ R^K. Firstly, we need an embedding function g_φ(x), x ∈ D, which provides semantically meaningful image-level guidance for the model. We obtain g_φ(·) in a self-supervised manner by mapping from image space, g_φ(·) : R^{W×H×3} → R^C, where W and H are the image width and height and C is the feature dimension. We may use any type of feature for the feature embedding function g, which we vary and validate in the experiments. As the image-level feature g_φ(·; D) is not compact enough for guidance, we further apply a non-parametric clustering algorithm, e.g., k-means, as our self-annotation function f. For all features g_φ(·), we obtain the self-labeled guidance via the self-annotation function f_ψ(·) : R^C → R^K. Motivated by Rolfe (2017), we use a one-hot embedding k ∈ R^K for each image to achieve a compact guidance.
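A sketch of this self-annotation step with k-means; random features stand in for precomputed self-supervised (e.g., DINO CLS) features, and K is reduced from the 5,000 used on ImageNet to keep the sketch light:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
features = rng.standard_normal((10_000, 768)).astype(np.float32)  # stand-in for g_phi(x), x in D
K = 500                                                           # 5,000 in the ImageNet runs
ids = MiniBatchKMeans(n_clusters=K, batch_size=4096).fit_predict(features)
k_onehot = np.eye(K, dtype=np.float32)[ids]                       # guidance k in R^K per image
```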
We inject the guidance information into the noise prediction function ε_θ by concatenating it with the timestep embedding t and feeding the concatenated information concat[t, k] into every block of the UNet. Thus, the noise prediction function ε_θ is rewritten as:

ε_θ(x_t, t; k) = ε_θ(x_t, concat[t, k]),    (7)
where k = f_ψ(g_φ(x; D); D) is the self-annotated image-level guidance signal. For simplicity, we omit the self-annotation function f_ψ(·) here and in what follows. Self-labeled guidance focuses on image-level, global guidance. Next, we consider a more fine-grained spatial guidance.
Self-boxed guidance. Bounding boxes specify the location of an object in an image (Ren et al., 2015; Carion et al., 2020) and complement the content information of class labels. Our self-boxed guidance approach aims to attain this signal via self-supervised models. We represent the bounding box as a feature map (W×H) rather than coordinates ([X, Y, W, H]). We propose a self-annotation function f that obtains a bounding box k_s ∈ R^{W×H} by mapping from feature space H to the bounding box space via f_ψ(·; D) : R^{W×H×C} → R^{W×H}, and inject the guidance signal by concatenating along the channel dimension: x_t := concat[x_t, k_s]. Usually in self-supervised learning, the derived bounding box is class-agnostic (Vo et al., 2020; 2021). To inject a self-supervised pseudo label that further enhances the guidance signal, we again resort to clustering to obtain k and concatenate
it with the time embedding t := concat[t, k]. To incorporate such guidance, we reformulate the noise prediction function ε_θ as:

ε_θ(x_t, t; k_s, k) = ε_θ(concat[x_t, k_s], concat[t, k]),    (8)
in which k_s is the self-supervised box guidance obtained by the self-annotation function f_ψ, and k is the self-supervised image-level guidance from clustering. The design of f_ψ is flexible as long as it obtains self-supervised bounding boxes via f_ψ(·; D) : R^{W×H×C} → R^{W×H}. Self-boxed guidance guides the diffusion model by boxes, which specify the area in which the object will be generated. Sometimes, we may need an even finer granularity, e.g., pixels, which we detail next.
Self-segmented guidance. Compared to a bounding box, a segmentation mask is a more fine-grained signal. Additionally, a multichannel mask is more expressive than a binary foreground-background mask. Therefore, we propose a self-annotation function f that acts as a plug-in built on the feature g_φ(·; D) to extract the segmentation mask k_s via the function mapping f_ψ(·; D) : R^{W×H×C} → R^{W×H×K}, where K is the number of segmentation clusters.
To inject the self-segmented guidance into the noise prediction function ε_θ, we consider two pathways. We first concatenate the segmentation mask to x_t along the channel dimension, x_t := concat[x_t, k_s], to retain the spatial inductive bias of the guidance signal. Secondly, we also incorporate image-level guidance to further amplify the guidance signal along the channel dimension. As the segmentation mask from the self-annotation function f_ψ already contains image-level information, we do not apply image-level clustering as before in our self-labeled guidance. Instead, we directly derive the image-level guidance from the self-annotation result f_ψ(·) via spatial maximum pooling: R^{W×H×K} → R^K, and feed the image-level guidance k̂ into the noise prediction function by concatenating it with the timestep embedding t := concat[t, k̂]. The concatenated result is sent to every block of the UNet. In the end, the overall noise prediction function for self-segmented guidance is formulated as:
ε_θ(x_t, t; k_s, k̂) = ε_θ(concat[x_t, k_s], concat[t, k̂]),    (9)

in which k_s is the spatial mask guidance obtained from the self-annotation function f, and k̂ is a multi-hot image-level guidance derived from the self-supervised mask k_s.
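A sketch of the two injection pathways; the shapes and the per-pixel cluster ids are assumptions:

```python
import torch
import torch.nn.functional as F

B, K, H, W = 2, 27, 64, 64
seg = torch.randint(0, K, (B, H, W))                   # per-pixel cluster ids from f_psi (assumed)
k_s = F.one_hot(seg, K).permute(0, 3, 1, 2).float()    # mask guidance k_s, shape (B, K, H, W)
k_hat = k_s.amax(dim=(2, 3))                           # spatial max pool -> multi-hot in R^K

x_t = torch.randn(B, 3, H, W)
x_in = torch.cat([x_t, k_s], dim=1)                    # concat[x_t, k_s] along channels
# k_hat is then concatenated with the timestep embedding t, as in Eq. (9).
```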
We have described three variants of self-guidance, obtained by setting the feature extraction function g_φ(·), the self-annotation function f_ψ(·), and the guidance signal k to appropriate forms. In the end, we arrive at three noise prediction functions ε_θ, which we utilize for diffusion model training and sampling, following the standard guided diffusion approach (Ho & Salimans, 2021) detailed in Section 2.1.
3 EXPERIMENTS
In this section, we aim to answer the overarching question: Can we substitute ground-truth annotations with self-annotations? First, we consider the image-label setting, in which we examine what kind of self-labeling is required to improve image fidelity. Next, we look at image-bounding box pairs. Finally, we examine whether it is possible to gain fine-grained control with self-labeled image-segmentation pairs. We first present the general settings relevant for all experiments.
Evaluation metric. We evaluate both diversity and fidelity of the generated images by the Fréchet Inception Distance (FID) (Heusel et al., 2017), as it is the de facto metric for the evaluation of generative methods, e.g., (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019; Saharia et al., 2022). It provides a symmetric measure of the distance between two distributions in the feature space of Inception-V3 (Szegedy et al., 2016). We use FID as our main metric for the sampling quality.
Baselines & implementation details. Throughout our experiments, we always use the diffusion model following Ho et al. (2020). We train with timestep T=1000, guidance drop probability p=0.1, and a linear variance schedule. For sampling, we set the guidance strength w=2 and deploy a clipping operation at every sampling timestep. As the baseline for a guided diffusion model with ground-truth labels, we follow classifier-free guidance (Ho & Salimans, 2021). We use DDIM (Song et al., 2021a) samplers with 250 steps and σ_t=0. All hyperparameters of our self-guided diffusion and the
baselines are the same, allowing us to compare methods under a fixed compute budget. For details of the learning rate, optimizer, and hyperparameters, we refer to Appendix C. All code will be released.
3.1 SELF-LABELED GUIDANCE
We use ImageNet32/64 (Deng et al., 2009) to validate the efficacy of self-labeled guidance. For better evaluation of sampling quality, we also adopt the Inception Score (IS) (Salimans et al., 2016), following the common practice on this dataset (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019). IS measures how well a model fits into the full ImageNet class distribution.
Choice of feature extraction function g. We first measure the influence of the feature extraction function g used before clustering. We consider two supervised feature backbones: ResNet50 (He et al., 2016) and ViT-B/16 (Dosovitskiy et al., 2021), and four self-supervised backbones: SimCLR (Chen et al., 2020), MAE (He et al., 2022), MSN (Assran et al., 2022) and DINO (Caron et al., 2021). To assure a fair comparison we use 10k clusters for all architectures. From the results in Table 1, we make the following observations. First, features from the supervised ResNet50 and ViT-B/16 lead to a satisfactory FID performance, at the expense of relatively limited diversity (low IS). However, they still require label
annotation, which we strive to avoid in our work. Second, among the self-supervised feature extraction functions, the MSN- and DINO-pretrained ViT backbones have the best trade-off in terms of both FID and IS. They even improve over the label-supervised backbones. This implies that the label assignment for the guidance is not unique: pseudo-labels derived from self-supervised features can provide a guidance signal as influential as human-annotated labels. Also, the diversity of the label-supervised ViT-B/16 is much lower than that of the self-supervised ViT-B/16, with an IS of 7.81 vs. 10.41, suggesting that self-supervised guidance leads to a less biased representation than supervised guidance. From now on we pick the DINO ViT-B/16 architecture as our self-supervised feature extraction function g.
Effect of number of clusters. Next, we ablate the influence of the number of clusters on the overall sampling quality. We consider 1, 10, 100, 500, 1,000, 5,000 and 10,000 clusters on the extracted CLS token from the DINO ViT-B/16 feature. For an efficient, yet uniform, comparison we only run 20 epochs on ImageNet32. To put our sampling results in perspective, we also provide FID and IS results for DDPM and classifier-free guidance. From the results in Figure 1 we first observe that when the cluster number ranges from 1 to 5,000, our model's performance monotonically increases and
always surpasses the DDPM model. Beyond 1,000 clusters, we are competitive with classifier-free guidance using ground-truth labels. For 5,000 clusters, there is a sweet spot where we outperform classifier-free guidance with an FID of 16.4 vs. 17.9 and an IS of 9.94 vs. 10.35, see also the generated images in Figure 1. The performance of FID starts to deteriorate from 5,000 to 10,000 clusters. We conclude that self-labeled guidance outperforms DDPM without any guidance beyond a single cluster, is competitive with classifier-free guidance beyond 1,000 clusters, and is even able to outperform guidance by ground-truth labels for 5,000 clusters. From now on we use 5,000 clusters for self-labeled guidance on ImageNet.
Varying guidance strength w. Next we consider the influence of the guidance strength w on our sampling results. As the validation set of ImageNet32 is strictly balanced, we also consider an unbalanced setting which is more similar to real-world deployment. Under both settings we compare the FID between our self-labeled guidance and ground-truth guidance. We train both models for 100 epochs. For the standard ImageNet32 validation setting in Figure 2a, our method achieves a 17.8% improvement at the respective optimal guidance strengths of the two methods. Self-labeled guidance is especially effective for lower values of w. We observe similar trends for the unbalanced setting in Figure 2b, be it that the overall FID results are slightly higher for both methods. The improvement increases to 18.7%. We conjecture this is due to the unbalanced nature of the k-means algorithm (Last et al., 2017); clustering based on the statistics of the overall dataset can potentially lead to more robust performance in the unbalanced setting.
Self-labeled comparisons on ImageNet32/64. We compare our self-labeled guidance with ground-truth label guidance, which utilizes the technique of classifier-free guidance (Ho & Salimans, 2021). We train all models for 100 epochs, which takes about 6 days to converge on four RTX A5000 GPUs. All hyperparameters are the same between the two methods to make the comparison as fair as possible. Results on ImageNet32 and ImageNet64 are in Table 2. Similar to Dhariwal & Nichol (2021), we observe that any guidance setting improves considerably over the unconditional, no-guidance model. Surprisingly, our self-labeled model even outperforms the ground-truth labels, by a large FID gap of 2.9 and 4.7 points, respectively. We hypothesize that the ground-truth taxonomy might be suboptimal for learning generative models and that the self-supervised clusters offer a better guidance signal due to better alignment with the visual similarity of the images. This suggests that the label-conditioned guidance from Ho & Salimans (2021) can be completely replaced by guidance from self-supervision, which would enable guided diffusion models to learn from even larger (unlabeled) datasets than feasible today.
3.2 SELF-BOXED GUIDANCE
We report on Pascal VOC and COCO_20K to validate the efficacy of self-boxed guidance. To obtain class-agnostic object bounding boxes, we use LOST (Siméoni et al., 2021) as our self-annotation function f . We report train FID for Pascal VOC and train/validation FID for COCO_20K. For
image-level clustering to attain the guidance signal, we empirically find that k=100 works best on both datasets, as they are relatively small-scale in images and labels compared to ImageNet. We train our diffusion model for 800 epochs with input image size 64×64. See Appendix C for more details.
Self-boxed comparisons on Pascal VOC and COCO_20K. For the ground-truth labels guidance baseline, we condition on a class embedding. Since there are now multiple objects per image, we represent the ground-truth classes with a multi-hot embedding. Aside from the source of the multi-hot class embedding, all other settings remain the same for a fair comparison. The results in Table 3 confirm that the multi-hot class embedding is indeed effective for multi-label datasets, improving over the no-guidance model by a large margin. This improvement comes at the cost of manually annotating multiple classes per image. Self-boxed guidance further improves upon this result, reducing the FID by an additional 5.1 and 3.3 points, respectively, without using any ground-truth annotation. In Figure 3, we show that our method generates diverse and semantically well-aligned images.
3.3 SELF-SEGMENTED GUIDANCE
Finally, we validate the efficacy of self-segmented guidance on Pascal VOC and COCO-Stuff. For COCO-Stuff we follow the split from (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang
et al., 2022), with a train set of 49,629 images and a validation set of 2,175 images. Classes are merged into 27 (15 stuff and 12 things) categories. For self-segmented guidance we apply STEGO (Hamilton et al., 2022) as our self-annotation function f . We set the cluster number to 27 for COCO-Stuff, and 21 for Pascal VOC, following STEGO. We train all models on images of size 64×64, for 800 epochs on Pascal VOC, and for 400 epochs on COCO-Stuff. We report the train FID for Pascal VOC and both train and validation FID for COCO-Stuff. More details on the dataset and experimental setup are provided in Appendix C.
Self-segmented comparisons on Pascal VOC and COCO-Stuff. We compare against both the ground-truth labels guidance baseline from the previous section and a model trained with ground-truth semantic mask guidance. The results in Table 4 demonstrate that our self-segmented guidance still outperforms the ground-truth labels guidance baseline on both datasets. The comparison between ground-truth labels and segmentation masks reveals an improvement in image quality when using the more fine-grained segmentation mask as the conditioning signal. But segmentation masks are among the most costly types of image annotation, requiring every pixel to be labeled. Our self-segmented approach avoids the necessity for annotations while narrowing the performance gap, and more importantly offers fine-grained control over the image layout. We demonstrate this controllability with examples in Figure 4. These examples further highlight a robustness against noise in the segmentation masks, which our method acquires naturally due to training with noisy segmentations.
4 RELATED WORK
Conditional generative models. Earlier works on generative adversarial networks (GANs) have already observed improvements in image quality by conditioning on ground-truth labels (Mirza & Osindero, 2014; Brock et al., 2019; Casanova et al., 2021). Recently, conditional diffusion models have reported similar improvements, while also offering a great amount of controllability via classifierfree guidance by training on images paired with textual descriptions (Ramesh et al., 2021; 2022; Saharia et al., 2022), semantic segmentations (Wang et al., 2022), or other modalities (Bordes et al., 2022; Yang et al., 2022; Song et al., 2022). Our work also aims to realize the benefits of conditioning and guidance, but instead of relying on additional human-generated supervision signals, we leverage the strength of pretrained self-supervised visual encoders.
Zhou et al. (2022) train a GAN for text-to-image generation without any image-text pairs, by leveraging the CLIP (Radford et al., 2021) model that was pretrained on a large collection of paired data. In this work, we do not assume any paired data for the generative models and rely purely on images. Additionally, image layouts are difficult to express with text, so our self-boxed and self-segmented methods are complementary to text conditioning. Instance-Conditioned GAN (Casanova et al., 2021), Retrieval-augmented Diffusion (Blattmann et al., 2022) and KNN-diffusion (Ashual et al., 2022) are three recent methods that utilize nearest neighbors as guidance signals in generative models. Similar to our work, these methods rely on conditional guidance from an unsupervised source; we differ from them by further providing more diverse spatial guidance, including (self-supervised) bounding boxes and segmentation masks.
Self-supervised learning in generative models. Self-supervised learning (Caron et al., 2020; Chen et al., 2020; Asano et al., 2020; Caron et al., 2021) has shown great potential for representation learning in many downstream tasks. As a consequence, it is also commonly explored in GANs for evaluation and analysis (Morozov et al., 2020), conditioning (Casanova et al., 2021; Mangla et al., 2022), stabilizing training (Chen et al., 2019), reducing labeling costs (Lučić et al., 2019) and avoiding mode collapse (Armandpour et al., 2021). Our work focuses on translating the benefits of self-supervised methods to the generative domain and providing flexible guidance signals to diffusion models at various image granularities. In order to analyze the feature representations of self-supervised models, Bordes et al. (2022) condition on self-supervised features in their diffusion model for better visualization in data space. We instead condition on a compact clustering of the self-supervised features, and further introduce the elasticity of self-supervised learning into diffusion models for multi-granular image generation.
1. What is the focus of the paper regarding diffusion models and feature representations?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its contribution, experimental scale, and baseline comparisons?
3. Do you have any concerns or questions about the analysis and interpretation of the results, especially regarding diversity measures and layout preservation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This work studies diffusion models guided by feature representations, which are learned in a self-supervised fashion. As a result, the generative (reverse diffusion) process can be conditioned on an image to preserve its semantic category. Such guidance is shown to improve the generated image quality in terms of IS and FID, when compared to an unguided process, and even to surpass that of models using ground-truth class labels. Extensions of the labels to bounding boxes and pixel-level segmentation further improve the image quality and allow the model to preserve the layout of an image used for guidance.
Strengths And Weaknesses
Pros
The paper is written and presented well overall. Replacing the ground-truth class guidance in diffusion models with a self-supervised signal is a very compelling idea. I also appreciate the extensions of the image-level guidance to spatial representations (the bounding boxes and segments). The small-scale experiments provide encouraging results with the self-supervised guidance being useful (improving over a baseline without any guidance) and, under some conditions, being even more effective than the guidance from manual annotation.
Cons
The overall contribution (both technical and empirical) is quite limited and boils down to replacing the ground-truth label guidance with a representation from models pre-trained in a self-supervised way. The experiments are on a small scale, with image resolution varying only from 32x32 to 64x64.
Another problem is the lack of strong baselines. Ho & Salimans (2022) report a substantially lower FID (and a higher IS) on ImageNet 64x64. What explains this discrepancy? Is it due to using DDIM vs. DDPM, and if so, how would it change with DDPM?
The provided analysis is quite limited. For example, Tab. 1 indicates that different methods for self-supervised representation result in different levels of image quality. I wish there was more insight as to why these differences arise, as there is also little correlation w.r.t. their performance on the downstream tasks for which they were pre-trained.
I am wondering why FID and IS are often referred to as diversity measures (Sec. 3). They are, at the very least, quite limited ones. Besides, it is bewildering to claim improved diversity w.r.t. unguided or classifier-free models, since the guidance severely constrains the diversity. Fig. 4 also illustrates this with the pixel-level guidance: the layout remains the same, while the classifier-free model produces diverse layouts. I would also be curious to see Recall curves instead (Kynkäänniemi, 2019).
Fig. 2: Why don’t the self-guided and ground-truth processes have the same FID and IS if w=0? Both would become unconditional / unguided models, hence should be equivalent. In view of Eq. 6, why even increase w beyond 1?
Minor: Sec. 2.2: What is the representation of the self-boxed guidance? Is it an HxW binary mask (e.g., 1 inside the box, 0 outside)? Sec. 3.1: "balanced"/"unbalanced" requires clarification ("balanced" w.r.t. what?).
Clarity, Quality, Novelty And Reproducibility
The presentation is clear and the paper is of good quality. However, both the technical and empirical novelty are modest. Technically, the class guidance is simply replaced by a feature representation from unsupervised models. For significant empirical novelty, the paper lacks experiments of sufficient scale and/or depth in the analysis of different feature representations used for guidance.
Reproducibility: There is sufficient implementation detail in the main text and the Appendix. The authors also promise to release the code publicly.
ICLR | Title
Self-Guided Diffusion Models
Abstract
Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, guidance requires a large amount of image-annotation pairs for training and is thus dependent on their availability, correctness and unbiasedness. In this paper, we eliminate the need for such annotation by instead leveraging the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments on single-label and multi-label image datasets demonstrate that self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels, especially on unbalanced data. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. Self-guided diffusion is simple, flexible and expected to profit from deployment at scale.
1 INTRODUCTION
The image fidelity of diffusion models is spectacularly enhanced by conditioning on class labels (Dhariwal & Nichol, 2021). Classifier guidance goes a step further and offers control over the alignment with the class label, by using the classifier gradient to guide the image generation (Dhariwal & Nichol, 2021). Classifier-free guidance (Ho & Salimans, 2021) replaces the dedicated classifier with a diffusion model that is trained by randomly setting the condition to the special non-label class. This has proven a fruitful research line for several other condition modalities, such as text (Saharia et al., 2022; Ramesh et al., 2021), image layout (Rombach et al., 2022), visual neighbors (Ashual et al., 2022), and image features (Giannone et al., 2022). However, all these conditioning and guidance methods require ground-truth annotations. This is an unrealistic and too costly assumption in many domains. For example, medical images require domain experts to annotate very high-resolution data, which is infeasible to do exhaustively (Panteli et al., 2021). In this paper, we propose to remove the necessity of any ground-truth annotation for guidance diffusion models.
We are inspired by progress in self-supervised learning (Chen et al., 2020; Caron et al., 2021), which encodes data, and especially images, into semantically meaningful latent vectors without using any label information. It usually does so by solving a pretext task (Zhang et al., 2017; Gidaris et al., 2018; Asano et al., 2020; He et al., 2020) on image-level to remove the necessity of labels. This annotationfree paradigm enables the representation learning to upscale to larger and more diverse (image) datasets (Gao et al., 2021). Recently, the holistic image-level self-supervision has been extended to more expressive dense representations, including bounding boxes, e.g., (Siméoni et al., 2021; Melas-Kyriazi et al., 2022) and pixel-precise segmentation masks, e.g., (Hamilton et al., 2022; Ziegler & Asano, 2022). Some self-supervised learning methods even outperform supervised alternatives (He et al., 2020; Caron et al., 2021). We hypothesize that for diffusion models, self-supervision may also provide a flexible and competitive, possibly even stronger guidance signal than ground-truth labeled guidance.
In this paper, we propose self-guided diffusion, a framework for image generation using guided diffusion without the need for any annotated image-label pairs. The framework encompasses a feature extraction function and a self-annotation function, that are compatible with recent selfsupervised learning advances. Furthermore, we leverage the flexibility of self-supervised learning
to generalize the guidance signal from the holistic image level to (unsupervised) local bounding boxes and segmentation masks for more fine-grained guidance. We demonstrate the potential of our proposal on single-label and multi-label image datasets, where self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation.
2 APPROACH
Before detailing our self-guided diffusion framework, we provide a brief background on diffusion models and the classifier-free guidance technique.
2.1 BACKGROUND
Diffusion models. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) gradually add noise to an image x0 until the original signal is fully diminished. By learning to reverse this process one can turn random noise xT into images. This diffusion process is modeled as a Gaussian process with Markovian structure:
$q(x_t \mid x_{t-1}) := \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right), \qquad q(x_t \mid x_0) := \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\right),$ (1)
where $\beta_1, \ldots, \beta_T$ is a fixed variance schedule on which we define $\alpha_t := 1-\beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$. All latent variables have the same dimensionality as the image $x_0$ and differ by the proportion of the retained signal and added noise.
Learning the reverse process reduces to learning a denoiser $\epsilon_\theta$ that recovers the original image from $x_t \sim q(x_t \mid x_0)$ as $\left(x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)\right)/\sqrt{\bar{\alpha}_t} \approx x_0$. Ho et al. (2020) optimize the network parameters $\theta$ by minimizing the error of the noise prediction:
$\mathcal{L}(\theta) = \mathbb{E}_{\epsilon, x, t}\left[\, \|\epsilon_\theta(x_t, t) - \epsilon\|_2^2 \,\right],$ (2)
in which $\epsilon \sim \mathcal{N}(0, I)$, $x \in \mathcal{D}$ is a sample from the training dataset $\mathcal{D}$, and the noise prediction function $\epsilon_\theta(\cdot)$ is encouraged to be as close as possible to $\epsilon$.
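To make the forward process and training objective concrete, here is a minimal sketch (our illustration, not the authors' released code; `eps_model` stands for any noise prediction network ε_θ, and the schedule endpoints are the common DDPM choices, an assumption here):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # linear variance schedule (assumed endpoints)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form (Equation 1)."""
    a_bar = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

def loss(eps_model, x0):
    """Noise prediction objective of Equation (2)."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)
    return ((eps_model(q_sample(x0, t, eps), t) - eps) ** 2).mean()
```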
The standard sampling procedure (Ho et al., 2020) requires many neural function evaluations to obtain good-quality samples. Instead, the faster Denoising Diffusion Implicit Models (DDIM) sampler (Song et al., 2021a) uses a non-Markovian sampling process:
$x_{t-1} = \sqrt{\bar{\alpha}_{t-1}} \left( \frac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}} \right) + \sqrt{1-\bar{\alpha}_{t-1} - \sigma_t^2} \cdot \epsilon_\theta(x_t, t) + \sigma_t \epsilon,$ (3)
where $\epsilon \sim \mathcal{N}(0, I)$ is standard Gaussian noise independent of $x_t$.
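A single DDIM update can then be sketched as follows (our illustration; `alpha_bars` is defined as in the sketch above, timesteps are scalar indices, and σ_t=0 recovers the deterministic sampler used in the paper):

```python
def ddim_step(eps_model, xt, t, t_prev, sigma_t=0.0):
    """One DDIM update following Equation (3)."""
    eps = eps_model(xt, t)
    a_bar, a_bar_prev = alpha_bars[t], alpha_bars[t_prev]
    x0_pred = (xt - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()  # predicted x_0
    direction = (1 - a_bar_prev - sigma_t ** 2).sqrt() * eps  # direction pointing to x_t
    noise = sigma_t * torch.randn_like(xt) if sigma_t > 0 else 0.0
    return a_bar_prev.sqrt() * x0_pred + direction + noise
```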
Classifier-free guidance. To trade off mode coverage and sample fidelity in a conditional diffusion model, Dhariwal & Nichol (2021) propose to guide the image generation process using the gradients of a classifier, with the additional cost of having to train the classifier on noisy images. Motivated by this drawback, Ho & Salimans (2021) introduce label-conditioned guidance that does not require a classifier. They obtain a combination of a conditional and unconditional network in a single model, by randomly dropping the guidance signal c during training. After training, it empowers the model with progressive control over the degree of alignment between the guidance signal and the sample by varying the guidance strength w:
$\tilde{\epsilon}_\theta(x_t, t; c, w) = (1-w)\,\epsilon_\theta(x_t, t) + w\,\epsilon_\theta(x_t, t; c).$ (4)
A larger w leads to greater alignment with the guidance signal, and vice versa. Classifier-free guidance (Ho & Salimans, 2021) provides progressive control over the specific guidance direction at the expense of labor-intensive data annotation. In this paper, we propose to remove the necessity of data annotation using a self-guided principle based on self-supervised learning.
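For illustration, classifier-free training and guidance can be sketched as follows (our own sketch; the null-class index, the argument order of `eps_model`, and the drop probability p=0.1 are assumptions consistent with Section 3):

```python
import torch

NULL = 0  # hypothetical index reserved for the special non-label class

def drop_condition(c, p=0.1):
    """Randomly replace the condition with the null class during training."""
    drop = torch.rand(c.shape[0], device=c.device) < p
    return torch.where(drop, torch.full_like(c, NULL), c)

def guided_eps(eps_model, xt, t, c, w):
    """Classifier-free guidance mixing of Equation (4)."""
    e_uncond = eps_model(xt, t, torch.full_like(c, NULL))
    e_cond = eps_model(xt, t, c)
    return (1 - w) * e_uncond + w * e_cond
```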
2.2 SELF-GUIDED DIFFUSION
The equations describing diffusion by classifier-free guidance implicitly assume that each image in dataset D comes with a single manually annotated class label. We prefer to make the label requirement explicit. We denote the human annotation process as the function ξ(x; D, C) : D → C, where C defines the annotation taxonomy, and plug this into Equation (4):
$\tilde{\epsilon}_\theta(x_t, t; \xi(x; \mathcal{D}, \mathcal{C}), w) = (1-w)\,\epsilon_\theta(x_t, t) + w\,\epsilon_\theta(x_t, t; \xi(x; \mathcal{D}, \mathcal{C})).$ (5)
We propose to replace the supervised labeling process ξ with a self-supervised process that requires no human annotation:
$\tilde{\epsilon}_\theta(x_t, t; f_\psi(g_\phi(x; \mathcal{D}); \mathcal{D}), w) = (1-w)\,\epsilon_\theta(x_t, t) + w\,\epsilon_\theta(x_t, t; f_\psi(g_\phi(x; \mathcal{D}); \mathcal{D})),$ (6)
where g is a self-supervised feature extraction function, parameterized by φ, that maps the input data to feature space H, g : x → gφ(x), ∀x ∈ D, and f is a self-annotation function, parameterized by ψ, that maps the raw feature representation to the ultimate guidance signal k, fψ : gφ(·; D) → k. The guidance signal k can be any form of annotation, e.g., a label, box, or pixel-level mask, that can be paired with an image, and we derive it as k = fψ(gφ(x; D); D). The choice of the self-annotation function f can be non-parametric, by heuristically searching over dataset D based on the extracted features gφ(·; D), or parametric, by fine-tuning on the feature map gφ(·; D).
For the noise prediction function εθ(·), we adopt the traditional UNet architecture (Ronneberger et al., 2015) due to its superior image generation performance, following (Ho et al., 2020; Song et al., 2021b; Ramesh et al., 2022; Saharia et al., 2022).
Stemming from this general framework, we present three methods working at different spatial granularities, all without relying on any ground-truth labels. Specifically, we cover image-level, box-level and pixel-level guidance by setting the feature extraction function gφ(·), the self-annotation function fψ(·), and the guidance signal k to appropriate forms.
Self-labeled guidance. To achieve self-labeled guidance, we need a self-annotation function f that produces a representative guidance signal k ∈ RK. First, we need an embedding function gφ(x), x ∈ D, which provides semantically meaningful image-level guidance for the model. We obtain gφ(·) in a self-supervised manner by mapping from image space, gφ(·) : RW×H×3 → RC, where W and H are the image width and height and C is the feature dimension. We may use any type of feature for the feature embedding function g, which we vary and validate in the experiments. As the image-level feature gφ(·; D) is not compact enough for guidance, we further apply a non-parametric clustering algorithm, e.g., k-means, as our self-annotation function f. For all features gφ(·), we obtain the self-labeled guidance via the self-annotation function fψ(·) : RC → RK. Motivated by Rolfe (2017), we use a one-hot embedding k ∈ RK for each image to achieve compact guidance; a minimal sketch of this clustering step is given below.
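As an illustration of this clustering step, the following sketch (our own; variable names are hypothetical and features are assumed to be precomputed offline, e.g., DINO ViT-B/16 CLS tokens) derives one-hot pseudo-labels with k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

def self_labels(feats: np.ndarray, K: int = 5000, seed: int = 0) -> np.ndarray:
    """Non-parametric self-annotation f_psi: cluster (N, C) features into K
    pseudo-labels and return their one-hot encodings, i.e., k in R^K per image."""
    ids = KMeans(n_clusters=K, random_state=seed, n_init=10).fit_predict(feats)
    onehot = np.zeros((feats.shape[0], K), dtype=np.float32)
    onehot[np.arange(feats.shape[0]), ids] = 1.0
    return onehot
```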
We inject the guidance information into the noise prediction function εθ by concatenating it with the timestep embedding t and feeding the concatenated information concat[t, k] into every block of the UNet. Thus, the noise prediction function is rewritten as:
$\epsilon_\theta(x_t, t; k) = \epsilon_\theta(x_t, \mathrm{concat}[t, k]),$ (7)
where k = fψ(gφ(x; D); D) is the self-annotated image-level guidance signal. For brevity, we omit the self-annotation function fψ(·) from the notation in the remainder of the text. Self-labeled guidance focuses on image-level global guidance. Next we consider a more fine-grained spatial guidance.
Self-boxed guidance. Bounding boxes specify the location of an object in an image (Ren et al., 2015; Carion et al., 2020) and complement the content information of class labels. Our self-boxed guidance approach aims to attain this signal via self-supervised models. We represent the bounding box as a feature map (W×H) rather than as coordinates ([X, Y, W, H]). We propose a self-annotation function f that obtains a bounding box ks ∈ RW×H by mapping from feature space H to the bounding box space via fψ(·; D) : RW×H×C → RW×H, and inject the guidance signal by concatenating in the channel dimension: xt := concat[xt, ks]. Usually in self-supervised learning, the derived bounding box is class-agnostic (Vo et al., 2020; 2021). To inject a self-supervised pseudo-label that further enhances the guidance signal, we again resort to clustering to obtain k and concatenate it with the time embedding t := concat[t, k]. To incorporate such guidance, we reformulate the noise prediction function as:
$\epsilon_\theta(x_t, t; k_s, k) = \epsilon_\theta(\mathrm{concat}[x_t, k_s], \mathrm{concat}[t, k]),$ (8)
in which ks is the self-supervised box guidance obtained by the self-annotation function fψ and k is the self-supervised image-level guidance from clustering. The design of fψ is flexible as long as it obtains self-supervised bounding boxes via fψ(·; D) : RW×H×C → RW×H. Self-boxed guidance steers the diffusion model with boxes, which specify the area in which the object will be generated; a minimal sketch of the box-as-map representation follows below. Sometimes we may need an even finer granularity, e.g., pixels, which we detail next.
Self-segmented guidance. Compared to a bounding box, a segmentation mask is a more fine-grained signal. Additionally, a multichannel mask is more expressive than a binary foreground-background mask. Therefore, we propose a self-annotation function f that acts as a plug-in built on the features gφ(·; D) to extract the segmentation mask ks via the mapping fψ(·; D) : RW×H×C → RW×H×K, where K is the number of segmentation clusters.
To inject the self-segmented guidance into the noise prediction function εθ, we consider two pathways. First, we concatenate the segmentation mask to xt in the channel dimension, xt := concat[xt, ks], to retain the spatial inductive bias of the guidance signal. Second, we also incorporate image-level guidance to further amplify the guidance signal along the channel dimension. As the segmentation mask from the self-annotation function fψ already contains image-level information, we do not apply the image-level clustering used in our self-labeled guidance. Instead, we directly derive the image-level guidance from the self-annotation result fψ(·) via spatial maximum pooling, RW×H×K → RK, and feed the image-level guidance k̂ into the noise prediction function by concatenating it with the timestep embedding t := concat[t, k̂]. The concatenated result is sent to every block of the UNet. In the end, the overall noise prediction function for self-segmented guidance is formulated as:
$\epsilon_\theta(x_t, t; k_s, \hat{k}) = \epsilon_\theta(\mathrm{concat}[x_t, k_s], \mathrm{concat}[t, \hat{k}]),$ (9)
in which ks is the spatial mask guidance obtained from the self-annotation function f and k̂ is the multi-hot image-level guidance derived from the self-supervised mask ks; a minimal sketch of this mask encoding follows below.
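A minimal sketch of this mask encoding (our illustration; `mask` is assumed to hold integer cluster ids produced by a STEGO-style self-annotation function):

```python
import torch
import torch.nn.functional as F

def segmented_guidance(mask: torch.Tensor, K: int):
    """mask: (B, H, W) LongTensor of cluster ids in [0, K).
    Returns the K-channel mask k_s and the multi-hot image-level guidance k_hat."""
    ks = F.one_hot(mask, num_classes=K).permute(0, 3, 1, 2).float()  # (B, K, H, W)
    k_hat = ks.amax(dim=(2, 3))  # spatial maximum pooling -> (B, K)
    return ks, k_hat
```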
We have described three variants of self-guidance by setting the feature extraction function gφ(·), the self-annotation function fψ(·), and the guidance signal k to appropriate forms. In the end, we arrive at three noise prediction functions εθ, which we utilize for diffusion model training and sampling, following the standard guided diffusion approach (Ho & Salimans, 2021) as detailed in Section 2.1.
3 EXPERIMENTS
In this section, we aim to answer the overarching question: Can we substitute ground-truth annotations with self-annotations? First, we consider the image-label setting, in which we examine what kind of self-labeling is required to improve image fidelity. Next, we look at image-bounding box pairs. Finally, we examine whether it is possible to gain fine-grained control with self-labeled image-segmentation pairs. We first present the general settings relevant for all experiments.
Evaluation metric. We evaluate both diversity and fidelity of the generated images by the Fréchet Inception Distance (FID) (Heusel et al., 2017), as it is the de facto metric for the evaluation of generative methods, e.g., (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019; Saharia et al., 2022). It provides a symmetric measure of the distance between two distributions in the feature space of Inception-V3 (Szegedy et al., 2016). We use FID as our main metric for the sampling quality.
Baselines & implementation details. Throughout our experiments, we always use the diffusion model following (Ho et al., 2020). We train with timestep T=1000, guidance drop probability p=0.1 and the linear variance schedule. For sampling, we set the guidance strength w=2 and deploy a clipping operation in every sampling timestep. As the baseline for a guided diffusion model with ground-truth labels we follow classifier-free guidance (Ho & Salimans, 2021). We use DDIM (Song et al., 2021a) samplers with 250 steps and σt=0. All hyperparameters of our self-guided diffusion and the baselines are the same, allowing us to compare methods under a fixed compute budget. For details of the learning rate, optimizer, and other hyperparameters, we refer to Appendix C. All code will be released.
3.1 SELF-LABELED GUIDANCE
We use ImageNet32/64 (Deng et al., 2009) to validate the efficacy of self-labeled guidance. For a better evaluation of sampling quality, we also adopt the Inception Score (IS) (Salimans et al., 2016), following common practice on this dataset (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019). IS measures how well a model covers the full ImageNet class distribution.
Choice of feature extraction function g. We first measure the influence of the feature extraction function g used before clustering. We consider two supervised feature backbones, ResNet50 (He et al., 2016) and ViT-B/16 (Dosovitskiy et al., 2021), and four self-supervised backbones: SimCLR (Chen et al., 2020), MAE (He et al., 2022), MSN (Assran et al., 2022) and DINO (Caron et al., 2021). To assure a fair comparison we use 10k clusters for all architectures. From the results in Table 1, we make the following observations. First, features from the supervised ResNet50 and ViT-B/16 lead to satisfactory FID performance, at the expense of relatively limited diversity (low IS). However, they still require label annotation, which we strive to avoid in our work. Second, among the self-supervised feature extraction functions, the MSN- and DINO-pretrained ViT backbones offer the best trade-off in terms of both FID and IS. They even improve over the label-supervised backbones. This implies that the label assignment for guidance is not unique: pseudo-labels built on top of self-supervised features can provide a guidance signal as influential as human-annotated labels. Also, the diversity of the label-supervised ViT-B/16 is much lower than that of the self-supervised ViT-B/16, with an IS of 7.81 vs. 10.41, suggesting that self-supervised guidance leads to less biased representations than supervised guidance. From now on we pick the DINO ViT-B/16 architecture as our self-supervised feature extraction function g.
Effect of number of clusters. Next, we ablate the influence of the number of clusters on the overall sampling quality. We consider 1, 10, 100, 500, 1,000, 5,000 and 10,000 clusters on the CLS token extracted from the DINO ViT-B/16 features. For an efficient, yet uniform, comparison we run only 20 epochs on ImageNet32. To put our sampling results in perspective, we also provide FID and IS results for DDPM and classifier-free guidance. From the results in Figure 1 we first observe that as the cluster number ranges from 1 to 5,000, our model's performance monotonically improves and always surpasses the DDPM model. Beyond 1,000 clusters, we are competitive with classifier-free guidance using ground-truth labels. At 5,000 clusters there is a sweet spot where we outperform classifier-free guidance with an FID of 16.4 vs. 17.9 and an IS of 9.94 vs. 10.35; see also the generated images in Figure 1. FID starts to deteriorate from 5,000 to 10,000 clusters. We conclude that self-labeled guidance outperforms DDPM without any guidance beyond a single cluster, is competitive with classifier-free guidance beyond 1,000 clusters, and is even able to outperform guidance by ground-truth labels at 5,000 clusters. From now on we use 5,000 clusters for self-labeled guidance on ImageNet.
Varying guidance strength w. Next we consider the influence of the guidance strength w on our sampling results. As the validation set of ImageNet32 is strictly balanced, we also consider an unbalanced setting, which is more similar to real-world deployment. Under both settings we compare the FID between our self-labeled guidance and ground-truth guidance. We train both models for 100 epochs. For the standard ImageNet32 validation setting in Figure 2a, our method achieves a 17.8% improvement at the respective optimal guidance strengths of the two methods. Self-labeled guidance is especially effective for lower values of w. We observe similar trends for the unbalanced setting in Figure 2b, albeit with slightly higher overall FID results for both methods. The improvement increases to 18.7%. We conjecture this is due to the unbalanced nature of the k-means algorithm (Last et al., 2017): clustering based on the statistics of the overall dataset can lead to more robust performance in the unbalanced setting.
Self-labeled comparisons on ImageNet32/64. We compare our self-labeled guidance with ground-truth labels guidance, which utilizes the technique of classifier-free guidance (Ho & Salimans, 2021). We train all experiments for 100 epochs, which takes about 6 days to converge on four RTX A5000 GPUs. All hyperparameters are the same between the two methods to make the comparison as fair as possible. Results on ImageNet32 and ImageNet64 are in Table 2. Similar to Dhariwal & Nichol (2021), we observe that any guidance setting improves considerably over the unconditional, no-guidance model. Surprisingly, our self-labeled model even outperforms the ground-truth labels by a large gap in terms of FID: 2.9 and 4.7 points, respectively. We hypothesize that the ground-truth taxonomy might be suboptimal for learning generative models and that the self-supervised clusters offer a better guidance signal due to better alignment with the visual similarity of the images. This suggests that the label-conditioned guidance from Ho & Salimans (2021) can be completely replaced by guidance from self-supervision, which would enable guided diffusion models to learn from even larger (unlabeled) datasets than feasible today.
3.2 SELF-BOXED GUIDANCE
We report on Pascal VOC and COCO_20K to validate the efficacy of self-boxed guidance. To obtain class-agnostic object bounding boxes, we use LOST (Siméoni et al., 2021) as our self-annotation function f. We report train FID for Pascal VOC and train/validation FID for COCO_20K. For the image-level clustering that provides the guidance signal, we empirically find k=100 works best on both datasets, as they are relatively small in both images and labels compared to ImageNet. We train our diffusion model for 800 epochs with input image size 64×64. See Appendix C for more details.
Self-boxed comparisons on Pascal VOC and COCO_20K. For the ground-truth labels guidance baseline, we condition on a class embedding. Since there are now multiple objects per image, we represent the ground-truth classes with a multi-hot embedding. Aside from the class embedding, which is multi-hot in the baseline, all other settings remain the same as in our method for a fair comparison. The results in Table 3 confirm that the multi-hot class embedding is indeed effective for multi-label datasets, improving over the no-guidance model by a large margin. This improvement comes at the cost of manually annotating multiple classes per image. Self-boxed guidance further improves upon this result, reducing the FID by an additional 5.1 and 3.3 points, respectively, without using any ground-truth annotation. In Figure 3, we show that our method generates diverse and semantically well-aligned images.
3.3 SELF-SEGMENTED GUIDANCE
Finally, we validate the efficacy of self-segmented guidance on Pascal VOC and COCO-Stuff. For COCO-Stuff we follow the split from (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang et al., 2022), with a train set of 49,629 images and a validation set of 2,175 images. Classes are merged into 27 (15 stuff and 12 things) categories. For self-segmented guidance we apply STEGO (Hamilton et al., 2022) as our self-annotation function f. We set the cluster number to 27 for COCO-Stuff and 21 for Pascal VOC, following STEGO. We train all models on images of size 64×64, for 800 epochs on Pascal VOC and 400 epochs on COCO-Stuff. We report the train FID for Pascal VOC and both train and validation FID for COCO-Stuff. More details on the dataset and experimental setup are provided in Appendix C.
Self-segmented comparisons on Pascal VOC and COCO-Stuff. We compare against both the ground-truth labels guidance baseline from the previous section and a model trained with ground-truth semantic mask guidance. The results in Table 4 demonstrate that our self-segmented guidance still outperforms the ground-truth labels guidance baseline on both datasets. The comparison between ground-truth labels and segmentation masks reveals an improvement in image quality when using the more fine-grained segmentation masks as the condition signal. However, such segmentation masks are among the most costly types of image annotation, requiring every pixel to be labeled. Our self-segmented approach avoids the need for annotations while narrowing the performance gap and, more importantly, offering fine-grained control over the image layout. We demonstrate this controllability with examples in Figure 4. These examples further highlight a robustness against noise in the segmentation masks, which our method acquires naturally due to training with noisy segmentations.
4 RELATED WORK
Conditional generative models. Earlier works on generative adversarial networks (GANs) have already observed improvements in image quality by conditioning on ground-truth labels (Mirza & Osindero, 2014; Brock et al., 2019; Casanova et al., 2021). Recently, conditional diffusion models have reported similar improvements, while also offering a great amount of controllability via classifier-free guidance by training on images paired with textual descriptions (Ramesh et al., 2021; 2022; Saharia et al., 2022), semantic segmentations (Wang et al., 2022), or other modalities (Bordes et al., 2022; Yang et al., 2022; Song et al., 2022). Our work also aims to realize the benefits of conditioning and guidance, but instead of relying on additional human-generated supervision signals, we leverage the strength of pretrained self-supervised visual encoders.
Zhou et al. (2022) train a GAN for text-to-image generation without any image-text pairs, by leveraging the CLIP (Radford et al., 2021) model that was pretrained on a large collection of paired data. In this work, we do not assume any paired data for the generative models and rely purely on images. Additionally, image layouts are difficult to express with text, so our self-boxed and self-segmented methods are complementary to text conditioning. Instance-Conditioned GAN (Casanova et al., 2021), Retrieval-augmented Diffusion (Blattmann et al., 2022) and KNN-diffusion (Ashual et al., 2022) are three recent methods that utilize nearest neighbors as guidance signals in generative models. Similar to our work, these methods rely on conditional guidance from an unsupervised source; we differ from them by further providing more diverse spatial guidance, including (self-supervised) bounding boxes and segmentation masks.
Self-supervised learning in generative models. Self-supervised learning (Caron et al., 2020; Chen et al., 2020; Asano et al., 2020; Caron et al., 2021) has shown great potential for representation learning in many downstream tasks. As a consequence, it is also commonly explored in GANs for evaluation and analysis (Morozov et al., 2020), conditioning (Casanova et al., 2021; Mangla et al., 2022), stabilizing training (Chen et al., 2019), reducing labeling costs (Lučić et al., 2019) and avoiding mode collapse (Armandpour et al., 2021). Our work focuses on translating the benefits of self-supervised methods to the generative domain and providing flexible guidance signals to diffusion models at various image granularities. In order to analyze the feature representations of self-supervised models, Bordes et al. (2022) condition on self-supervised features in their diffusion model for better visualization in data space. We instead condition on a compact clustering of the self-supervised features, and further introduce the elasticity of self-supervised learning into diffusion models for multi-granular image generation.
5 CONCLUSION
We have explored the potential of self-supervision signals for diffusion models and propose a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our framework provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments indicate that self-supervision signals are an adequate replacement for existing guidance methods that generate images by relying on annotated image-label pairs during training. Furthermore, both the self-boxed and self-segmented approaches demonstrate that we can acquire fine-grained control over the image content without any ground-truth bounding boxes or segmentation masks. Due to limited computational resources, we restricted our experiments to images of a maximum size of 64×64. For future work, it will be of interest to verify our findings on larger image resolutions. Ultimately, our goal is to enable the benefits of self-guided diffusion for unlabeled and more diverse datasets at scale, to which we believe this work is a promising first step.
1 Introduction
2 Approach
2.1 Background
2.2 Self-Guided Diffusion
3 Experiments
3.1 Self-labeled Guidance
3.2 Self-boxed Guidance
3.3 Self-segmented Guidance
4 Related Work
5 Conclusion
A Main framework
B More quantitative results
B.1 Correlation between NMI and FID in different feature backbones
B.2 Precision and Recall in ImageNet32/64 dataset
B.3 Cluster number ablation in self-boxed guidance
B.4 Trend visualization of training loss and validation FID
C More experimental details
C.1 UNet structure
C.2 Training Parameter
C.3 Dataset preparation
C.4 LOST, STEGO algorithms
D More qualitative results
A MAIN FRAMEWORK
We illustrate the pipeline of our framework in Figure 5.
B MORE QUANTITATIVE RESULTS
B.1 CORRELATION BETWEEN NMI AND FID IN DIFFERENT FEATURE BACKBONES.
To assess the correlation between cluster quality and sample fidelity, we consider the Normalized Mutual Information (NMI), a mutual-information-derived metric commonly adopted to assess clustering quality against provided ground-truth labels. In Figure 6 we plot the relation between NMI and FID. For the label-supervised functions, the NMI is unrelated to the FID, but for the self-supervised functions, NMI and FID are negatively correlated, suggesting that NMI, a metric commonly applied to assess the quality of self-supervised methods, is also predictive of the model's usefulness in our setting.
B.2 PRECISION AND RECALL IN IMAGENET32/64 DATASET
We show additional results on ImageNet for precision and recall in Table 5. We follow the precision and recall evaluation code from ICGAN (Casanova et al., 2021); our self-labeled guidance also outperforms ground-truth labels in precision and remains competitive in recall.
B.3 CLUSTER NUMBER ABLATION IN SELF-BOXED GUIDANCE
In Table 6, we empirically evaluate the performance when altering the cluster number in our self-boxed guidance. We find that performance increases from k=21 to k=100 and saturates at k=100.
B.4 TREND VISUALIZATION OF TRAINING LOSS AND VALIDATION FID
We visualize the trend of training loss and validation FID in Figure 7.
C MORE EXPERIMENTAL DETAILS
Training details. For our best results, we train for 100 epochs on four A5000 (24G) GPUs on ImageNet. We train for 800/800/400 epochs on one A6000 (48G) GPU on Pascal VOC, COCO_20K and COCO-Stuff, respectively. All qualitative results in this paper are trained in the same setting as mentioned above. We train and evaluate Pascal VOC, COCO_20K and COCO-Stuff at image size 64 and visualize them by bilinear upsampling to 256, following (Liu et al., 2022).
Sampling details. We sample the guidance signal from the distribution of the training set in all our experiments. Each timestep nominally requires twice the number of forward evaluations (NFE); we reduce this by concatenating the conditional and unconditional signals along the batch dimension, so that only one batched forward evaluation is needed per timestep.
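A sketch of this batching trick (our illustration; `eps_model`, its argument order, and the null guidance `k_null` are assumptions):

```python
import torch

def guided_eps_batched(eps_model, xt, t, k, k_null, w):
    """Evaluate the conditional and unconditional branches in one forward pass."""
    x2 = torch.cat([xt, xt], dim=0)
    t2 = torch.cat([t, t], dim=0)
    k2 = torch.cat([k, k_null], dim=0)  # real guidance / null guidance
    e_cond, e_uncond = eps_model(x2, t2, k2).chunk(2, dim=0)
    return (1 - w) * e_uncond + w * e_cond
```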
Evaluation details. We use the common packages Clean-FID (Parmar et al., 2022) and torch-fidelity (Obukhov et al., 2020) for the FID and IS calculations, respectively. For IS, we use the standard 10-split setting. We only report IS on ImageNet, as it may not be an appropriate metric for non-object-centric datasets (Barratt & Sharma, 2018). For checkpointing, we evaluate every 10 epochs and select the checkpoint with the minimal FID between the generated sample set and the train set.
C.1 UNET STRUCTURE
Guidance signal injection. We describe the details of guidance signal injection in Figure 8. The injection of self-labeled guidance and of self-boxed/segmented guidance is slightly different. The common part is the concatenation of the guidance signal with the timestep embedding; the concatenated feature is sent to every block of the UNet. For self-boxed/segmented guidance, we not only conduct the information fusion as above, but also incorporate the spatial inductive bias by concatenating the guidance map with the input; the concatenated result is fed into the UNet.
Timestep embedding. We embed the raw timestep information with a two-layer MLP: FC(512, 128)→SiLU→FC(128, 128).
Guidance embedding. The guidance is a one- or multi-hot embedding in RK. We feed it into a two-layer MLP, FC(K, 256)→SiLU→FC(256, 256), and then feed the resulting guidance signal into the UNet following Figure 8.
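As a sketch, this guidance embedding MLP can be written in PyTorch as follows (K, the guidance dimension, is an assumption for illustration):

```python
import torch.nn as nn

K = 5000  # guidance dimension, e.g., the number of clusters
guidance_mlp = nn.Sequential(nn.Linear(K, 256), nn.SiLU(), nn.Linear(256, 256))
```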
Cross-attention. When training on non-object-centric datasets, we also tokenize the guidance signal into several tokens, following Imagen (Saharia et al., 2022). We concatenate those tokens with the image tokens (a typical feature map can be transposed into tokens via RW×H×C → RC×WH), and conduct cross-attention (Rombach et al., 2022; Blattmann et al., 2022) as CA(m, concat[k, m]). Due to the quadratic complexity of the transformer (Katharopoulos et al., 2020; Lu et al., 2021), we only apply cross-attention on lower-resolution feature maps.
C.2 TRAINING PARAMETER
3×32×32 model, 4 GPUs, ImageNet32
Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Optimizer: AdamW
Learning rate: 3e−4
Batch size: 128
EMA: 0.9999
Dropout: 0.0
Weight decay: 0.01
Training hardware: 4 × A5000 (24G)
Training epochs: 100

3×64×64 model, 4 GPUs, ImageNet64
Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Optimizer: AdamW
Learning rate: 1e−4
Batch size: 48
EMA: 0.9999
Dropout: 0.0
Weight decay: 0.01
Training hardware: 4 × A5000 (24G)
Training epochs: 100

3×64×64 model, 1 GPU, Pascal VOC, COCO_20K, COCO-Stuff
Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Context token number: 8
Context dim: 32
Optimizer: AdamW
Learning rate: 1e−4
Batch size: 80
EMA: 0.9999
Dropout: 0.0
Weight decay: 0.01
Training hardware: 1 × A6000 (48G)
Training epochs: 800/800/400
C.3 DATASET PREPARATION
The preparation of the unbalanced dataset. There are 50,000 images in the validation set of ImageNet with 1,000 classes (50 instances each). We index the classes from 0 to 999; for each class c_i, the number of retained instances is ⌊i × 50/1000⌋ = ⌊i/20⌋.
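For concreteness, the per-class counts can be computed as below (a sketch under our reading of the rule, i.e., ⌊i × 50/1000⌋ = ⌊i/20⌋):

```python
# number of retained validation instances for class index i
counts = [i * 50 // 1000 for i in range(1000)]  # floor(i / 20)
assert counts[0] == 0 and counts[999] == 49
print(sum(counts))  # 24500 images in the unbalanced validation set
```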
Pascal VOC. We use the standard split from (Siméoni et al., 2021), which has 12,031 training images. As there is no validation set for the Pascal VOC dataset, we only evaluate FID on the train set. We sample 10,000 images and use 10,000 randomly cropped 64×64 train images as the reference set for FID evaluation.
COCO_20K. We follow the split from (Siméoni et al., 2021; Vo et al., 2020; Lin et al., 2014). COCO_20K is a subset of the COCO2014 trainval dataset, consisting of 19,817 randomly chosen images, used in unsupervised object discovery (Siméoni et al., 2021; Vo et al., 2020). We sample 10,000 images and use 10,000 randomly cropped 64×64 train images as the reference set for FID evaluation.
COCO-Stuff. It has a train set of 49,629 images and a validation set of 2,175 images, where the original classes are merged into 27 (15 stuff and 12 things) high-level categories. We use the dataset split following (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang et al., 2022). We sample 10,000 images and use 10,000 train/validation images as the reference set for FID evaluation.
C.4 LOST, STEGO ALGORITHMS
LOST algorithm details. We apply padding so that the original image can be patchified and fed into the ViT architecture (Dosovitskiy et al., 2021), and feed the padded image into the LOST architecture using the official source code [1]. LOST can also be utilized in a two-stage approach to discover multiple objects; due to its complexity, we opt for single-object discovery in this paper.
STEGO algorithm details. We follow the official source code [2] and apply padding so that the original image can be fed into the ViT architecture to extract the self-segmented guidance signal.
For the COCO-Stuff dataset, we directly use the official pretrained weights. For Pascal VOC, we train STEGO ourselves using the official hyperparameters.
In STEGO's pre-processing, the number of neighbors for k-NN is 7. The segmentation head of STEGO is composed of a two-layer MLP (with ReLU activation) and outputs a 70-dimensional feature. The learning rate is 5e−4 and the batch size is 64.
D MORE QUALITATIVE RESULTS
[1] https://github.com/valeoai/LOST
[2] https://github.com/mhamilton723/STEGO
Guidance signal from training set (qualitative samples).

Summary Of The Paper
This paper proposes a series of methods to incorporate self-guidance into diffusion models. Three types of self-guidance are explored: self-labeled (classification), self-boxed (detection), and self-segmented (segmentation). The experiments show that with these self-guidance signals, the quality of generated images improves significantly (in terms of FID and IS), sometimes even better than using ground-truth annotations.
Strengths And Weaknesses
Strengths:
This paper is a natural extension of classifier-free diffusion guidance, by replacing the groundtruth class annotations with pseudo-labels generated from self-supervised learning. As the annotations are usually costly (esp. for detection and segmentation), self-guidance will greatly facilitate fine-grained diffusion generation.
The writing is good and the paper is easy to follow.
Weaknesses: (A few important concerns about the evaluation)
On Imagenet32/64, it's natural that with more clusters (bigger K), the self-labeled guidance will yield better images. In the extreme case, when K is close to the number of training images, only a few images are in each cluster, and the model could simply memorize an image in each cluster and output it. In this sense, when K>>1000 (the number of groundtruth classes), it's natural that the model achieves lower FID than using the groundtruth labels. In my eyes, this lower FID doesn't indicate real advantages of self-guidance, but is merely a result of more (pseudo-)classes.
Following up on 1., however, in Figure 1, when K changes from 5000 to 10000, FID increases (quality drops), which I find counterintuitive.
I found some inconsistencies in the reported scores (correct me if I got it wrong): 1) The FID/IS scores in Table 1 never reappear in any other tables. Are they obtained using a unique setting for choosing the feature extraction function? 2) Inconsistency between Fig.2 and Table 2. The FID of self-labeled guidance is around 7 in Fig. 2a, consistent with Table 2. But the FID of ground-truth guidance is apparently below 10 in Fig 2a, and is 10.2 in Table 2.
Clarity, Quality, Novelty And Reproducibility
Some missing details:
How are the "train", "val" scores in Table 4 computed? For example the "val" score, is it obtained by training the model on the "train" split and computing the FID using the "val" split?
In the last paragraph of Page 3, a bounding box k_s ∈ R^{W×H}: I wonder how the bounding box is represented as a W×H array? What if there are multiple objects (multiple bounding boxes)? Later, clustering is done on the bounding box -- how? It's weird to do clustering on many such W×H arrays.
Questions:
In applications, a more natural segmentation mask guidance seems to be an instance segmentation mask instead of a semantic segmentation mask (e.g., the artist draws two sheep contours using different colors). Is it easy to extend the current framework to incorporate instance segmentation masks?
The choice of how to incorporate the supervision embeddings k_s seems a bit arbitrary. For self-labeled and self-boxed guidance, k can be concatenated (could be added to avoid doubling the model size) with x_t as well (after broadcasting k to the whole HW plane). Are the two choices performing similarly?
Writing: Overall, the writing is good and the paper is easy to follow. |
ICLR | Title
Self-Guided Diffusion Models
Abstract
Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, guidance requires a large amount of image-annotation pairs for training and is thus dependent on their availability, correctness and unbiasedness. In this paper, we eliminate the need for such annotation by instead leveraging the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments on single-label and multi-label image datasets demonstrate that self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels, especially on unbalanced data. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. Self-guided diffusion is simple, flexible and expected to profit from deployment at scale.
1 INTRODUCTION
The image fidelity of diffusion models is spectacularly enhanced by conditioning on class labels (Dhariwal & Nichol, 2021). Classifier guidance goes a step further and offers control over the alignment with the class label, by using the classifier gradient to guide the image generation (Dhariwal & Nichol, 2021). Classifier-free guidance (Ho & Salimans, 2021) replaces the dedicated classifier with a diffusion model that is trained by randomly setting the condition to the special non-label class. This has proven a fruitful research line for several other condition modalities, such as text (Saharia et al., 2022; Ramesh et al., 2021), image layout (Rombach et al., 2022), visual neighbors (Ashual et al., 2022), and image features (Giannone et al., 2022). However, all these conditioning and guidance methods require ground-truth annotations. This is an unrealistic and too costly assumption in many domains. For example, medical images require domain experts to annotate very high-resolution data, which is infeasible to do exhaustively (Panteli et al., 2021). In this paper, we propose to remove the necessity of any ground-truth annotation for guidance diffusion models.
We are inspired by progress in self-supervised learning (Chen et al., 2020; Caron et al., 2021), which encodes data, and especially images, into semantically meaningful latent vectors without using any label information. It usually does so by solving a pretext task (Zhang et al., 2017; Gidaris et al., 2018; Asano et al., 2020; He et al., 2020) on image-level to remove the necessity of labels. This annotationfree paradigm enables the representation learning to upscale to larger and more diverse (image) datasets (Gao et al., 2021). Recently, the holistic image-level self-supervision has been extended to more expressive dense representations, including bounding boxes, e.g., (Siméoni et al., 2021; Melas-Kyriazi et al., 2022) and pixel-precise segmentation masks, e.g., (Hamilton et al., 2022; Ziegler & Asano, 2022). Some self-supervised learning methods even outperform supervised alternatives (He et al., 2020; Caron et al., 2021). We hypothesize that for diffusion models, self-supervision may also provide a flexible and competitive, possibly even stronger guidance signal than ground-truth labeled guidance.
In this paper, we propose self-guided diffusion, a framework for image generation using guided diffusion without the need for any annotated image-label pairs. The framework encompasses a feature extraction function and a self-annotation function, that are compatible with recent selfsupervised learning advances. Furthermore, we leverage the flexibility of self-supervised learning
to generalize the guidance signal from the holistic image level to (unsupervised) local bounding boxes and segmentation masks for more fine-grained guidance. We demonstrate the potential of our proposal on single-label and multi-label image datasets, where self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation.
2 APPROACH
Before detailing our self-guided diffusion framework, we provide a brief background on diffusion models and the classifier-free guidance technique.
2.1 BACKGROUND
Diffusion models. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) gradually add noise to an image x0 until the original signal is fully diminished. By learning to reverse this process one can turn random noise xT into images. This diffusion process is modeled as a Gaussian process with Markovian structure:
q(xt|xt−1) := N (xt; √ 1− βtxt−1, βtI), q(xt|x0) := N (xt; √ αtx0, (1− αt)I, (1)
where β1, . . . , βT is a fixed variance schedule on which we define αt := 1− βt and αt := ∏t s=1 αs. All latent variables have the same dimensionality as the image x0 and differ by the proportion of the retained signal and added noise.
Learning the reverse process reduces to learning a denoiser xt ∼ q(xt|x0) that recovers the original image as (xt − (1− αt) θ(xt, t))/ √ αt ≈ x0. Ho et al. (2020) optimize the network parameters θ by minimizing the error of the noise prediction:
L(θ) = E ,x,t [ || θ(xt, t)− ||22 ] , (2)
in which ∼ N (0, I), x ∈ D is a sample from the training dataset D and the noise prediction function θ(·) are encouraged to be as close as possible to .
The standard sampling (Ho et al., 2020) requires many neural function evaluations to get good quality samples. Instead, the faster Denoising Diffusion Implicit Models (DDIM) sampler (Song et al., 2021a) has a non-Markovian sampling process:
xt−1 = √ αt−1
( xt −
√ 1− αt θ(xt, t)√
αt
) + √ 1− αt−1 − σ2t · θ(xt, t) + σt , (3)
where ∼ N (0, I) is standard Gaussian noise independent of xt.
Classifier-free guidance. To trade off mode coverage and sample fidelity in a conditional diffusion model, Dhariwal & Nichol (2021) propose to guide the image generation process using the gradients of a classifier, with the additional cost of having to train the classifier on noisy images. Motivated by this drawback, Ho & Salimans (2021) introduce label-conditioned guidance that does not require a classifier. They obtain a combination of a conditional and unconditional network in a single model, by randomly dropping the guidance signal c during training. After training, it empowers the model with progressive control over the degree of alignment between the guidance signal and the sample by varying the guidance strength w:
̃θ(xt, t; c, w) = (1− w) θ(xt, t) + w θ(xt, t; c). (4)
A larger w leads to greater alignment with the guidance signal, and vice versa. Classifier-free guidance (Ho & Salimans, 2021) provides progressive control over the specific guidance direction at the expense of labor-consuming data annotation. In this paper, we propose to remove the necessity of data annotation using a self-guided principle based on self-supervised learning.
2.2 SELF-GUIDED DIFFUSION
The equations describing diffusion by classifier-free guidance implicitly assume dataset D and its images each come with a single manually annotated class label. We prefer to make the label requirement explicit. We denote the human annotation process as the function ξ(x;D, C) : D → C, where C defines the annotation taxonomy, and plug this into Equation (4):
̃θ(xt, t; ξ(x; D, C), w) = (1− w) θ(xt, t) + w θ(xt, t; ξ(x; D, C)). (5)
We propose to replace the supervised labeling process ξ with a self-supervised process that requires no human annotation:
̃θ(xt, t; fψ(gφ(x; D); D), w) = (1− w) θ(xt, t) + w θ(xt, t; fψ(gφ(x; D); D), (6)
where g is a self-supervised feature extraction function parameterized by φ that maps the input data to feature space H, g : x → gφ(x),∀x ∈ D, and f is a self-annotation function parameterized by ψ to map the raw feature representation to the ultimate guidance signal k, fψ : gφ(·;D) → k. The guidance signal k can be any form of annotation, e.g., label, box, pixel, that can be paired with an image, which we derive by k=fψ(gφ(x; D); D). The choice of the self-annotation function f can be non-parametric by heuristically searching over dataset D based on the extracted feature gφ(·; D), or parametric by fine-tuning on the feature map gφ(·; D).
For the noise prediction function θ(·), we adopt the traditional UNet network architecture (Ronneberger et al., 2015) due to its superior image generation performance, following (Ho et al., 2020; Song et al., 2021b; Ramesh et al., 2022; Saharia et al., 2022).
Stemming from this general framework, we present three methods working at different spatial granularities, all without relying on any ground-truth labels. Specifically, we cover image-level, box-level and pixel-level guidance by setting the feature extraction function gφ(·), self-annotation function fψ(·), and guidance signal k to an approximate form.
Self-labeled guidance. To achieve self-labeled guidance, we need a self-annotation function f that produces a representative guidance signal k ∈ RK . Firstly, we need an embedding function gφ(x),x ∈ D which provides semantically meaningful image-level guidance for the model. We obtain gφ(·) in a self-supervised manner by mapping from image space, gφ(·) : RW×H×3 → RC , where W and H are image width and height and C is the feature dimension. We may use any type of feature for the feature embedding function g, which we will vary and validate in the experiments. As the image-level feature gφ(·; D) is not compact enough for guidance, we further conduct a non-parametric clustering algorithm, e.g., k-means, as our self-annotation function f . For all features gφ(·), we obtain the self-labeled guidance via self-annotation function fψ(·) : RC → RK . Motivated by Rolfe (2017), we use a one-hot embedding k ∈ RK for each image to achieve a compact guidance.
We inject the guidance information into the noise prediction function θ by concatenating it with timestep embedding t and feed the concatenated information concat[t,k] into every block of the UNet. Thus, the noise prediction function θ is rewritten as:
θ(xt, t; k) = θ(xt,concat[t, k]), (7)
where k=fψ(gφ(x; D); D) is the self-annotated image-level guidance signal. For simplicity, we ignore the self-annotation function fψ(·) here and in the later text. Self-labeled guidance focuses on the image-level global guidance. Next we consider a more fine-grained spatial guidance.
Self-boxed guidance. Bounding boxes specify the location of an object in an image (Ren et al., 2015; Carion et al., 2020) and complements the content information of class labels. Our selfboxed guidance approach aims to attain this signal via self-supervised models. We represent the bounding box as a feature map (W ×H) rather than coordinates ([X,Y,W,H]). We propose the selfannotation function f that obtains a bounding box ks ∈ RW×H by mapping from feature space H to the bounding box space via fψ(·; D) : RW×H×C → RW×H , and inject the guidance signal by concatenating in the channel dimension: xt := concat[xt, ks] Usually in self-supervised learning, the derived bounding box is class-agnostic (Vo et al., 2020; 2021). To inject a self-supervised pseudo label to further enhance the guidance signal, we again resort to clustering to obtain k and concatenate
it with the time embedding t := concat[t,k]. To incorporate such guidance, we reformulate the noise prediction function θ as:
θ(xt, t; ks,k) = θ(concat[xt,ks],concat[t,k]), (8)
in which ks is the self-supervised box guidance obtained by self-annotation functions fψ, k is the self-supervised image-level guidance from clustering. The design of fψ is flexible as long as it obtains self-supervised bounding boxes by fψ(·; D) : RW×H×C → RW×H . Self-boxed guidance guides the diffusion model by boxes, which specifies the box area in which the object will be generated. Sometimes, we may need an even finer granularity, e.g., pixels, which we detail next.
Self-segmented guidance. Compared to a bounding box, a segmentation mask is a more finegrained signal. Additionally, a multichannel mask is more expressive than a binary foregroundbackground mask. Therefore, we propose a self-annotation function f that acts as a plug-in built on feature gφ(·; D) to extract the segmentation mask ks via function mapping fψ(·; D) : RW×H×C → RW×H×K , where K is the number of segmentation clusters.
To inject the self-segmented guidance into the noise prediction function θ, we consider two pathways for injection of such guidance. We first concatenate the segmentation mask to xt in the channel dimension, xt := concat[xt, ks], to retain the spatial inductive bias of the guidance signal. Secondly, we also incorporate the image-level guidance to further amplify the guidance signal along the channel dimension. As the segmentation mask from the self-annotation function fψ already contains image-level information, we do not apply the image-level clustering as before in our selflabeled guidance. Instead, we directly derive the image-level guidance from the self-annotation result fψ(·) via spatial maximum pooling: RW×H×K → RK , and feed the image-level guidance k̂ into the noise prediction function via concatenating it with the timestep embedding t:=concat[t, k̂]. The concatenated results will be sent to every block of the UNet. In the end, the overall noise prediction function for self-segmented guidance is formulated as:
θ(xt, t; ks, k̂) = θ(concat[xt,ks], concat[t, k̂]), (9)
in which ks is the spatial mask guidance obtained from self-annotation function f , k̂ is a multi-hot image-level guidance derived from the self-supervised learning mask ks.
We have described three variants of self-guidances by setting the feature extraction function gφ(·), self-annotation function fψ(·), guidance signal k to an approximate form. In the end, we arrive at three noise prediction functions θ, which we utilize for diffusion model training and sampling, following the standard guided (Ho & Salimans, 2021) diffusion approach as detailed in Section 2.1.
3 EXPERIMENTS
In this section, we aim to answer the overarching question: Can we substitute ground-truth annotations with self-annotations? First, we consider the image-label setting, in which we examine what kind of self-labeling is required to improve image fidelity. Next, we look at image-bounding box pairs. Finally, we examine whether it is possible to gain fine-grained control with self-labeled image-segmentation pairs. We first present the general settings relevant for all experiments.
Evaluation metric. We evaluate both diversity and fidelity of the generated images by the Fréchet Inception Distance (FID) (Heusel et al., 2017), as it is the de facto metric for the evaluation of generative methods, e.g., (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019; Saharia et al., 2022). It provides a symmetric measure of the distance between two distributions in the feature space of Inception-V3 (Szegedy et al., 2016). We use FID as our main metric for the sampling quality.
Baselines & implementation details. Throughout our experiments, we always use the diffusion model following (Ho et al., 2020). We train with timestep T=1000, guidance drop probability p=0.1 and the linear variance schedule. For sampling, we set the guidance strength w=2 and deploy a clipping operation in every sampling timestep. As the baseline for a guided diffusion model with ground-truth labels we follow classifier-free guidance (Ho & Salimans, 2021). We use DDIM (Song et al., 2021a) samplers with 250 steps, σt=0. All hyperparameter of our self-guided diffusion and the
baselines are the same, allowing us to compare methods under a fixed compute budget. For details of the learning rate, optimizer, and hyperparameters, we refer to Appendix C. All code will be released.
3.1 SELF-LABELED GUIDANCE
We use ImageNet32/64 (Deng et al., 2009) to validate the efficacy of self-labeled guidance. For better evaluation of sampling quality, we also adopt the Inception Score (IS) (Salimans et al., 2016), following the common practice on this dataset (Dhariwal & Nichol, 2021; Karras et al., 2019; Brock et al., 2019). IS measures how well a model fits into the full ImageNet class distribution.
Choice of feature extraction function g. We first measure the influence of the feature extraction function g used before clustering. We consider two supervised feature backbones: ResNet50 (He et al., 2016) and ViTB/16 (Dosovitskiy et al., 2021), and four selfsupervised backbones: SimCLR (Chen et al., 2020), MAE (He et al., 2022), MSN (Assran et al., 2022) and DINO (Caron et al., 2021). To assure a fair comparison we use 10k clusters for all architectures. From the results in Table 1, we make the following observations. First, features from the supervised ResNet50, and ViT-B/16 lead to a satisfactory FID performance, at the expense of relatively limited diversity (low IS). However, they still require label
annotation, which we strive to avoid in our work. Second, among the self-supervised feature extraction functions, the MSN- and DINO-pretrained ViT backbones have the best trade-off in terms of both FID and IS. They even improve over the label-supervised backbones. This implies the label assignment for the guidance is not unique, pretext labels on top of self-supervised learning can still provide influential guidance signal in comparison with human annotated labels. Also, the diversity of label-supervised ViT-B/16 is much lower than self-supervised ViT-B/16 with an IS of 7.81 vs. 10.41, suggesting that self-supervised guidance leads to more unbiased representation in comparison to supervised guidance. From now on we pick the DINO ViT-B/16 architecture as our self-supervised feature extraction function g.
Effect of number of clusters. Next, we ablate the influence of the number of clusters on the overall sampling quality. We consider 1, 10, 100, 500, 1,000, 5,000 and 10,000 clusters on the extracted CLS token from the DINO ViT-B/16 feature. For efficient, yet uniform, comparison we only run 20 epochs on ImageNet32. To put our sampling results in perspective, we also provide FID and IS results for DDPM and classifier-free guidance. From the result in Figure 1 we first observe that when the cluster number is ranging from 1 to 5,000, our model’s performance monotonously increases and
always surpasses the DDPM model. Beyond 1,000 clusters, we are competitive with classifier-free guidance using ground-truth labels. For 5,000 clusters, there is a sweet spot where we outperform classifier-free guidance with an FID of 16.4 vs. 17.9 and an IS of 9.94 vs. 10.35, see also the generated images in Figure 1. The performance of FID starts to deteriorate from 5,000 to 10,000 clusters. We conclude that self-labeled guidance outperforms DDPM without any guidance beyond a single cluster, is competitive with classifier-free guidance beyond 1,000 clusters, and is even able to outperform guidance by ground-truth labels for 5,000 clusters. From now on we use 5,000 clusters for self-labeled guidance on ImageNet.
Varying guidance strength w. Next we consider the influence of the guidance strength w on our sampling results. As the validation set of ImageNet32 is strictly balanced, we also consider an unbalanced setting which is closer to real-world deployment. Under both settings we compare the FID between our self-labeled guidance and ground-truth guidance. We train both models for 100 epochs. For the standard ImageNet32 validation setting in Figure 2a, our method achieves a 17.8% improvement at each method's respective optimal guidance strength. Self-labeled guidance is especially effective for lower values of w. We observe similar trends for the unbalanced setting in Figure 2b, although the overall FID results are slightly higher for both methods. The improvement increases to 18.7%. We conjecture this is due to the unbalanced nature of the k-means algorithm (Last et al., 2017): clustering based on the statistics of the overall dataset can potentially lead to more robust performance in the unbalanced setting.
Self-labeled comparisons on ImageNet32/64. We compare our self-labeled guidance with ground-truth label guidance, which uses the technique of classifier-free guidance (Ho & Salimans, 2021). We train all experiments for 100 epochs, which takes about 6 days to converge on four RTX A5000 GPUs. All hyperparameters are the same between the two methods to make the comparison as fair as possible. Results on ImageNet32 and ImageNet64 are in Table 2. Similar to Dhariwal & Nichol (2021), we observe that any guidance setting improves considerably over the unconditional, no-guidance model. Surprisingly, our self-labeled model even outperforms the ground-truth labels by a large margin, improving FID by 2.9 and 4.7 points respectively. We hypothesize that the ground-truth taxonomy might be suboptimal for learning generative models and that the self-supervised clusters offer a better guidance signal due to better alignment with the visual similarity of the images. This suggests that the label-conditioned guidance from Ho & Salimans (2021) can be completely replaced by guidance from self-supervision, which would enable guided diffusion models to learn from even larger (unlabeled) datasets than feasible today.
3.2 SELF-BOXED GUIDANCE
We report on Pascal VOC and COCO_20K to validate the efficacy of self-boxed guidance. To obtain class-agnostic object bounding boxes, we use LOST (Siméoni et al., 2021) as our self-annotation function f. We report train FID for Pascal VOC and train/validation FID for COCO_20K. For
image-level clustering to obtain the guidance signal, we empirically found that k=100 works best on both datasets, as they are relatively small in both images and labels compared to ImageNet. We train our diffusion model for 800 epochs with input image size 64×64. See Appendix C for more details.
Self-boxed comparisons on Pascal VOC and COCO_20K. For the ground-truth labels guidance baseline, we condition on a class embedding. Since there are now multiple objects per image, we represent the ground-truth classes with a multi-hot embedding. Aside from the class embedding, which is multi-hot in our method, all other settings remain the same for a fair comparison. The results in Table 3 confirm that the multi-hot class embedding is indeed effective for multi-label datasets, improving over the no-guidance model by a large margin. This improvement comes at the cost of manually annotating multiple classes per image. Self-boxed guidance further improves upon this result, reducing the FID by an additional 5.1 and 3.3 points respectively without using any ground-truth annotation. In Figure 3, we show our method generates diverse and semantically well-aligned images.
3.3 SELF-SEGMENTED GUIDANCE
Finally, we validate the efficacy of self-segmented guidance on Pascal VOC and COCO-Stuff. For COCO-Stuff we follow the split from (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang
et al., 2022), with a train set of 49,629 images and a validation set of 2,175 images. Classes are merged into 27 (15 stuff and 12 things) categories. For self-segmented guidance we apply STEGO (Hamilton et al., 2022) as our self-annotation function f . We set the cluster number to 27 for COCO-Stuff, and 21 for Pascal VOC, following STEGO. We train all models on images of size 64×64, for 800 epochs on Pascal VOC, and for 400 epochs on COCO-Stuff. We report the train FID for Pascal VOC and both train and validation FID for COCO-Stuff. More details on the dataset and experimental setup are provided in Appendix C.
Self-segmented comparisons on Pascal VOC and COCO-Stuff. We compare against both the ground-truth labels guidance baseline from the previous section and a model trained with ground-truth semantic mask guidance. The results in Table 4 demonstrate that our self-segmented guidance still outperforms the ground-truth labels guidance baseline on both datasets. The comparison between ground-truth labels and segmentation masks reveals an improvement in image quality when using the more fine-grained segmentation mask as the conditioning signal. But segmentation masks are among the most costly types of image annotation, requiring every pixel to be labeled. Our self-segmented approach avoids the need for annotations while narrowing the performance gap, and more importantly offers fine-grained control over the image layout. We demonstrate this controllability with examples in Figure 4. These examples further highlight a robustness against noise in the segmentation masks, which our method acquires naturally from training with noisy segmentations.
4 RELATED WORK
Conditional generative models. Earlier works on generative adversarial networks (GANs) already observed improvements in image quality by conditioning on ground-truth labels (Mirza & Osindero, 2014; Brock et al., 2019; Casanova et al., 2021). Recently, conditional diffusion models have reported similar improvements, while also offering a great amount of controllability via classifier-free guidance by training on images paired with textual descriptions (Ramesh et al., 2021; 2022; Saharia et al., 2022), semantic segmentations (Wang et al., 2022), or other modalities (Bordes et al., 2022; Yang et al., 2022; Song et al., 2022). Our work also aims to realize the benefits of conditioning and guidance, but instead of relying on additional human-generated supervision signals, we leverage the strength of pretrained self-supervised visual encoders.
Zhou et al. (2022) train a GAN for text-to-image generation without any image-text pairs, by leveraging the CLIP (Radford et al., 2021) model that was pretrained on a large collection of paired data. In this work, we do not assume any paired data for the generative models and rely purely on images. Additionally, image layouts are difficult to express in text, so our self-boxed and self-segmented methods are complementary to text conditioning. Instance-Conditioned GAN (Casanova et al., 2021), Retrieval-augmented Diffusion (Blattmann et al., 2022) and KNN-diffusion (Ashual et al., 2022) are three recent methods that utilize nearest neighbors as guidance signals in generative models. Like our work, these methods rely on conditional guidance from an unsupervised source; we differ from them by further providing more diverse spatial guidance, including (self-supervised) bounding boxes and segmentation masks.
Self-supervised learning in generative models. Self-supervised learning (Caron et al., 2020; Chen et al., 2020; Asano et al., 2020; Caron et al., 2021) has shown great potential for representation learning in many downstream tasks. As a consequence, it is also commonly explored in GANs for evaluation and analysis (Morozov et al., 2020), conditioning (Casanova et al., 2021; Mangla et al., 2022), stabilizing training (Chen et al., 2019), reducing labeling costs (Lučić et al., 2019) and avoiding mode collapse (Armandpour et al., 2021). Our work focuses on translating the benefits of self-supervised methods to the generative domain and providing flexible guidance signals to diffusion models at various image granularities. In order to analyze the feature representations of self-supervised models, Bordes et al. (2022) condition on self-supervised features in their diffusion model for better visualization in data space. We instead condition on a compact clustering on top of the self-supervised features, and further introduce the flexibility of self-supervised learning into diffusion models for multi-granular image generation.
5 CONCLUSION
We have explored the potential of self-supervision signals for diffusion models and propose a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our framework provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments indicate that self-supervision signals are an adequate replacement for existing guidance methods that generate images by relying on annotated image-label pairs during training. Furthermore, both the self-boxed and self-segmented approaches demonstrate that we can acquire fine-grained control over the image content without any ground-truth bounding boxes or segmentation masks. Due to limited computational resources, we restricted our experiments to images of a maximum size of 64×64. For future work, it will be of interest to verify our findings on larger image resolutions. Ultimately, our goal is to enable the benefits of self-guided diffusion for unlabeled and more diverse datasets at scale, for which we believe this work is a promising first step.
1 Introduction
2 Approach
  2.1 Background
  2.2 Self-Guided Diffusion
3 Experiments
  3.1 Self-labeled Guidance
  3.2 Self-boxed Guidance
  3.3 Self-segmented Guidance
4 Related Work
5 Conclusion
A Main framework
B More quantitative results
  B.1 Correlation between NMI and FID in different feature backbones
  B.2 Precision and Recall in ImageNet32/64 dataset
  B.3 Cluster number ablation in self-boxed guidance
  B.4 Trend visualization of training loss and validation FID
C More experimental details
  C.1 UNet structure
  C.2 Training Parameters
  C.3 Dataset preparation
  C.4 LOST, STEGO algorithms
D More qualitative results
A MAIN FRAMEWORK
We illustrate the pipeline of our framework in Figure 5.
B MORE QUANTITATIVE RESULTS
B.1 CORRELATION BETWEEN NMI AND FID IN DIFFERENT FEATURE BACKBONES.
To assess the correlation between cluster quality and sample fidelity, we consider the Normalized Mutual Information (NMI), which is commonly adopted as a mutual information-derived metric to assess clustering quality based on provided ground truth labels. In Figure 6 we plot the connection between NMI and FID. For the label-supervised functions, the NMI is unrelated to the FID, but for the self-supervised functions, NMI and FID are negatively correlated, suggesting that NMI — a metric commonly applied to assess the quality of self-supervised methods — is also predictive of the model’s usefulness in our setting.
B.2 PRECISION AND RECALL IN IMAGENET32/64 DATASET
We show additional ImageNet results on precision and recall in Table 5. Following the precision and recall evaluation code from ICGAN (Casanova et al., 2021), our self-labeled guidance also outperforms ground-truth labels in precision and remains competitive in recall.
B.3 CLUSTER NUMBER ABLATION IN SELF-BOXED GUIDANCE
In Table 6, we empirically evaluate performance as we vary the number of clusters in our self-boxed guidance. We find that performance increases from k = 21 to k = 100 and saturates at k = 100.
B.4 TREND VISUALIZATION OF TRAINING LOSS AND VALIDATION FID
We visualize the trend of training loss and validation FID in Figure 7.
C MORE EXPERIMENTAL DETAILS
Training details. For our best results, we train for 100 epochs on four A5000 (24G) GPUs for ImageNet. We train for 800/800/400 epochs on one A6000 (48G) GPU for Pascal VOC, COCO_20K and COCO-Stuff, respectively. All qualitative results in this paper are trained in the same setting as mentioned above. We train and evaluate Pascal VOC, COCO_20K and COCO-Stuff at image size 64, and visualize results by bilinear upsampling to 256, following (Liu et al., 2022).
Sampling details. In all experiments, we sample the guidance signal from the distribution of the training set. Naively, classifier-free guidance requires two forward evaluations (NFE) per timestep, one conditional and one unconditional; we reduce this by concatenating the conditional and unconditional signals along the batch dimension, so that only a single forward evaluation is needed at every timestep.
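A minimal sketch of this batching trick is shown below; the model interface and the learned null-condition embedding (`null_cond`) are assumptions for illustration.

```python
import torch

def guided_eps(model, x_t, t, cond, null_cond, w=2.0):
    """One forward pass per timestep: the conditional and unconditional
    branches run together, concatenated along the batch dimension."""
    x_in = torch.cat([x_t, x_t], dim=0)
    t_in = torch.cat([t, t], dim=0)
    c_in = torch.cat([cond, null_cond], dim=0)  # null_cond: "no guidance" embedding
    eps_cond, eps_uncond = model(x_in, t_in, c_in).chunk(2, dim=0)
    return (1.0 + w) * eps_cond - w * eps_uncond
```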
Evaluation details. We use the common packages Clean-FID (Parmar et al., 2022) and torch-fidelity (Obukhov et al., 2020) for FID and IS calculation, respectively. For IS, we use the standard 10-split setting; we only report IS on ImageNet, as it may not be an appropriate metric for non-object-centric datasets (Barratt & Sharma, 2018). For checkpointing, we evaluate every 10 epochs and select the checkpoint with minimal FID between the generated sample set and the train set.
C.1 UNET STRUCTURE
Guidance signal injection. We describe the details of guidance signal injection in Figure 8. The injection of self-labeled guidance differs slightly from that of self-boxed/segmented guidance. Common to both, the guidance embedding is combined with the timestep embedding by concatenation, and the combined feature is sent to every block of the UNet. For self-boxed/segmented guidance, we not only perform the information fusion described above, but also incorporate a spatial inductive bias by concatenating the guidance with the noisy input; this concatenated result is fed into the UNet.
Timestep embedding. We embed the raw timestep information by two-layer MLP: FC(512, 128)→SiLU→FC(128, 128).
Guidance embedding. The guidance takes the form of a one-/multi-hot embedding in R^K. We feed it into a two-layer MLP: FC(K, 256)→SiLU→FC(256, 256), and then inject the resulting guidance signal into the UNet as shown in Figure 8.
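The two embedding modules above are small enough to write out directly; the sketch below mirrors the stated layer sizes (everything else, such as the module names, is illustrative).

```python
import torch.nn as nn

# Timestep embedding: FC(512, 128) -> SiLU -> FC(128, 128).
time_embed = nn.Sequential(
    nn.Linear(512, 128), nn.SiLU(), nn.Linear(128, 128),
)

def make_guidance_embed(K):
    """Guidance embedding for a one-/multi-hot vector in R^K:
    FC(K, 256) -> SiLU -> FC(256, 256)."""
    return nn.Sequential(
        nn.Linear(K, 256), nn.SiLU(), nn.Linear(256, 256),
    )
```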
Cross-attention. When training on non-object-centric datasets, we also tokenize the guidance signal into several tokens following Imagen (Saharia et al., 2022). We concatenate these tokens with the image tokens (a typical feature map can be reshaped into tokens via R^{W×H×C} → R^{C×WH}), and cross-attention (Rombach et al., 2022; Blattmann et al., 2022) is computed as CA(m, concat[k, m]). Due to the quadratic complexity of the transformer (Katharopoulos et al., 2020; Lu et al., 2021), we apply cross-attention only on lower-resolution feature maps.
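The CA(m, concat[k, m]) pattern can be sketched as follows, with image tokens m attending to guidance tokens k concatenated with themselves; the module layout is our assumption, not the exact UNet block.

```python
import torch
import torch.nn as nn

class GuidanceCrossAttention(nn.Module):
    """CA(m, concat[k, m]): image tokens attend to guidance tokens
    concatenated with the image tokens (used only at low resolutions)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feature_map, guidance_tokens):
        b, c, h, w = feature_map.shape
        m = feature_map.flatten(2).transpose(1, 2)   # (B, HW, C) image tokens
        kv = torch.cat([guidance_tokens, m], dim=1)  # concat[k, m]
        out, _ = self.attn(m, kv, kv)                # queries are image tokens
        return out.transpose(1, 2).reshape(b, c, h, w)
```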
C.2 TRAINING PARAMETERS
3×32×32 model, 4 GPUs, ImageNet32

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Optimizer: AdamW
Learning rate: 3e-4
Batch size: 128
EMA: 0.9999
Dropout: 0.0
Training hardware: 4 × A5000 (24G)
Training epochs: 100
Weight decay: 0.01
3×64×64 model, 4 GPUs, ImageNet64

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Optimizer: AdamW
Learning rate: 1e-4
Batch size: 48
EMA: 0.9999
Dropout: 0.0
Training hardware: 4 × A5000 (24G)
Training epochs: 100
Weight decay: 0.01
3×64×64 model, 1 GPU, Pascal VOC, COCO_20K, COCO-Stuff

Base channels: 128
Channel multipliers: 1, 2, 4
Blocks per resolution: 2
Attention resolutions: 4
Number of heads: 8
Conditioning embedding dimension: 256
Conditioning embedding MLP layers: 2
Diffusion noise schedule: linear
Sampling timesteps: 256
Context token number: 8
Context dim: 32
Optimizer: AdamW
Learning rate: 1e-4
Batch size: 80
EMA: 0.9999
Dropout: 0.0
Training hardware: 1 × A6000 (45G)
Training epochs: 800/800/400
Weight decay: 0.01
C.3 DATASET PREPARATION
The preparation of the unbalanced dataset. There are 50,000 images in the validation set of ImageNet, covering 1,000 classes (50 instances each). We index the classes from 0 to 999; for each class $c_i$, the number of retained instances is $\lfloor i \times 50/1000 \rfloor = \lfloor i/20 \rfloor$.
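A small sketch of this construction, under the reading that class i keeps the first ⌊i × 50/1000⌋ of its validation images (function and variable names are illustrative):

```python
from collections import defaultdict

def build_unbalanced_subset(samples):
    """samples: iterable of (image, class_index) over the ImageNet validation
    set (1,000 classes, 50 images each). Class i keeps floor(i * 50 / 1000)
    images, yielding a long-tailed subset."""
    kept, per_class = [], defaultdict(int)
    for image, ci in samples:
        if per_class[ci] < (ci * 50) // 1000:
            kept.append((image, ci))
            per_class[ci] += 1
    return kept
```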
Pascal VOC. We use the standard split from (Siméoni et al., 2021). It has 12,031 training images. As there is no validation set for the Pascal VOC dataset, we only evaluate FID on the train set. We sample 10,000 images and use 10,000 randomly-cropped 64-sized train images as the reference set for FID evaluation.
COCO_20K. We follow the split from (Siméoni et al., 2021; Vo et al., 2020; Lin et al., 2014). COCO_20K is a subset of the COCO2014 trainval dataset, consisting of 19,817 randomly chosen images, used in unsupervised object discovery (Siméoni et al., 2021; Vo et al., 2020). We sample 10,000 images and use 10,000 randomly-cropped 64-sized train images as the reference set for FID evaluation.
COCO-Stuff. It has a train set of 49,629 images and a validation set of 2,175 images, where the original classes are merged into 27 (15 stuff and 12 things) high-level categories. We use the dataset split following (Hamilton et al., 2022; Ji et al., 2019; Cho et al., 2021; Zhang et al., 2022). We sample 10,000 images and use 10,000 train/validation images as the reference set for FID evaluation.
C.4 LOST, STEGO ALGORITHMS
LOST algorithm details. We pad the original image so that it can be patchified and fed into the ViT architecture (Dosovitskiy et al., 2021), and feed the padded image into the LOST architecture using the official source code.¹ LOST can also be utilized in a two-stage approach to discover multiple objects; due to its complexity, we opt for single-object discovery in this paper.
STEGO algorithm details. We follow the official source code² and apply padding so that the original image can be fed into the ViT architecture to extract the self-segmented guidance signal.
For COCO-Stuff dataset, we directly use the official pretrained weight. For Pascal VOC, we train STEGO ourselves using the official hyperparameters.
In STEGO's k-NN pre-processing, the number of neighbors is 7. The segmentation head of STEGO is composed of a two-layer MLP (with ReLU activation) and outputs a 70-dimensional feature. The learning rate is 5e-4 and the batch size is 64.
D MORE QUALITATIVE RESULTS
¹ https://github.com/valeoai/LOST
² https://github.com/mhamilton723/STEGO
[Figure: qualitative samples with the guidance signal drawn from the training set.]

1. What is the focus and contribution of the paper regarding diffusion models?
2. What are the strengths of the proposed approach, particularly in terms of its performance across various tasks and self-supervision schemes?
3. Do you have any concerns or questions about the clustering algorithm's learning process and its impact on performance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper builds on recent efforts to improve the quality of samples in diffusion models (Ho et al. [1]) through a conditioning signal. The conditioning (analogous to classifier conditioning) here comes from self-supervision and clustering. The scheme is applied to three disparate tasks with conditioning from self-labeling, self-boxing and self-segmentation. In each of these tasks, the features are extracted using a different self-supervised scheme (e.g., in self-labeling, DINO or SimCLR can be used). Metrics (FID and IS) are calculated to show the proposed scheme works better than other prevailing approaches (no-guidance, ground-truth guidance, etc.).
[1] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020.
Strengths And Weaknesses
Empirically, the approach seems to outperform others significantly, and this seems to hold for different tasks and different self-supervision schemes.
There is very little intuition on what the clustering algorithm learns. The performance degrades after a certain (large) number of clusters. Could the authors shed some insight on what happens during the clustering process?
I would have appreciated a theoretical analysis through a toy problem on why/how the self-supervision+clustering process works.
Clarity, Quality, Novelty And Reproducibility
The paper is written well and reads clearly. The quality of the work is good, and the approach of using self-supervised features and clustering seems to be relatively novel in this context, with strong results. While code was not submitted, the results can be easily reproduced given the recipes. |
ICLR
SemSup-XC: Semantic Supervision for Extreme Classification
Abstract
Extreme classification (XC) considers the scenario of predicting over a very large number of classes (thousands to millions), with real-world applications including serving search engine results, e-commerce product tagging, and news article classification. The zero-shot version of this task involves the addition of new categories at test time, requiring models to generalize to novel classes without additional training data (e.g. one may add a new class "fidget spinner" for e-commerce product tagging). In this paper, we develop SEMSUP-XC, a model that achieves state-of-the-art zero-shot (ZS) and few-shot (FS) performance on three extreme classification benchmarks spanning the domains of law, e-commerce, and Wikipedia. SEMSUP-XC builds upon the recently proposed framework of semantic supervision that uses semantic label descriptions to represent and generalize to classes (e.g., "fidget spinner" described as "A popular spinning toy intended as a stress reliever"). Specifically, we use a combination of contrastive learning, a hybrid lexico-semantic similarity module and automated description collection to train SEMSUP-XC efficiently over extremely large class spaces. SEMSUP-XC significantly outperforms baselines and state-of-the-art models on all three datasets, by up to 6-10 precision@1 points on zero-shot classification and >10 precision points on few-shot classification, with similar gains for recall@10 (3 for zero-shot and 2 for few-shot). Our ablation studies and qualitative analyses demonstrate the relative importance of our various improvements and show that SEMSUP-XC's automated pipeline offers a consistently efficient method for extreme classification.
1 INTRODUCTION
Extreme classification (XC) studies multi-class and multi-label classification problems with a large number of classes, ranging from thousands to millions (Bengio et al., 2019; Bhatia et al., 2015; Chang et al., 2019; Lin et al., 2014; Jiang et al., 2021). The paradigm has multiple real-world applications including movie and product recommendation, search engines, and e-commerce product tagging. Moreover, in practical scenarios where XC is deployed, environments are constantly changing, with new classes with zero or few labeled examples being added. Recent work such as ZestXML (Gupta et al., 2021), MACLR (Xiong et al., 2022), LightXML (Jiang et al., 2021), and GROOV (Simig et al., 2022) has explored zero-shot and few-shot extreme classification (ZS-XC and FS-XC). These setups are challenging because of (1) the presence of a large number of fine-grained classes which are often not mutually exclusive, (2) limited or no labeled data per class, and (3) increased computational expense and model size because of the large label space. While the aforementioned works have tried to tackle the latter two issues, they have not attempted to build a semantically rich representation of classes for improved classification, using only class names to represent them.
A large fine-grained label space necessitates capturing the semantics of different attributes of classes. To this end, we leverage semantic supervision (SEMSUP) (Hanjie et al., 2022), a recently proposed framework that represents classes using diverse descriptions to better capture their semantics. This design choice allows SEMSUP to better generalize to novel classes by using corresponding descriptions, compared to standard classifiers. However, SEMSUP as designed in (Hanjie et al., 2022) cannot be naively applied to XC due to several reasons: (1) SEMSUP performs full cross-entropy learning which is computationally intractable for large label spaces (2) it uses only semantic similarity between the instance and label description to measure compatibility, thus ignoring lexically similar common terms like “soccer” and “football”. (3) it uses a semi-automatic pipeline for collecting label descriptions for
classes, which requires a small amount of human intervention; this is expensive for the large label spaces we are dealing with.
We remedy these deficiencies by developing a new model SEMSUP-XC that scales to large class spaces in XC using three innovations. First, SEMSUP-XC employs a contrastive learning objective (Hadsell et al., 2006) which samples a fixed number of negative label descriptions, improving computation speed by as much as 99.9%. Second, we use a novel hybrid lexical-semantic similarity model called Relaxed-COIL (based on COIL (Gao et al., 2021b)) that combines semantic similarity of sentences with soft matching between all token pairs. And finally, we propose SEMSUP-WEB which is a fully automatic pipeline with precise heuristics to scrape high-quality descriptions.
SEMSUP-XC achieves state-of-the-art performance on three diverse XC datasets based on legal (EURLex), e-commerce (AmazonCat), and wiki (Wikipedia) domains, across three settings (zero-shot, generalized zero-shot and few-shot extreme classification – ZS-XC, GZS-XC, FS-XC). For example, on ZS-XC, SEMSUP-XC outperforms the next best baseline by 5 to 19 Precision@1 points on different datasets, and maintains the advantage across all metrics. On FS-XC, SEMSUP-XC consistently outperforms baselines by over 10 Precision@1 points on 5, 10, and 20 shot classification. Interestingly, SEMSUP-XC also outperforms larger models like T5 and Sentence Transformers (e.g., by over 30 P@1 points on EURLex) that are pre-trained on web-scale corpora, which shows the importance of contrastive learning to adapt to a specific domain. We perform several ablation studies to dissect the importance of each component in SEMSUP-XC, and also provide a qualitative error analysis of the model.
2 RELATED WORK
Extreme classification Extreme classification (XC) (Bengio et al., 2019) studies multi-class and multi-label classification problems over large label spaces. Traditionally, studies have used sparse features extracted from the bag-of-words representation of input documents (Bhatia et al., 2015; Chang et al., 2019; Lin et al., 2014), and have also explored one-versus-all binary classifiers (Babbar & Schölkopf, 2017; Yen et al., 2017; Jain et al., 2019; Dahiya et al., 2021a) and tree-based methods which utilize the label hierarchy (Prabhu et al., 2018; Wydmuch et al., 2018; Khandagale et al., 2020). Recently, neural-network (NN) based dense-feature methods have demonstrated improved accuracies due to their ability to generate semantically rich and contextual representations of text. Different studies have experimented with architectures like convolutional neural networks (Liu et al., 2017), Transformers (Chang et al., 2020; Jiang et al., 2021; Zhang et al., 2021), attention-based
networks (You et al., 2019) and shallow networks (Medini et al., 2019; Mittal et al., 2021; Dahiya et al., 2021b). While the aforementioned works show impressive performance when the labels during training and evaluation are the same, they do not consider the practical zero-shot classification scenario with unseen labels during evaluation.
Zero-shot extreme classification (ZS-XC) Zero-shot classification (ZS) (Larochelle et al., 2008) aims to predict unseen classes not encountered during training by utilizing auxiliary information like the class name or a prototype. Multiple works have attempted to improve performance in the text domain (Dauphin et al., 2014; Nam et al., 2016; Wang et al., 2018; Pappas & Henderson, 2019; Hanjie et al., 2022); however, given the large label space of XC, these cannot easily be extended because of computational expense and performance degradation. ZestXML (Gupta et al., 2021) was the first study to attempt ZS extreme classification by projecting bag-of-words input features close to corresponding label features using a sparsified linear transformation, but this limits them to using non-contextual text representations. Subsequent works have used neural networks to generate contextual text representations (Xiong et al., 2022; Simig et al., 2022; Zhang et al., 2022; Rios & Kavuluru, 2018), with MACLR (Xiong et al., 2022) using an inverse cloze pre-training step and GROOV (Simig et al., 2022) using a sequence-to-sequence generative model to predict novel labels. However, all these works have the shortcoming that they use only label names (e.g., the word "cat"), which lack semantic information to represent classes. In this work, we adapt the recently proposed method of semantic supervision (Hanjie et al., 2022) that uses semantically rich and diverse descriptions to represent classes. SEMSUP underperforms out-of-the-box, and we propose several training and modeling changes (§ 3) to achieve state-of-the-art performance on ZS XC tasks.
3 METHODOLOGY
3.1 BACKGROUND
Zero and Few-shot Extreme Classification Extreme classification (XC) covers classification problems with very large label spaces (thousands to millions of classes). Zero-shot extreme classification (ZS-XC) is a version of XC where a model is evaluated on unseen classes not encountered during training. We consider two settings: (1) Zero-shot (ZS), where the model is tested only on unseen classes, not including any train classes, and (2) Generalized zero-shot (G-ZS), where the model is tested on a combined set of train and unseen classes.
Background: Semantic supervision Semantic supervision (SEMSUP) (Hanjie et al., 2022) is a framework for zero-shot classification that represents classes using rich textual descriptions (e.g., “A form of competitive physical activity or game” for the class sports), instead of discrete IDs (e.g., Class-1, Class-2). This allows a trained model to generalize to new classes as long as their corresponding descriptions are provided. In addition to using an input encoder (fIE), SEMSUP also has an output encoder (gOE) to encode label descriptions, and makes a class prediction by measuring the compatibility of the input representation of the instance and output representation of the label description corresponding to a class.
Formally, let $C$ be the number of classes, $d$ the dimensionality of the input representation, $x_i$ the input document, $D = (d_1, \ldots, d_C)$ the descriptions corresponding to the classes, $f_{IE}(x_i) \in \mathbb{R}^d$ the input representation, and $g_{OE}(d_j) \in \mathbb{R}^d$ the output representation of the $j$-th class. We operate under the multi-label classification setting, which is the default for the extreme classification benchmarks we consider. Then, we have the probability of picking the $j$-th class as:

$$\text{SEMSUP} := P(y_j = 1 \mid x_i) = \sigma\left( g_{OE}(d_j)^\top \cdot f_{IE}(x_i) \right) \tag{1}$$
SEMSUP is trained using the binary cross-entropy (BCE) loss between the predicted probability and gold answer, where N is the number of instances in the dataset.
$$\mathcal{L}_{\text{SEMSUP}} = \frac{1}{N \cdot |C|} \sum_{(x_i, Y_i)} \sum_{j=1}^{C} \mathcal{L}_{\text{BCE}}\left( P(y_{ij} = 1 \mid x_i),\, y_{ij} \right) \tag{2}$$
where $Y_i = \{y_{i1}, \ldots, y_{iC} \mid y_{ij} \in \{0, 1\}\}$ is the set of labels for instance $x_i$, with $x_i$ belonging to class $j$ if and only if $y_{ij} = 1$.
SEMSUP relies on multiple high-quality label descriptions of classes that contain semantic information, which are semi-automatically scraped from the web and filtered by an expert in (Hanjie et al., 2022). During training, diverse descriptions are randomly sampled, endowing the model with a semantically rich representation since they contain information on different class attributes. For example, the class sports can have a definition (“A form of competitive physical activity or game”), examples (“Examples are football, cricket, and hockey”), or etymology (“Derived from a French word meaning leisure”) in each description, among other attributes.
Shortcomings for large class spaces While (Hanjie et al., 2022) show improved performance of SEMSUP on zero-shot classification, their vanilla method cannot be directly applied to extreme classification for several reasons. First, they use the binary cross-entropy loss over all the classes in the dataset (Eq. 2), which involves encoding C label descriptions for each batch. This is computationally intractable for large label spaces because of GPU memory constraints. Second, they use a simple bi-encoder model which measures the semantic similarity between the input instance and the label description. However, instances and descriptions often share lexical terms with the same or similar lemma (e.g., wrote and written), which are not directly exploited by their method. And third, although their label description collection pipeline is semi-automatic, human intervention of any form is not feasible for extreme classification datasets, especially ones with labels in the order of millions. Our method SEMSUP-XC (Section 3.2) addresses the above constraints by using: (1) contrastive learning with negative samples for improved computational speed, (2) a novel hybrid lexical-semantic similarity model for improved performance, and (3) a completely automatic description scraping pipeline with accurate heuristics for filtering poor descriptions.
3.2 SEMSUP-XC: EFFICIENT GENERALIZATION FOR ZERO-SHOT EXTREME CLASSIFICATION
We provide an overview of our method in Figure 1 and explain it in detail below.
Training using contrastive learning For datasets with a large label space (large $|C|$), we improve SEMSUP's computational speed by sampling negative classes for each instance rather than encoding the label descriptions of all classes. For instance $x_i$, consider two partitions of the labels $Y_i = \{y_{i1}, \ldots, y_{iC} \mid y_{ij} \in \{0, 1\}\}$, with $Y_i^+$ containing the positive classes ($y_{ij} = 1$) and $Y_i^-$ containing the negative classes ($y_{ij} = 0$). SEMSUP-XC is trained using all positive classes, but drawing inspiration from contrastive learning (Hadsell et al., 2006), we sample $K - |Y_i^+|$ negative classes from $Y_i^-$ instead of $C - |Y_i^+|$, with $K \ll |C|$. We refer readers to Appendix C for exact details. Intuitively, our training objective incentivizes the representations of the instance and the label descriptions of the positive classes to be similar while simultaneously increasing the distance with respect to representations of the negative classes. Furthermore, rather than picking negative labels at random, we sample hard negatives that are lexically similar to positive labels. This allows for more explicit separation of embeddings of closely related labels. A typical dataset we consider (AmazonCat) contains $|C| = 13{,}000$ and $K \approx 1000$, which makes SEMSUP-XC $12000/13000 = 92.3\%$ faster than SEMSUP. Mathematically, the training objective is:
$$\mathcal{L}_{\text{SEMSUP-XC}} = \frac{1}{N \cdot K} \sum_i \left[ \sum_{y_k \in Y_i^+} \mathcal{L}_{\text{BCE}}\left( P(y_k = 1 \mid x_i),\, y_k \right) + \sum_{l=1,\, y_l \in Y_i^-}^{K - |Y_i^+|} \mathcal{L}_{\text{BCE}}\left( P(y_l = 1 \mid x_i),\, y_l \right) \right] \tag{3}$$

We follow a similar procedure for inference, and we refer readers to Appendix C for further details.
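A minimal sketch of this sampled-negative objective is given below; the encoders are abstracted into precomputed vectors, negatives are drawn uniformly for brevity (the paper samples lexically-similar hard negatives), and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def semsup_xc_loss(input_vec, encode_descs, positive_ids, num_classes, K=1000):
    """BCE over all positive classes plus K - |Y+| sampled negatives (K << |C|).

    input_vec:    (d,) encoded document f_IE(x_i)
    encode_descs: maps a tensor of class ids to (n, d) description encodings
    """
    pos_set = set(positive_ids)
    pos = torch.tensor(positive_ids)
    neg_pool = torch.tensor([c for c in range(num_classes) if c not in pos_set])
    # Uniform sampling here; hard negatives are used in practice.
    neg = neg_pool[torch.randperm(len(neg_pool))[: K - len(pos)]]
    classes = torch.cat([pos, neg])
    logits = encode_descs(classes) @ input_vec                 # (K,) scores
    targets = torch.cat([torch.ones(len(pos)), torch.zeros(len(neg))])
    return F.binary_cross_entropy_with_logits(logits, targets)
```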
Hybrid lexico-semantic similarity model SEMSUP uses a bi-encoder architecture with two different BERT (Devlin et al., 2019b) models as the input and output encoder respectively. They use the final-layer representation corresponding to the [CLS] token to encode the instance and label description, with the inner product measuring the semantic similarity of the input instance and output class. Drawing inspiration from recent IR models like COIL (Gao et al., 2021b) and ColBERT (Khattab & Zaharia, 2020), we note that BERT’s semantic similarity can ignore lexical matching of words present in the input and output text which exhibit strong evidence of compatibility, thus leading to
performance degradation. COIL is a bi-encoder architecture that alleviates this by incorporating both semantic and lexical similarity. Apart from the dot product between [CLS] vectors, it also includes an exact lexical match scoring function based on the dot product of representations corresponding to tokens with exact matches in the two pieces of text considered (e.g., input text: "Capture the best moments in high quality pictures." and label description: "A camera is used to take photos."). If there are multiple occurrences of a common token type, the maximum similarity score is chosen, and the scores are then aggregated over all token types present in both sentences. Let $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ be the input instance with $n$ tokens, $d_j = (d_{j1}, d_{j2}, \ldots, d_{jm})$ be a label description of class $j$ with $m$ tokens, $v^{x_i}_{cls}$ and $v^{d_j}_{cls}$ be the [CLS] representations of the input and label description, and $v^{x_i}_k$ and $v^{d_j}_l$ be the token representations of the $k$-th and $l$-th tokens of $x_i$ and $d_j$ respectively. Mathematically, the probability of picking class $j$ for vanilla SEMSUP and SEMSUP + COIL is:
$$\text{SEMSUP} := P(y_j = 1 \mid x_i) = \sigma\left( (v^{x_i}_{cls})^\top \cdot v^{d_j}_{cls} \right)$$

$$\text{SEMSUP + COIL} := P(y_j = 1 \mid x_i) = \sigma\left( (v^{x_i}_{cls})^\top \cdot v^{d_j}_{cls} + \sum_{w \in x_i \cap d_j} \max_{w = x_{ik} = d_{jl}} \left( (v^{x_i}_k)^\top v^{d_j}_l \right) \right) \tag{4}$$

where exact lexical match is used to get the list of common tokens – $w \in x_i \cap d_j$. As a result of the exact match, COIL has the drawback that semantically similar tokens (e.g., pictures and photos in the above sentence pair) and words with the same lemma (e.g., walk and walking) are treated as dissimilar tokens. To avoid this, we propose the use of soft lexical matching based on token clustering and lemmatization in our model Relaxed-COIL. We create clusters of tokens based on two characteristics: 1) the BERT token-embedding similarity (Reimers & Gurevych, 2019) and 2) the lemma of the word. Let $CL(w)$ denote the cluster membership of token $w$, with $CL(w_i) = CL(w_j)$ if and only if they have a high token-embedding similarity or share the same lemma. Instead of the exact lexical match in COIL, we use a soft lexical match which only checks whether two tokens belong to the same cluster, thus allowing the model to exploit the semantic similarity of tokens. Mathematically, Relaxed-COIL computes the probability as follows, where $CL(x_i) = \{CL(x_{i1}), \ldots, CL(x_{in})\}$ denotes the set of cluster memberships of tokens in $x_i$:
$$\text{Relaxed-COIL} := P(y_j = 1 \mid x_i) = \sigma\left( (v^{x_i}_{cls})^\top \cdot v^{d_j}_{cls} + \sum_{idx \in CL(x_i) \cap CL(d_j)} \max_{idx = CL(x_{ik}) = CL(d_{jl})} \left( (v^{x_i}_k)^\top v^{d_j}_l \right) \right) \tag{5}$$

Automatically collecting high-quality descriptions SEMSUP uses a semi-automatic pipeline for collecting multiple descriptions per class, with an expert required to filter irrelevant ones. However, in our case, large label spaces (e.g., 1 million for Wikipedia) make any degree of human involvement infeasible. To alleviate this, we create a completely automatic pipeline for collecting descriptions, which includes heuristics for removing spam, advertisements, and irrelevant descriptions; we detail the list of heuristics in Appendix B. In addition to web-scraped label descriptions, we utilize label-hierarchy information when provided by the dataset (EURLex and AmazonCat), which allows us to encode properties of parent and child classes wherever present. Further details are present in Appendix B.2. As we show in the ablation study (§ 5.3), both the label descriptions that we collect and the label hierarchy provide significant performance boosts.
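To make the scoring in Eq. 5 concrete, the sketch below computes the Relaxed-COIL compatibility for one document-description pair, assuming token vectors and cluster ids CL(·) have already been computed; it is an illustration, not the optimized implementation.

```python
import torch

def relaxed_coil_score(x_cls, d_cls, x_toks, d_toks, x_clusters, d_clusters):
    """Relaxed-COIL compatibility score (Eq. 5).

    x_cls, d_cls:           (d,) [CLS] vectors of document and description
    x_toks, d_toks:         (n, d) and (m, d) token vectors
    x_clusters, d_clusters: length-n / length-m lists of cluster ids CL(.)
    """
    score = x_cls @ d_cls                        # semantic [CLS] similarity
    shared = set(x_clusters) & set(d_clusters)   # soft lexical matches
    for cid in shared:
        xi = [i for i, c in enumerate(x_clusters) if c == cid]
        dj = [j for j, c in enumerate(d_clusters) if c == cid]
        # Max token-level similarity over all pairs within the shared cluster.
        score = score + (x_toks[xi] @ d_toks[dj].T).max()
    return torch.sigmoid(score)
```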
4 EXPERIMENTAL SETUP
Datasets We evaluate our model on three diverse public benchmark datasets: EURLex-4.3K (Chalkidis et al., 2019), a legal document classification dataset with 4.3K classes; AmazonCat-13K (McAuley & Leskovec, 2013), an e-commerce product tagging dataset including Amazon product descriptions and titles with 13K categories; and Wikipedia-1M (Gupta et al., 2021), an article classification dataset made up of over 5 million Wikipedia articles
with 1 million categories. We provide detailed statistics about the number of instances and classes in train and test set in Table 1. We refer readers to Appendix A.2 for additional details.
Baselines We perform extensive experiments with seven diverse baselines. 1) TF-IDF performs a nearest-neighbor match between the sparse tf-idf features of the input and the label description. 2) T5 (Raffel et al., 2019) is a large sequence-to-sequence model which has been pre-trained on 750GB of unsupervised data and further fine-tuned on MNLI (Williams et al., 2018), allowing us to check whether a label description entails an input instance; ranking is done over the top 50 labels predicted by TF-IDF. 3) Sentence Transformer (Reimers & Gurevych, 2019) is a semantic text similarity model fine-tuned with a contrastive learning objective on over 1 billion sentence pairs; we rank labels based on the similarity of their output embeddings with the document's embedding. The latter two baselines use significantly more data than SEMSUP-XC, and T5 has 9× the parameters. The aforementioned baselines are unsupervised and not fine-tuned on our datasets. The following baselines are previously proposed supervised models which are fine-tuned on the datasets we consider. 4) ZestXML (Gupta et al., 2021) learns a highly sparsified linear transformation between sparse input and label features. 5) MACLR (Xiong et al., 2022) is a bi-encoder based model pre-trained on two self-supervised learning tasks, Inverse Cloze Task (Lee et al., 2019) and SimCSE (Gao et al., 2021c), to improve extreme classification, and we fine-tune it on the datasets considered. 6) GROOV (Simig et al., 2022) is a T5 model that learns to generate both seen and unseen labels given an input document. 7) SPLADE (Formal et al., 2021) is a sparse neural retrieval model that learns label/document sparse expansions via a BERT masked language modeling head; it is among the current state of the art for out-of-domain information retrieval. To make comparisons with SEMSUP-XC fair, we fine-tune and re-evaluate the above models on the datasets we consider while including label descriptions and label hierarchy information. We refer readers to Appendix A.2 for additional details.
SEMSUP-XC implementation details We use the BERT-base model (Devlin et al., 2019a) as the backbone for the input encoder and the BERT-small model (Turc et al., 2019) for the output encoder. SEMSUP-XC follows the model architecture described in Section 3.2 and we use contrastive learning (Hadsell et al., 2006) to train our models. During training, we randomly sample 1000 − p negatives for each instance, where p is the number of positive labels for the instance. At inference, to improve computational efficiency, we precompute the output representations of the label descriptions. We use the AdamW optimizer (Loshchilov & Hutter, 2019) and tune our hyperparameters using grid search on the respective validation sets. We provide further details in Appendix A.1.
Evaluation setting and metrics We evaluate all models in three different settings: zero-shot classification (ZS) on a set of unseen classes, generalized zero-shot classification (G-ZS) on a combined set of seen and unseen classes, and few-shot classification (FS) on a set of classes with minimal amounts of supervised data (1 to 20 examples per class). We use Precision@K and Recall@K (with multiple values of K) as our evaluation metrics, as is standard practice. Precision@K measures how accurate the top-K predictions of the model are, and Recall@K measures what fraction of the correct labels are present in the top-K predictions. They are defined as $P@k = \frac{1}{k} \sum_{i \in \text{rank}_k(\hat{y})} y_i$ and $R@k = \frac{1}{\sum_i y_i} \sum_{i \in \text{rank}_k(\hat{y})} y_i$, where $\text{rank}_k(\hat{y})$ is the set of top-K predictions.
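For reference, a minimal NumPy sketch of these two metrics for a single instance (function and argument names are illustrative):

```python
import numpy as np

def precision_recall_at_k(scores, labels, k):
    """scores: (C,) model scores for one instance; labels: (C,) binary truth."""
    topk = np.argsort(-scores)[:k]                 # rank_k(y_hat)
    hits = labels[topk].sum()
    return hits / k, hits / max(labels.sum(), 1)   # P@k, R@k
```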
5 RESULTS
5.1 ZERO-SHOT EXTREME CLASSIFICATION
We consider two variants of baselines: with and without descriptions. We provide the label hierarchy as output supervision in both cases. Table 2 shows that SEMSUP-XC significantly outperforms baselines on all datasets and metrics, under both zero-shot (ZS-XC) and generalized zero-shot (GZS-XC) settings. On ZS-XC, SEMSUP-XC outperforms MACLR by over 20, 13, and 15 P@1 points on the three datasets, respectively, even though MACLR uses XC-specific pre-training (Inverse Cloze Task and SimCSE), while SEMSUP-XC does not. SEMSUP-XC also outperforms GROOV (e.g., by over 45 P@1 points on EURLex), which uses a T5 seq2seq model pre-trained on significantly more data than BERT; this is likely because GROOV's output space is unconstrained. SEMSUP-XC's semantic understanding of instances and labels stands out against ZestXML, which uses sparse non-contextual features, with the former consistently scoring twice as high as the latter. Interestingly, TF-IDF performs better than all other baselines for EURLex ZS. This is because sparse methods often perform better than dense bi-encoders in zero-shot settings (Thakur et al., 2021), as the latter fail to capture fine-grained information. However, due to the introduction of Relaxed-COIL in our method, SEMSUP-XC can perform fine-grained lexical matching on descriptions along with capturing deep semantic information, thus resulting in superior ZS and GZS scores. SEMSUP-XC also outperforms the other unsupervised baselines of T5 and Sentence-Transformer, even though the latter two are pre-trained on significantly larger amounts of data than BERT (T5 uses 50× more data than our base model).
In addition, SEMSUP-XC achieves higher recall on all datasets, beating the best-performing baselines by 6, 18, and 5.2 R@10 points on the three datasets, respectively. Since GZS-XC includes labels seen during training in the evaluation, all methods have higher scores than on ZS-XC and the gaps between different models are smaller, but on both precision and recall metrics SEMSUP-XC again outperforms all the baselines considered by margins of 1-2 precision@1 points. Table 5 in Appendix D contains additional results with more methods and metrics.
Further, while SEMSUP-XC improves from the inclusion of descriptions, other methods show little to no advantage. This is because web-scraped descriptions are often noisy and require a suitable architecture to be exploited. Unlike other methods, SEMSUP-XC has a hybrid lexical matching module which benefits from the inclusion of descriptions. This demonstrates the combined advantage of our proposed architectural changes and the use of web-scraped descriptions.
5.2 FEW-SHOT EXTREME CLASSIFICATION
We now consider the FS-XC setup, where new classes added at evaluation time have a small number of labeled instances each (K ∈ {1, 5, 10, 20}). For the sake of completeness, we also include zero-shot performance (ZS-XC, K = 0) and report results in Figure 2. Detailed results for other metrics (which show the same trend as P@1) and implementation details regarding the creation of the few-shot splits are in Appendix E. Similar to the ZS-XC case, SEMSUP-XC outperforms the baselines for all values of K considered. As expected, SEMSUP-XC's performance increases with K because of access to more labeled data, but crucially, it continues to outperform baselines by the same margins. Interestingly, SEMSUP-XC's zero-shot performance is higher than even the few-shot scores of baselines that have access to up to K = 20 labeled samples on AmazonCat, which further strengthens the model's applicability to the XC paradigm. We also note that adding a few labeled examples seems to be more effective on EURLex than on AmazonCat, with the performance difference between K = 1 and K = 20 being 21 and 6 P@1 points respectively. Combined with the fact that performance seems to plateau for both datasets, we believe that the larger label space with rich descriptions for AmazonCat has allowed SEMSUP-XC to learn label semantics better than for EURLex.
5.3 ABLATIONS
We analyze the performance of SEMSUP-XC by conducting ablation studies and qualitative analysis on EURLex and AmazonCat for the zero-shot extreme classification setting (ZS-XC) in the following sections.
Analyzing components of SEMSUP-XC SEMSUP-XC's use of the Relaxed-COIL model and semantically rich descriptions enables it to outperform all baselines considered, and we analyze the importance of each component in Table 3. As our base model (first row) we consider SEMSUP-XC without ensembling it with TF-IDF. We note that the SEMSUP-XC base model is the best-performing variant for both datasets and on all metrics other than P@1, for which it is only 0.5 points lower. Web-scraped label descriptions are important because removing them decreases both precision and recall scores (e.g., P@1 is lower by 4 points on AmazonCat) in all settings considered. We see bigger improvements with AmazonCat, the dataset with the larger number of classes (13K), which substantiates the need for semantically rich descriptions when dealing with fine-grained classes. Label hierarchy information is similarly crucial, with large performance drops on both datasets in its absence (e.g., 26 P@1 points on AmazonCat), thus showing that access to structured hierarchy information leads to better semantic representations of labels.
On the modeling side, we observe that both exact and soft lexical matching are important for Relaxed-COIL, with their absence leading to degradations of 11 and 4 P@1 points on AmazonCat. While exact lexical matching is significantly more important for EURLex, the smaller dataset, both types of matching are important for AmazonCat, whose classes tend to be more closely related.
Automatically Augmenting Label Descriptions The previous result showed the importance of web-scraped descriptions, and we explore the effect of augmenting label descriptions to increase their number, and hence SEMSUP-XC's understanding of the class; we report the results in Table 4. We use the widely used Easy Data Augmentation (EDA) (Wei & Zou, 2019) method for description augmentation. Specifically, we apply random word deletion, random word swapping, random insertion, and synonym replacement, each with probability 0.5, on the description. We notice that augmentation improves performance on EURLex by 2, 1, and 2 P@1, P@5, and R@10 points respectively, suggesting that augmentation can be a viable way to increase the quantity of descriptions. But results on AmazonCat show that augmentation does not help and actually slightly hurts performance (e.g., by 0.8 P@1 points). Given that AmazonCat has 3× the number of labels of EURLex, we believe this shows SEMSUP-XC's effectiveness in capturing label semantics in the presence of a larger number of classes, making data augmentation redundant. However, data augmentation might be a simple tool to boost performance on smaller datasets with fewer labels or descriptions.
6 CONCLUSION
We tackle the task of extreme classification (XC) (Bengio et al., 2019), which involves very large label spaces, using the framework of semantic supervision (Hanjie et al., 2022) that uses class descriptions instead of label IDs. Our method SEMSUP-XC innovates using a combination of contrastive learning, hybrid lexico-semantic matching and automated description collection to train effectively for XC. We achieve state-of-the-art results on three standard XC benchmarks and significantly outperform prior work, while also providing several ablation studies and qualitative analyses demonstrate the relative importance of our various modeling choices. Future work can further improve description quality and use stronger models for input-output similarity to further push the boundaries on this practical task with real-world applications.
APPENDICES
A TRAINING DETAILS
A.1 HYPERPARAMETER TUNING
We tune the learning rate and batch_size using grid search. For the EURLex dataset, we use the standard validation split for choosing the best parameters. We set the input and output encoder's learning rates at 5e-5 and 1e-4, respectively. We use the same learning rates for the other two datasets. We use a batch_size of 16 on EURLex and 32 on AmazonCat and Wikipedia. For EURLex, we train our zero-shot model for a fixed 2 epochs and the generalized zero-shot model for 10 epochs. For the other 2 datasets, we train for a fixed 1 epoch. For baselines, we use the default settings as used in the respective papers.
Training
All of our models are trained end-to-end. We use the pretrained BERT model (Devlin et al., 2019b) for encoding input documents, and the BERT-small model (Turc et al., 2019) for encoding output descriptions. For training efficiency, we freeze the first two layers of the output encoder. We use contrastive learning to train our models and sample hard negatives based on TF-IDF features. All implementation was done in PyTorch and Hugging Face Transformers, and experiments were run on NVIDIA RTX 2080 and NVIDIA RTX 3090 GPUs.
A.2 BASELINES
We use the code provided by ZestXML, MACLR and GROOV for running the supervised baselines. We employ the exact implementation of TF-IDF used in ZestXML. We evaluate T5 as an NLI task (Xue et al., 2021): we separately pass the names of each of the top 100 labels predicted by TF-IDF, and rank labels based on the likelihood of entailment. We evaluate Sentence-Transformer by comparing the similarity between the embeddings of the input document and the names of the top 100 labels predicted by TF-IDF. SPLADE is a sparse neural retrieval model that learns label/document sparse expansions via a BERT masked language modeling head. We use the code provided by the authors for running the baseline. We experiment with several variants and pretrained models, and find the splade_max_CoCodenser pretrained model with low sparsity (λd = 1e-6 and λq = 1e-6) to perform best.
B LABEL DESCRIPTIONS FROM THE WEB
B.1 AUTOMATICALLY SCRAPING LABEL DESCRIPTIONS FROM THE WEB
We mine label descriptions from the web in an automated end-to-end pipeline. We issue queries of the form 'what is <class_name>' (or the constituent name in the case of Wikipedia) on the DuckDuckGo search engine. The region is set to United States (English), advertisements are turned off, and safe search is set to moderate. We set the time range from 1990 until June 2019. On average, the top 50 descriptions are scraped for each query. To further improve the scraped descriptions, we apply a series of heuristics (a simplified filtering sketch follows the list):
• We remove any incomplete sentences. Incomplete sentences do not end in a period or do not have more than one noun, verb or auxiliary verb in them. Eg: Label = Adhesives ; Removed Sentence = What is the best glue or gel for applying
• Statements with a lot of punctuation, such as semi-colons, were found to be non-informative. Descriptions with more than 10 non-period punctuation marks were removed. Eg: Label = Plant Cages & Supports ; Removed Description = Plant Cages & Supports. My Account; Register; Login; Wish List (0) Shopping Cart; Checkout $ USD $ AUD THB; R$ BRL $ CAD $ CLP $ . . .
• We used regex search to identify urls and currencies in the text. Most of such descriptions were spam and were removed. Eg: Label = Accordion Accessories ; Removed Description = Buy Accordion Accessories
Online, with Buy Now & Pay Later and Rental Options. Free Shipping on most orders over $250. Start Playing Accordion Accessories Today!
• Descriptions with small sentences(<5 words) were removed. Eg: Label = Boats ; Removed Description = Boats for Sale. Buy A Boat; Sell A Boat; Boat Buyers Guide; Boat Insurance; Boat Financing ...
• Descriptions with more than 2 interrogative sentences were filtered out. Eg: Label = Shower Curtains ; Removed Description = So you’re interested...why? you’re starting a company that makes shower curtains? or are you just fooling around? Wiki User 2010-04
• We mined top frequent n-grams from a sample of scraped descriptions, and based on it identified n-grams which were commonly used in advertisements. Examples include: ‘find great deals’, ‘shipped by’. Label = Boat compasses ; Removed Description = Shop and read reviews about Compasses at West Marine. Get free shipping on all orders to any West Marine Store near you today.
• We further remove obscene words from the datasets using an open-source library (Friedland, 2013).
• We also run a spam detection model (Grandury, 2021) on the descriptions and remove those flagged with confidence above 0.9. E.g.: Label = Phones ; Removed Description = Check out the Phones page at <company_name> — the world’s leading music technology and instrument retailer!
• Additionally, most sentences in the first person were found to be advertisements that went undetected by the previous model. Descriptions with more than 3 first-person words (such as I, me, mine) were removed. E.g.: Label = Alarm Clocks ; Removed Description = We selected the best alarm clocks by taking the necessary, well, time. We tested products with our families, waded our way through expert and real-world user opinions, and determined what models lived up to manufacturers’ claims. . . .
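A simplified sketch of several of these filters is below. The thresholds follow the heuristics above, while the punctuation set and tokenization shortcuts are illustrative; the incomplete-sentence, n-gram, and model-based checks are omitted for brevity.

```python
import re

URL_RE = re.compile(r"https?://|www\.")
CURRENCY_RE = re.compile(r"[$€£₹]|\bUSD\b|\bAUD\b|\bCAD\b")
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}

def keep_description(desc: str) -> bool:
    """Return True if a scraped description passes the spam/advertisement heuristics."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", desc) if s.strip()]
    # More than 10 non-period punctuation marks (menus, price lists) -> drop.
    if sum(desc.count(c) for c in ";:,!?&$") > 10:
        return False
    # URLs and currency mentions are almost always spam.
    if URL_RE.search(desc) or CURRENCY_RE.search(desc):
        return False
    # Short sentences (<5 words) are uninformative.
    if any(len(s.split()) < 5 for s in sentences):
        return False
    # More than 2 interrogative sentences -> drop.
    if sum(s.endswith("?") for s in sentences) > 2:
        return False
    # More than 3 first-person words is a strong advertisement signal.
    if sum(w.lower().strip(".,!?") in FIRST_PERSON for w in desc.split()) > 3:
        return False
    return True
```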
B.2 POST-PROCESSING
We further add hierarchy information in a natural-language format to the label descriptions for the AmazonCat and EURLex datasets. Precisely, we follow the format ‘key is value.’, with each key–value pair on a new line. Here key belongs to the set { ‘Description’, ‘Label’, ‘Alternate Label Names’, ‘Parents’, ‘Children’ }, and the value is a comma-separated list of the corresponding information from the hierarchy or the scraped web description. For example, consider the label ‘video surveillance’ from the EURLex dataset. We pass the text: ‘Label is video surveillance. Description is <web_scraped_description>. Parents are video communications. Alternate Label Names are camera surveillance, security camera surveillance.’ to the output encoder. For Wikipedia, a label hierarchy is not present, so we only pass the description along with the name of the label.
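A minimal sketch of this serialization step is below; the field ordering mirrors the example above, and the helper name is illustrative.

```python
def format_label_text(label, description, parents=None, children=None, alt_names=None):
    """Serialize a label record into the natural-language 'key is value.' format."""
    lines = [f"Label is {label}.", f"Description is {description}."]
    if parents:
        lines.append(f"Parents are {', '.join(parents)}.")
    if children:
        lines.append(f"Children are {', '.join(children)}.")
    if alt_names:
        lines.append(f"Alternate Label Names are {', '.join(alt_names)}.")
    return "\n".join(lines)

# Example from the EURLex dataset described above.
text = format_label_text(
    "video surveillance",
    "<web_scraped_description>",
    parents=["video communications"],
    alt_names=["camera surveillance", "security camera surveillance"],
)
```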
B.3 WIKIPEDIA DESCRIPTIONS
When labels are fine-grained, as in the Wikipedia dataset, making queries for the full label name is not possible. For example, consider the label ‘Fencers at the 1984 Summer Olympics’ from the Wikipedia categories; querying for it would link back to the same category on Wikipedia itself. Instead, we break the label names into separate constituents using a dependency parser. Then, for each constituent (‘Fencers’ and ‘Summer Olympics’), we scrape descriptions. No descriptions are scraped for constituents identified by named-entity recognition (‘1984’); their NER tag is used directly. Finally, all the scraped descriptions are concatenated in the format described above and passed to the output encoder.
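As an illustration, the constituent extraction could be approximated with an off-the-shelf parser as below; we use spaCy noun chunks and entities here as a simplified stand-in for the dependency-parser rules described above.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def label_constituents(label_name: str):
    """Split a fine-grained label into queryable constituents and NER-tagged spans."""
    doc = nlp(label_name)
    ner_tags = {ent.text: ent.label_ for ent in doc.ents}   # e.g. a DATE tag for '1984'
    constituents = [chunk.text for chunk in doc.noun_chunks  # noun chunks, e.g. 'Fencers'
                    if chunk.text not in ner_tags]
    return constituents, ner_tags

constituents, tags = label_constituents("Fencers at the 1984 Summer Olympics")
```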
B.4 DE-DUPLICATION
To ensure no overlap between our descriptions and input documents, we used the SuffixArray-based exact-match algorithm (Lee et al., 2022) with a minimum threshold of 60 characters and removed the matched descriptions.
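A simplified stand-in for this check is sketched below: instead of a suffix array, it hashes every 60-character window of the training documents and drops any description that contains a matching window. This is slower than the algorithm of Lee et al. (2022) but enforces the same criterion.

```python
def build_window_index(documents, width: int = 60) -> set:
    """Hash every `width`-character window of the training documents."""
    index = set()
    for doc in documents:
        for i in range(len(doc) - width + 1):
            index.add(hash(doc[i:i + width]))
    return index

def is_duplicate(description: str, index: set, width: int = 60) -> bool:
    """True if any 60-character span of the description appears verbatim in training data."""
    return any(hash(description[i:i + width]) in index
               for i in range(len(description) - width + 1))
```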
C CONTRASTIVE LEARNING
During training, for both EURLex and AmazonCat, we randomly sample $1000 - |Y_i^+|$ negative labels for each input document. For Wikipedia, we precompute the top 1000 labels for each input based on TF-IDF scores, and then randomly sample $1000 - |Y_i^+|$ negative labels for each document from this shortlist. At inference time, we evaluate our models on all labels for both EURLex and AmazonCat. However, evaluating on the millions of labels in Wikipedia is not computationally tractable; therefore, we evaluate only on the top 1000 labels predicted by TF-IDF for each input.
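A per-document sketch of this sampling is below; `candidate_ids` stands for the full label set (EURLex/AmazonCat) or the precomputed top-1000 TF-IDF shortlist (Wikipedia), and the function name is illustrative.

```python
import random

def sample_training_labels(positive_ids, candidate_ids, k_total=1000):
    """Return the positives plus randomly sampled negatives, k_total labels in all."""
    pos = set(positive_ids)
    negatives = [y for y in candidate_ids if y not in pos]
    sampled = random.sample(negatives, k_total - len(pos))  # 1000 - |Y_i^+| negatives
    return list(pos) + sampled
```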
D FULL RESULTS FOR ZERO-SHOT CLASSIFICATION
D.1 SPLIT CREATION
For EURLex and AmazonCat, we follow the same procedure as detailed in GROOV (Simig et al., 2022): we randomly sample k labels from all the labels present in the train set and consider the remaining labels as unseen. For EURLex, roughly 25% of labels (1,057) are unseen, and for AmazonCat roughly 50% (6,500). For Wikipedia, we use the standard splits as proposed in ZestXML (Gupta et al., 2021).
D.2 RESULTS
Table 5 contains complete results for ZS-XC across the three datasets, including additional baselines and metrics.
E FULL RESULTS FOR FEW-SHOT CLASSIFICATION
E.1 SPLIT CREATION
We iteratively select k instances of each label from the train documents. If a label has more than k documents associated with it, we drop the label from training for the extra documents (such labels are not sampled as either positives or negatives for those documents). We refer to these labels as neutral labels for convenience.
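A minimal sketch of this split-creation loop is below, assuming each training document carries a set of label IDs; the data layout and variable names are illustrative.

```python
from collections import Counter

def make_fewshot_split(documents, k: int):
    """For each document, split its labels into active positives and neutral labels."""
    seen = Counter()
    split = []
    for doc_id, labels in documents:            # iterate in a fixed order
        active, neutral = [], []
        for y in labels:
            if seen[y] < k:
                active.append(y)                # counts toward the k shots of label y
                seen[y] += 1
            else:
                neutral.append(y)               # dropped from training for this document
        split.append((doc_id, active, neutral))
    return split
```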
E.2 MODELS
We use MACLR, GROOV, and LightXML as baselines. We initialize the weights from the corresponding pre-trained models in the GZSL setting. We use the default hyperparameters for the baselines and SEMSUP models. As discussed in the previous section, neutral labels are not provided at train time for the MACLR and GROOV baselines. However, since LightXML uses a final fully-connected classification layer, we cannot selectively remove them for a particular input; therefore, we mask the loss for labels which are neutral to the documents. We additionally include scores for TF-IDF, but since it is a fully unsupervised method, only zero-shot numbers are included.
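The loss masking for LightXML can be sketched as follows; this is a minimal illustration assuming multi-hot targets and a binary mask with 1 at neutral positions.

```python
import torch
import torch.nn.functional as F

def masked_bce_loss(logits, targets, neutral_mask):
    """BCE over all labels, ignoring positions marked neutral (mask == 1)."""
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_label = per_label * (1.0 - neutral_mask)          # zero out neutral labels
    return per_label.sum() / (1.0 - neutral_mask).sum().clamp(min=1.0)
```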
E.3 RESULTS
The full results for few-shot classification are presented in Table 6.
F COMPUTATIONAL EFFICIENCY
Extreme classification necessitates that models scale well in time and memory with the number of labels at both train and test time. SEMSUP-XC uses contrastive learning for efficiency at train time. During inference, SEMSUP-XC predicts over a top-1000 shortlist produced by TF-IDF, thereby achieving sub-linear time. Further, contextualized tokens for label descriptions are computed only once and stored in memory-mapped files, decreasing computation time significantly. Overall, our computational complexity can be represented as $O(T_{IE} \cdot N + T_{OE} \cdot |Y| + k \cdot N \cdot T_{lex})$, where $T_{IE}$ and $T_{OE}$ represent the time taken by the input encoder and output encoder respectively, $N$ is the total number of input documents, $|Y|$ is the number of labels, $k$ is the shortlist size, and $T_{lex}$ is the time for the soft-lexical computation between contextualized tokens of documents and labels. In our experiments, $T_{IE} \cdot N \gg T_{OE} \cdot |Y|$ and $T_{IE} \approx k \cdot T_{lex}$. Thus, the effective computational complexity is approximately $O(T_{IE} \cdot N)$, which is comparable to other state-of-the-art extreme classification methods.
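The memory-mapped storage mentioned above can be sketched as follows: the precomputed label-token embeddings are written once and then read lazily at inference. The file name and dimensions are illustrative placeholders.

```python
import numpy as np

NUM_TOKENS, DIM = 13_000 * 128, 512   # illustrative: labels x max tokens, BERT-Small dim

# One-time: write contextualized label-token embeddings to disk.
emb = np.memmap("label_token_embs.dat", dtype=np.float32, mode="w+",
                shape=(NUM_TOKENS, DIM))
# ... fill `emb` with output-encoder representations ...
emb.flush()

# At inference: open read-only; pages load lazily, so labels outside the
# TF-IDF shortlist never touch memory.
emb = np.memmap("label_token_embs.dat", dtype=np.float32, mode="r",
                shape=(NUM_TOKENS, DIM))
shortlist_rows = emb[np.array([10, 42, 99])]   # fetch only shortlisted token rows
```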
Table 7 shows that SEMSUP-XC, when compared to other state-of-the-art XC baselines, is computationally efficient in terms of speed while demonstrating much better performance. In terms of storage, we use almost 4 times as much as MACLR, since we need to store contextualized token embeddings of each label. However, the overall storage overhead (≈ 17.9 GB) is small in comparison to the significant improvement in performance at comparable speed.
G QUALITATIVE ANALYSIS
We now perform a qualitative analysis of SEMSUP-XC’s predictions, present representative examples in Table 8, and compare them to MACLR, which is the next best performing model. Correct predictions are in bold. In the first example, even from the short text in the document, SEMSUP-XC is able to figure out that it is not just a book, but a textbook. While MACLR predicts five labels which are all similar, SEMSUP-XC is able to predict diverse labels while getting the correct label within five predictions. In the second example, SEMSUP-XC smartly realizes the content of the document is a story and hence predicts literature & fiction, whereas MACLR tries to predict labels for the contents of the story instead. This shows the nuanced understanding of the label space that SEMSUP-XC has learned. The third example portrays the semantic understanding of SEMSUP-XC’s label space. While MACLR tries to predict labels like powered mixers because of the presence of the word mixer, SEMSUP-XC is able to understand the text at a high level and predict labels like studio recording equipment, even though the document has no explicit mention of the words studio, recording, or equipment. These qualitative examples show that SEMSUP-XC’s understanding of how different fine-grained classes are related, and how documents refer to them, is better than that of the baselines considered.

1. What is the focus of the paper regarding extreme zero/few-shot multi-label classification?
2. What are the strengths and weaknesses of the proposed approach compared to other algorithms?
3. Do you have any concerns or questions regarding the methodology, experimental setup, or results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper or its contributions?
Summary Of The Paper
The authors propose a new algorithm for extreme zero/few-shot multi-label classification. The algorithm is an extension of the SEMSUP (Hanjie et al., 2022) algorithm previously introduced for the task of zero/few-shot learning, in which the main idea is to estimate the conditional probability of a label as the inner product of the instance embedding (produced by an encoder, here a BERT encoder) and the label description embedding (produced by a second encoder), passed through a sigmoid function. There are three main changes introduced in this work:
application of the popular technique of negative sampling to improve the computational performance of the algorithm,
addition of a soft lexico-semantic similarity model, similar to COIL (Gao et al., 2021b),
introduction of a protocol for scraping search engine results to obtain the label descriptions on which the algorithm relies.
The modified algorithm is named SEMSUP-XC, and it is empirically compared with some simple unsupervised baselines, as well as 3 recently introduced algorithms for zero-shot learning in the extreme classification setting, on 3 datasets. The proposed approach outperforms the other algorithms on both zero-shot and few-shot learning tasks. The authors also provide an ablation study of the introduced modifications.
Strengths And Weaknesses
Strengths:
The proposed method significantly outperforms other algorithms considered in the comparison on unseen labels.
The ablation study is a nice addition in the context of extending the SEMSUP algorithm.
Weaknesses:
The modifications to the SEMSUP algorithm are minor.
Vague descriptions in many places, especially in the experimental section, force the reader to guess what the method is.
The introduced method uses auxiliary information, while the other methods, if I remember correctly, rely only on label names and don't use data not included in the benchmark. We can also see that scraping the web for additional descriptions is similar to using other sources to generate new labeled instances for the problem. I think that for few-shot learning, including the scraped descriptions as learning examples for the other methods would be a fairer comparison.
While negative sampling is used to speed up training, there is no description of inference. I assume it is performed in linear time by calculating scores for all the labels. Since inference performance is important for XMLC, a comment on that, and on how to speed up inference in the case of this method, would be a great addition.
The authors mention they utilize the label hierarchy, and its impact is shown in the ablation study, but I don't see a clear description of how it is used in the paper. Is the label description generated by adding the names of parent labels?
The authors include the addition of web-scraped descriptions in the ablation study, but it is not clear what the variant without them is. What is the description in this case? Just the name of the class? The name of the label and its parents, if the hierarchy is available?
It's not entirely clear how the T5 and Sentence-Transformer baselines work. I guess they use the nearest neighbor between the embedding of the instance and the labels' descriptions, as in the case of the TF-IDF baseline?
Clarity, Quality, Novelty And Reproducibility
Clarity and quality: Not all details are clear from the text.
Novelty: I find the paper not very novel. The introduced method makes minor modifications to the SEMSUP algorithm (which is itself simple) to adjust it to the setting of extreme zero/few-shot classification.
Reproducibility: The paper doesn't contain all the necessary details to guarantee the reproducibility of the results. On top of the lack of details in the method description, there is also very little information about dataset creation and hyperparameters.
1. What is the focus and contribution of the paper regarding zero-shot extreme classification?
2. What are the strengths and weaknesses of the proposed approach, particularly in its reliance on earlier works and lack of novelty?
3. How does the reviewer assess the presentation and experimental setup of the paper, including the problem description and comparisons with other works?
4. Do you have any concerns regarding the inference speed and reliance on external sources for description fetching?
5. How does the reviewer evaluate the paper's clarity, quality, novelty, and reproducibility?
Summary Of The Paper
The paper presents an approach for zero-shot extreme classification. It is based on an earlier idea (SEMSUP) which uses the textual descriptions of labels to capture their semantics. This previous idea is improved for the task of extreme classification by using contrastive learning and by exploiting external sources of data from the web to scrape item/label descriptions. It is argued that the proposed method leads to improvements on benchmark extreme classification datasets.
Strengths And Weaknesses
Strengths :
The paper addresses an important problem which is gaining more traction within the domain of extreme classification, and the approach seems quite reasonable.
The experimental results are quite good, and significant improvement over state-of-the-art is demonstrated. However, for an experimental paper, it would have been better if the code was also provided.
Weaknesses :
The novelty in the paper seems rather limited, as it is based on the earlier works SEMSUP, and COIL for similarity matching. The improvements for scaling to extreme classification, (i) contrastive learning with negative sampling and (ii) soft lexical matching in equation (5) (Relaxed-COIL), are somewhat standard.
In terms of presentation, the paper lacks a concrete problem description at the beginning of Section 3. It jumps straight into Semantic Supervision, ignoring the zero-shot extreme classification setup. I think it is important to describe the problem setup precisely to avoid confusion with other XMC settings, such as long/short text, with or without label features, what part of training/test data is available, etc.
In terms of experimental comparison, this seems rather unfair to MACLR, as they don't use any label descriptions or any external source of information. An ablation could include replacing the label descriptions, which are sourced externally, with label texts only, as in MACLR. Also, I am a bit unclear about the nearest neighbor results with the sparse tf-idf data representation. Typically, nearest neighbor with sparse data has not been very promising.
It seems that inference can be slow, given that for an unseen label the description needs to be fetched by searching the web. This part is missing in the paper, and no inference timings/comparisons are given.
The paper could also benefit from a more detailed literature review of the various existing extreme classification settings, and from positioning itself accordingly.
Clarity, Quality, Novelty And Reproducibility
While the results seem quite good, the shortcomings of the paper w.r.t. clarity, quality, and novelty are given above. The submission does not provide any code, and relies on an external knowledge base.
ICLR | Title
SemSup-XC: Semantic Supervision for Extreme Classification
Abstract
Extreme classification (XC) considers the scenario of predicting over a very large number of classes (thousands to millions), with real-world applications including serving search engine results, e-commerce product tagging, and news article classification. The zero-shot version of this task involves the addition of new categories at test time, requiring models to generalize to novel classes without additional training data (e.g. one may add a new class “fidget spinner” for ecommerce product tagging). In this paper, we develop SEMSUP-XC, a model that achieves state-of-the-art zero-shot (ZS) and few-shot (FS) performance on three extreme classification benchmarks spanning the domains of law, e-commerce, and Wikipedia. SEMSUP-XC builds upon the recently proposed framework of semantic supervision that uses semantic label descriptions to represent and generalize to classes (e.g., “fidget spinner” described as “A popular spinning toy intended as a stress reliever”). Specifically, we use a combination of contrastive learning, a hybrid lexico-semantic similarity module and automated description collection to train SEMSUP-XC efficiently over extremely large class spaces. SEMSUP-XC significantly outperforms baselines and state-of-the-art models on all three datasets, by up to 6-10 precision@1 points on zero-shot classification and >10 precision points on few-shot classification, with similar gains for recall@10 (3 for zero-shot and 2 for few-shot). Our ablation studies and qualitative analyses demonstrate the relative importance of our various improvements and show that SEMSUP-XC’s automated pipeline offers a consistently efficient method for extreme classification.
1 INTRODUCTION
Extreme classification (XC) studies multi-class and multi-label classification problems with a large number of classes, ranging from thousands to millions (Bengio et al., 2019; Bhatia et al., 2015; Chang et al., 2019; Lin et al., 2014; Jiang et al., 2021). The paradigm has multiple real-world applications including movie and product recommendation, search engines, and e-commerce product tagging. Moreover, in practical scenarios where XC is deployed, environments are constantly changing, with new classes with zero or few labeled examples being added. Recent work such as ZestXML (Gupta et al., 2021), MACLR (Xiong et al., 2022), LightXML (Jiang et al., 2021), and GROOV (Simig et al., 2022) has explored zero-shot and few-shot extreme classification (ZS-XC and FS-XC). These setups are challenging because of (1) the presence of a large number of fine-grained classes which are often not mutually exclusive, (2) limited or no labeled data per class, and (3) increased computational expense and model size because of the large label space. While the aforementioned works have tried to tackle the latter two issues, they have not attempted to build a semantically rich representation of classes for improved classification, using only class names to represent them.
A large fine-grained label space necessitates capturing the semantics of different attributes of classes. To this end, we leverage semantic supervision (SEMSUP) (Hanjie et al., 2022), a recently proposed framework that represents classes using diverse descriptions to better capture their semantics. This design choice allows SEMSUP to better generalize to novel classes by using corresponding descriptions, compared to standard classifiers. However, SEMSUP as designed in (Hanjie et al., 2022) cannot be naively applied to XC for several reasons: (1) SEMSUP performs full cross-entropy learning, which is computationally intractable for large label spaces; (2) it uses only semantic similarity between the instance and label description to measure compatibility, thus ignoring lexically similar common terms like “soccer” and “football”; and (3) it uses a semi-automatic pipeline for collecting label descriptions for
classes that requires a small amount of human intervention, which is expensive for the large label spaces we are dealing with.
We remedy these deficiencies by developing a new model, SEMSUP-XC, that scales to large class spaces in XC using three innovations. First, SEMSUP-XC employs a contrastive learning objective (Hadsell et al., 2006) which samples a fixed number of negative label descriptions, improving computation speed by as much as 99.9%. Second, we use a novel hybrid lexical-semantic similarity model called Relaxed-COIL (based on COIL (Gao et al., 2021b)) that combines semantic similarity of sentences with soft matching between all token pairs. And finally, we propose SEMSUP-WEB, a fully automatic pipeline with precise heuristics to scrape high-quality descriptions.
SEMSUP-XC achieves state-of-the-art performance on three diverse XC datasets based on the legal (EURLex), e-commerce (AmazonCat), and wiki (Wikipedia) domains, across three settings (zero-shot, generalized zero-shot, and few-shot extreme classification – ZS-XC, GZS-XC, FS-XC). For example, on ZS-XC, SEMSUP-XC outperforms the next best baseline by 5 to 19 Precision@1 points on different datasets, and maintains the advantage across all metrics. On FS-XC, SEMSUP-XC consistently outperforms baselines by over 10 Precision@1 points on 5-, 10-, and 20-shot classification. Interestingly, SEMSUP-XC also outperforms larger models like T5 and Sentence Transformers (e.g., by over 30 P@1 points on EURLex) which are pre-trained on web-scale corpora; this shows the importance of contrastive learning to adapt to a specific domain. We perform several ablation studies to dissect the importance of each component in SEMSUP-XC, and also provide a qualitative error analysis of the model.
2 RELATED WORK
Extreme classification Extreme classification (XC) (Bengio et al., 2019) studies multi-class and multi-label classification problems over large label spaces. Traditionally, studies have used sparse features extracted from the bag-of-words representation of input documents (Bhatia et al., 2015; Chang et al., 2019; Lin et al., 2014), and have also explored one-versus-all binary classifiers (Babbar & Schölkopf, 2017; Yen et al., 2017; Jain et al., 2019; Dahiya et al., 2021a) and tree-based methods which utilize the label hierarchy (Prabhu et al., 2018; Wydmuch et al., 2018; Khandagale et al., 2020). Recently, neural-network (NN) based dense-feature methods have demonstrated improved accuracies due to their ability to generate semantically rich and contextual representations of text. Different studies have experimented with architectures like convolutional neural networks (Liu et al., 2017), Transformers (Chang et al., 2020; Jiang et al., 2021; Zhang et al., 2021), attention-based
networks (You et al., 2019) and shallow networks (Medini et al., 2019; Mittal et al., 2021; Dahiya et al., 2021b). While the aforementioned works show impressive performance when the labels during training and evaluation are the same, they do not consider the practical zero-shot classification scenario with unseen labels during evaluation.
Zero-shot extreme classification (ZS-XC) Zero-shot classification (ZS) (Larochelle et al., 2008) aims to predict unseen classes not encountered during training by utilizing auxiliary information like the class name or a prototype. Multiple works have attempted to improve performance for the text domain (Dauphin et al., 2014; Nam et al., 2016; Wang et al., 2018; Pappas & Henderson, 2019; Hanjie et al., 2022); however, given the large label space of XC, these cannot easily be extended because of computational expense and performance degradation. ZestXML (Gupta et al., 2021) was the first study to attempt ZS extreme classification by projecting bag-of-words input features close to corresponding label features using a sparsified linear transformation, but this limits them to using non-contextual text representations. Subsequent works have used neural networks to generate contextual text representations (Xiong et al., 2022; Simig et al., 2022; Zhang et al., 2022; Rios & Kavuluru, 2018), with MACLR (Xiong et al., 2022) using an inverse cloze pre-training step and GROOV (Simig et al., 2022) using a sequence-to-sequence generative model to predict novel labels. However, all these works have the shortcoming that they use only label names (e.g., the word “cat”), which lack semantic information to represent classes. In this work, we adapt the recently proposed method of semantic supervision (Hanjie et al., 2022) that uses semantically rich and diverse descriptions to represent classes. SEMSUP underperforms out-of-the-box, and we propose several training and modeling changes (§ 3) to achieve state-of-the-art performance on ZS-XC tasks.
3 METHODOLOGY
3.1 BACKGROUND
Zero and Few-shot Extreme Classification Extreme classification (XC) comprises classification problems with very large label spaces (thousands to millions of classes). Zero-shot extreme classification (ZS-XC) is a version of XC where a model is evaluated on unseen classes not encountered during training. We consider two settings: (1) Zero-shot (ZS), where the model is tested only on unseen classes, not containing any train classes, and (2) Generalized zero-shot (G-ZS), where the model is tested on a combined set of train and unseen classes.
Background: Semantic supervision Semantic supervision (SEMSUP) (Hanjie et al., 2022) is a framework for zero-shot classification that represents classes using rich textual descriptions (e.g., “A form of competitive physical activity or game” for the class sports), instead of discrete IDs (e.g., Class-1, Class-2). This allows a trained model to generalize to new classes as long as their corresponding descriptions are provided. In addition to using an input encoder (fIE), SEMSUP also has an output encoder (gOE) to encode label descriptions, and makes a class prediction by measuring the compatibility of the input representation of the instance and output representation of the label description corresponding to a class.
Formally, let $C$ be the number of classes, $d$ be the dimensionality of the input representation, $x_i$ be the input document, $D = (d_1, \ldots, d_C)$ be descriptions corresponding to the classes, $f_{IE}(x_i) \in \mathbb{R}^d$ be the input representation, and $g_{OE}(d_j) \in \mathbb{R}^d$ be the output representation of the $j$-th class. We operate under the multi-label classification setting, which is the default for the extreme classification benchmarks we consider. Then, we have the probability of picking the $j$-th class as:
$\text{SEMSUP} := P(y_j = 1 \mid x_i) = \sigma\left(g_{OE}(d_j)^{\top} \cdot f_{IE}(x_i)\right)$ (1)
SEMSUP is trained using the binary cross-entropy (BCE) loss between the predicted probability and gold answer, where N is the number of instances in the dataset.
$\mathcal{L}_{\text{SEMSUP}} = \frac{1}{N \cdot |C|} \sum_{(x_i, Y_i)} \sum_{j=1}^{C} \mathcal{L}_{BCE}\left(P(y_{ij} = 1 \mid x_i),\, y_{ij}\right)$ (2)
where $Y_i = \{y_{i1}, \ldots, y_{iC} \mid y_{ij} \in \{0, 1\}\}$ is the set of labels for instance $x_i$, with $x_i$ belonging to class $j$ if and only if $y_{ij} = 1$.
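The following is a minimal PyTorch sketch of the bi-encoder objective in Eqs. (1)-(2), assuming $f_{IE}$ and $g_{OE}$ are any encoders producing $d$-dimensional vectors; names and shapes are illustrative, not the authors' exact code.

```python
# Sketch of the SemSup bi-encoder scoring and BCE objective (Eqs. 1-2).
import torch
import torch.nn.functional as F

def semsup_loss(doc_embs: torch.Tensor, desc_embs: torch.Tensor,
                targets: torch.Tensor) -> torch.Tensor:
    """doc_embs: (N, d) from the input encoder; desc_embs: (C, d) from the
    output encoder (one sampled description per class); targets: (N, C)."""
    logits = doc_embs @ desc_embs.T        # (N, C) compatibility scores
    return F.binary_cross_entropy_with_logits(logits, targets.float())
```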
SEMSUP relies on multiple high-quality label descriptions of classes that contain semantic information, which are semi-automatically scraped from the web and filtered by an expert in (Hanjie et al., 2022). During training, diverse descriptions are randomly sampled, endowing the model with a semantically rich representation since they contain information on different class attributes. For example, the class sports can have a definition (“A form of competitive physical activity or game”), examples (“Examples are football, cricket, and hockey”), or etymology (“Derived from a French word meaning leisure”) in each description, among other attributes.
Shortcomings for large class spaces While (Hanjie et al., 2022) show improved performance of SEMSUP on zero-shot classification, their vanilla method cannot be directly applied to extreme classification due to several reasons. First, they use the binary cross-entropy loss over all the classes in the dataset (Eq 2), which involves encoding $C$ label descriptions for each batch. This is computationally intractable for large label spaces because of GPU memory constraints. Second, they use a simple bi-encoder model which measures the semantic similarity between the input instance and the label description. However, instances and descriptions often share lexical terms with the same or similar lemma (e.g., wrote and written), which are not directly exploited by their method. And third, although their label description collection pipeline is semi-automatic, human intervention of any form is not feasible for extreme classification datasets, especially ones with labels in the order of millions. Our method SEMSUP-XC (Section 3.2) addresses the above constraints by using: (1) contrastive learning with negative samples for improved computational speed, (2) a novel hybrid lexical-semantic similarity model for improved performance, and (3) a completely automatic description scraping pipeline with accurate heuristics for filtering poor descriptions.
3.2 SEMSUP-XC: EFFICIENT GENERALIZATION FOR ZERO-SHOT EXTREME CLASSIFICATION
We provide an overview of our method in Figure 1 and explain it in detail below.
Training using contrastive learning For datasets with a large label space (large $|C|$), we improve SEMSUP’s computational speed by sampling negative classes for each instance rather than encoding the label descriptions of all classes. For instance $x_i$, consider two partitions of the labels $Y_i = \{y_{i1}, \ldots, y_{iC} \mid y_{ij} \in \{0, 1\}\}$, with $Y_i^+$ containing the positive classes ($y_{ij} = 1$) and $Y_i^-$ containing the negative classes ($y_{ij} = 0$). SEMSUP-XC is trained using all positive classes ($|Y_i^+|$), but drawing inspiration from contrastive learning (Hadsell et al., 2006), we sample $K - |Y_i^+|$ negative classes from $Y_i^-$ instead of $C - |Y_i^+|$, with $K \ll |C|$. We refer readers to Appendix C for exact details. Intuitively, our training objective incentivizes the representations of the instance and the label descriptions of the positive classes to be similar, while simultaneously increasing the distance with respect to representations of the negative classes. Furthermore, rather than picking negative labels at random, we sample hard negatives that are lexically similar to positive labels. This allows for more explicit separation of the embeddings of closely related labels. A typical dataset we consider (AmazonCat) contains $|C| = 13{,}000$ and $K \approx 1000$, which leads to SEMSUP-XC being $\frac{12000}{13000} \approx 92.3\%$ faster than SEMSUP. Mathematically, the following is the training objective:
$\mathcal{L}_{\text{SEMSUP-XC}} = \frac{1}{N \cdot K} \sum_i \Bigg[ \sum_{y_k \in Y_i^+} \mathcal{L}_{BCE}\left(P(y_k = 1 \mid x_i), y_k\right) + \sum_{\substack{l = 1,\; y_l \in Y_i^-}}^{K - |Y_i^+|} \mathcal{L}_{BCE}\left(P(y_l = 1 \mid x_i), y_l\right) \Bigg]$ (3)

We follow a similar procedure for inference, and we refer readers to Appendix C for further details.
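The sketch below illustrates the positive-plus-sampled-negative class selection for Eq. (3). The `hard_neg_ids` pool (lexically similar labels, e.g. from TF-IDF) and the 50/50 hard/easy split are illustrative assumptions rather than the paper's exact recipe.

```python
# Sketch of sampling K classes per instance for the contrastive objective.
import random

def sample_classes(pos_ids, all_ids, hard_neg_ids, K=1000, hard_frac=0.5):
    n_neg = K - len(pos_ids)
    candidates = sorted(set(hard_neg_ids) - set(pos_ids))
    hard = random.sample(candidates, min(int(hard_frac * n_neg), len(candidates)))
    easy_pool = sorted(set(all_ids) - set(pos_ids) - set(hard))
    easy = random.sample(easy_pool, n_neg - len(hard))
    return list(pos_ids) + hard + easy     # K classes fed to the output encoder
```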
Hybrid lexico-semantic similarity model SEMSUP uses a bi-encoder architecture with two different BERT (Devlin et al., 2019b) models as the input and output encoder respectively. They use the final-layer representation corresponding to the [CLS] token to encode the instance and label description, with the inner product measuring the semantic similarity of the input instance and output class. Drawing inspiration from recent IR models like COIL (Gao et al., 2021b) and ColBERT (Khattab & Zaharia, 2020), we note that BERT’s semantic similarity can ignore lexical matching of words present in the input and output text which exhibit strong evidence of compatibility, thus leading to
performance degradation. COIL is a bi-encoder architecture which alleviates this by incorporating both semantic and lexical similarity. Apart from the dot product between [CLS] vectors, they also include an exact lexical match scoring function which is based on the dot product of representations corresponding to tokens with exact matches in the two pieces of text considered (e.g., input text: “Capture the best moments in high quality pictures.” and label description: “A camera is used to take photos.”). If there are multiple occurrences of a common token type, the maximum similarity score is chosen, and the scores are then aggregated over all token types that are present in both sentences. Let $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ be the input instance with $n$ tokens, $d_j = (d_{j1}, d_{j2}, \ldots, d_{jm})$ be a label description of class $j$ with $m$ tokens, $v_{cls}^{x_i}$ and $v_{cls}^{d_j}$ be the [CLS] representations of the input and label description, and $v_{k}^{x_i}$ and $v_{l}^{d_j}$ be the token representations of the $k$-th and $l$-th tokens of $x_i$ and $d_j$ respectively. Mathematically, the following is the probability of picking class $j$ for both vanilla SEMSUP and SEMSUP + COIL:
$\text{SEMSUP} := P(y_j = 1 \mid x_i) = \sigma\left(v_{cls}^{x_i \top} \cdot v_{cls}^{d_j}\right)$

$\text{SEMSUP + COIL} := P(y_j = 1 \mid x_i) = \sigma\left(v_{cls}^{x_i \top} \cdot v_{cls}^{d_j} + \sum_{w \in x_i \cap d_j} \max_{w = x_{ik} = d_{jl}} \left(v_{k}^{x_i \top} v_{l}^{d_j}\right)\right)$ (4)

where exact lexical match is used to get the list of common tokens – $w \in x_i \cap d_j$. As a result of the exact match, COIL has the drawback that semantically similar tokens (e.g., pictures and photos in the above sentence pair) and words with the same lemma (e.g., walk and walking) are treated as dissimilar tokens. To avoid this, we propose the use of soft lexical matching based on token clustering and lemmatization in our model Relaxed-COIL. We create clusters of tokens based on two characteristics: 1) the BERT token-embedding similarity (Reimers & Gurevych, 2019) and 2) the lemma of the word. Let $CL(w)$ denote the cluster membership of the token $w$, with $CL(w_i) = CL(w_j)$ if and only if they have a high token-embedding similarity or if they share the same lemma. Instead of using the exact lexical match in COIL, we use a soft lexical match which only checks whether two tokens belong to the same cluster, thus allowing the model to exploit the semantic similarity of tokens. Mathematically, Relaxed-COIL computes the probability as follows, where $CL(x_i) = \{CL(x_{i1}), \ldots, CL(x_{in})\}$ denotes the set of cluster memberships of tokens in $x_i$.
$\text{Relaxed-COIL} := P(y_j = 1 \mid x_i) = \sigma\left(v_{cls}^{x_i \top} \cdot v_{cls}^{d_j} + \sum_{idx \in CL(x_i) \cap CL(d_j)} \max_{idx = CL(x_{ik}) = CL(d_{jl})} \left(v_{k}^{x_i \top} v_{l}^{d_j}\right)\right)$ (5)

Automatically collecting high-quality descriptions SEMSUP uses a semi-automatic pipeline for collecting multiple descriptions for classes, with an expert required for filtering irrelevant ones. However, in our case, large label spaces (e.g., 1 million for Wikipedia) make any degree of human involvement infeasible. To alleviate this, we create a completely automatic pipeline for collecting descriptions, which includes heuristics for removing spam, advertisements, and irrelevant descriptions; we detail the list of heuristics used in Appendix B. In addition to web-scraped label descriptions, we utilize label-hierarchy information if provided by the dataset (EURLex and AmazonCat), which allows us to encode properties of parent and child classes wherever present. Further details are present in Appendix B.2. As we show in the ablation study (§ 5.3), both the label descriptions that we collect and the label hierarchy provide significant performance boosts.
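To make Eq. (5) concrete, the following is a minimal sketch of the Relaxed-COIL score for one document-description pair. Cluster ids (shared lemma or high token-embedding similarity) are assumed to be precomputed per token; the function and argument names are illustrative.

```python
# Sketch of the Relaxed-COIL score in Eq. (5).
import torch

def relaxed_coil_score(cls_x, cls_d, tok_x, tok_d, clus_x, clus_d):
    """cls_*: (d,) [CLS] vectors; tok_x: (n, d), tok_d: (m, d) token vectors;
    clus_x, clus_d: lists of n / m precomputed cluster ids."""
    score = cls_x @ cls_d                                # semantic [CLS] similarity
    for c in set(clus_x) & set(clus_d):                  # soft lexical overlap
        xi = [i for i, cc in enumerate(clus_x) if cc == c]
        dj = [j for j, cc in enumerate(clus_d) if cc == c]
        score = score + (tok_x[xi] @ tok_d[dj].T).max()  # best matching token pair
    return torch.sigmoid(score)
```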
4 EXPERIMENTAL SETUP
Datasets We evaluate our model on three diverse public benchmark datasets: EURLex-4.3K (Chalkidis et al., 2019), a legal document classification dataset with 4.3K classes; AmazonCat-13K (McAuley & Leskovec, 2013), an e-commerce product tagging dataset including Amazon product descriptions and titles with 13K categories; and Wikipedia-1M (Gupta et al., 2021), an article classification dataset made up of over 5 million Wikipedia articles with 1 million categories. We provide detailed statistics about the number of instances and classes in the train and test sets in Table 1. We refer readers to Appendix A.2 for additional details.
Baselines We perform extensive experiments with seven diverse baselines. 1) TF-IDF performs a nearest neighbour match between the sparse tf-idf features of the input and label description. 2) T5 (Raffel et al., 2019) is a large sequence-to-sequence model which has been pre-trained on 750GB of unsupervised data and further fine-tuned on MNLI (Williams et al., 2018) to allow us to check if a label description entails an input instance; ranking is done on the top 50 labels predicted by TF-IDF. 3) Sentence Transformer (Reimers & Gurevych, 2019) is a semantic text similarity model fine-tuned using a contrastive learning objective on over 1 billion sentence pairs; we rank the labels based on the similarity of their output embeddings with the document’s embedding. The latter two baselines use significantly more data than SEMSUP-XC, and T5 has 9× the parameters. The aforementioned baselines are unsupervised and not fine-tuned on our datasets. The following baselines are previously proposed supervised models which are fine-tuned on the datasets we consider. 4) ZestXML (Gupta et al., 2021) learns a highly sparsified linear transformation between sparse input and label features. 5) MACLR (Xiong et al., 2022) is a bi-encoder based model pre-trained on two self-supervised learning tasks to improve extreme classification, the Inverse Cloze Task (Lee et al., 2019) and SimCSE (Gao et al., 2021c), and we fine-tune it on the datasets considered. 6) GROOV (Simig et al., 2022) is a T5 model that learns to generate both seen and unseen labels given an input document. 7) SPLADE (Formal et al., 2021) is a sparse neural retrieval model that learns label/document sparse expansions via a BERT masked-language-modelling head; it is among the current state of the art for out-of-domain information retrieval. To make comparisons with SEMSUP-XC fair, we fine-tune and re-evaluate the above models on the datasets we consider while including label descriptions and label hierarchy information. We refer readers to Appendix A.2 for additional details.
SEMSUP-XC implementation details We use the BERT-base model (Devlin et al., 2019a) as the backbone for the input encoder and the BERT-small model (Turc et al., 2019) for the output encoder. SEMSUP-XC follows the model architecture described in Section 3.2, and we use contrastive learning (Hadsell et al., 2006) to train our models. During training, we randomly sample $1000 - p$ negatives for each instance, where $p$ is the number of positive labels for the instance. At inference, to improve computational efficiency, we precompute the output representations of label descriptions. We use the AdamW optimizer (Loshchilov & Hutter, 2019) and tune our hyperparameters using grid search on the respective validation set. We provide further details in Appendix A.1.
Evaluation setting and metrics We evaluate all models in three different settings: zero-shot classification (ZS) on a set of unseen classes, generalized zero-shot classification (G-ZS) on a combined set of seen and unseen classes, and few-shot classification (FS) on a set of classes with minimal amounts of supervised data (1 to 20 examples per class). We use Precision@K and Recall@K (with multiple values of K) as our evaluation metrics, as is standard practice. Precision@K measures how accurate the top-K predictions of the model are, and Recall@K measures what fraction of correct labels are present in the top-K predictions. They are defined as $P@k = \frac{1}{k} \sum_{i \in \text{rank}_k(\hat{y})} y_i$ and $R@k = \frac{1}{\sum_i y_i} \sum_{i \in \text{rank}_k(\hat{y})} y_i$, where $\text{rank}_k(\hat{y})$ is the set of top-K predictions.
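As a quick reference, the following is a minimal sketch of these two metrics for a single example; variable names are illustrative.

```python
# Sketch of P@k and R@k as defined above, for one example.
import numpy as np

def precision_recall_at_k(scores: np.ndarray, gold: np.ndarray, k: int):
    """scores: (C,) predicted label scores; gold: (C,) binary relevance."""
    topk = np.argsort(-scores)[:k]              # rank_k(y_hat)
    hits = gold[topk].sum()
    return hits / k, hits / max(gold.sum(), 1)  # (P@k, R@k)
```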
5 RESULTS
5.1 ZERO-SHOT EXTREME CLASSIFICATION
We consider two variants of baselines: with and without descriptions. We provide label hierarchy as output supervision in both cases. Table 2 shows that SEMSUP-XC significantly outperforms baselines on all datasets and metrics, under both zero-shot (ZS-XC) and generalized zero-shot (GZS-XC) settings. On ZS-XC, SEMSUP-XC outperforms MACLR by over 20, 13, and 15 P@1 points on the three datasets, respectively, even though MACLR uses XC-specific pre-training (Inverse Cloze Task and SimCSE), while SEMSUP-XC does not. SEMSUP-XC also outperforms GROOV (e.g., by over 45 P@1 points on EURLex), which uses a T5 seq2seq model pre-trained on significantly more data than BERT; this is likely because GROOV’s output space is unconstrained. SEMSUP-XC’s semantic understanding of instances and labels stands out against ZestXML, which uses sparse non-contextual features, with the former consistently scoring twice as high as the latter. Interestingly, TF-IDF performs better than all other baselines for EURLex ZS. This is because sparse methods often perform better than dense bi-encoders in zero-shot settings (Thakur et al., 2021), as the latter fail to capture fine-grained information. However, due to the introduction of Relaxed-COIL in our method, SEMSUP-XC can perform fine-grained lexical matching on descriptions along with capturing deep semantic information, thus resulting in superior ZS and GZS scores. SEMSUP-XC also outperforms the other unsupervised baselines, T5 and Sentence-Transformer, even though both are pre-trained on significantly larger amounts of data than BERT (T5 uses 50× more data than our base model).
In addition, SEMSUP-XC achieves higher recall on all datasets, beating the best performing baselines by 6, 18, and 5.2 R@10 points on the three datasets, respectively. Since GZS-XC includes labels seen during training in the evaluation, all methods have higher scores than on ZS-XC and the gaps between different models are smaller, but we see on both precision and recall metrics that SEMSUP-XC again outperforms all the baselines considered, by margins of 1-2 Precision@1 points. Table 5 in Appendix D contains additional results with more methods and metrics.
Further, while SEMSUP-XC improves from the inclusion of descriptions, other methods gain little to no advantage. This is because web-scraped descriptions are often noisy and need a suitable architecture to make use of them. Unlike other methods, SEMSUP-XC has a hybrid lexical matching module, which improves from the inclusion of descriptions. This demonstrates the combined advantage of our proposed architectural changes and the use of web-scraped descriptions.
5.2 FEW-SHOT EXTREME CLASSIFICATION
We now consider the FS-XC setup, where new classes added at evaluation time have a small number of labeled instances each ($K \in \{1, 5, 10, 20\}$). For the sake of completeness, we also include zero-shot performance (ZS-XC, $K = 0$) and report results in Figure 2. Detailed results for other metrics (showing the same trend as P@1) and implementation details regarding the creation of the few-shot splits are in Appendix E. Similar to the ZS-XC case, SEMSUP-XC outperforms the baselines for all values of $K$ considered. As expected, SEMSUP-XC’s performance increases with $K$ because of access to more labeled data, but crucially, it continues to outperform baselines by the same margins. Interestingly, SEMSUP-XC’s zero-shot performance is higher than even the few-shot scores of baselines that have access to up to $K = 20$ labeled samples on AmazonCat, which further strengthens the model’s applicability to the XC paradigm. We also note that adding a few labeled examples seems to be more effective on EURLex than on AmazonCat, with the performance difference between $K = 1$ and $K = 20$ being 21 and 6 P@1 points respectively. Combined with the fact that performance seems to plateau for both datasets, we believe that the larger label space with rich descriptions for AmazonCat has allowed SEMSUP-XC to learn label semantics better than for EURLex.
5.3 ABLATIONS
We analyze the performance of SEMSUP-XC by conducting ablation studies and qualitative analysis on EURLex and AmazonCat for the zero-shot extreme classification setting (ZS-XC) in the following sections.
Analyzing components of SEMSUP-XC SEMSUP-XC’s use of the Relaxed-COIL model and semantically rich descriptions enables it to outperform all baselines considered, and we analyze the importance of each component in Table 3. As our base model (first row) we consider SEMSUP-XC without ensembling it with TF-IDF. We note that the SEMSUP-XC base model is the best performing variant for both datasets and on all metrics other than P@1, for which it is only 0.5 points lower. Web-scraped label descriptions are important, because removing them decreases both precision and recall scores (e.g., P@1 is lower by 4 points on AmazonCat) in all settings considered. We see bigger improvements on AmazonCat, the dataset with the larger number of classes (13K), which substantiates the need for semantically rich descriptions when dealing with fine-grained classes. Label hierarchy information is similarly crucial, with large performance drops on both datasets in its absence (e.g., 26 P@1 points on AmazonCat), thus showing that access to structured hierarchy information leads to better semantic representations of labels.
On the modeling side, we observe that both exact and soft lexical matching are important for Relaxed-COIL, with their absence leading to 11 and 4 point P@1 degradations on AmazonCat, respectively. While exact lexical matching is significantly more important for EURLex, which is the smaller dataset, we see that both types of matching are important for AmazonCat, which tends to have classes that are more closely related.
Automatically Augmenting Label Descriptions The previous result showed the importance of web-scraped descriptions, and we explore the effect of augmenting label descriptions to increase their number, and hence SEMSUP-XC’s understanding of the class, and report the results in Table 4. We use the widely used Easy Data Augmentation (EDA) (Wei & Zou, 2019) method for description augmentation. Specifically, we apply random word deletion, random word swapping, random insertion, and synonym replacement, each with probability 0.5, on the description. We notice that augmentation improves performance on EURLex by 2, 1, and 2 P@1, P@5, and R@10 points respectively, suggesting that augmentation can be a viable way to increase the quantity of descriptions. But results on AmazonCat show that augmentation does not help and actually slightly hurts performance (e.g., by 0.8 P@1 points). Given that AmazonCat has 3× the number of labels compared to EURLex, we believe that this shows SEMSUP-XC’s effectiveness in capturing label semantics in the presence of a larger number of classes, thus making data augmentation redundant. However, we believe that data augmentation might be a simple tool to boost performance on smaller datasets with fewer labels or descriptions.
6 CONCLUSION
We tackle the task of extreme classification (XC) (Bengio et al., 2019), which involves very large label spaces, using the framework of semantic supervision (Hanjie et al., 2022) that uses class descriptions instead of label IDs. Our method SEMSUP-XC innovates using a combination of contrastive learning, hybrid lexico-semantic matching, and automated description collection to train effectively for XC. We achieve state-of-the-art results on three standard XC benchmarks and significantly outperform prior work, while also providing several ablation studies and qualitative analyses that demonstrate the relative importance of our various modeling choices. Future work can further improve description quality and use stronger models for input-output similarity to further push the boundaries of this practical task with real-world applications.
APPENDICES
A TRAINING DETAILS
A.1 HYPERPARAMETER TUNING
We tune the learning rate and batch size using grid search. For the EURLex dataset, we use the standard validation split for choosing the best parameters. We set the input and output encoders’ learning rates to 5e−5 and 1e−4, respectively. We use the same learning rates for the other two datasets. We use a batch size of 16 on EURLex and 32 on AmazonCat and Wikipedia. For EURLex, we train our zero-shot model for a fixed 2 epochs and the generalized zero-shot model for 10 epochs. For the other two datasets, we train for a fixed 1 epoch. For the baselines, we use the default settings as used in the respective papers.
Training
All of our models are trained end-to-end. We use the pretrained BERT model (Devlin et al., 2019b) for encoding input documents, and the BERT-small model (Turc et al., 2019) for encoding output descriptions. For efficiency in training, we freeze the first two layers of the output encoder. We use contrastive learning to train our models and sample hard negatives based on TF-IDF features. All implementation was done in PyTorch and Hugging Face Transformers, and experiments were run on NVIDIA RTX 2080 and NVIDIA RTX 3090 GPUs.
A.2 BASELINES
We use the code provided by ZestXML, MACLR and GROOV for running the supervised baselines. We employ the exact implementation of TF-IDF as used in ZestXML. We evaluate T5 as an NLI task (Xue et al., 2021): we separately pass the names of each of the top 100 labels predicted by TF-IDF, and rank labels based on the likelihood of entailment. We evaluate Sentence-Transformer by comparing the similarity between the embeddings of the input document and the names of the top 100 labels predicted by TF-IDF. SPLADE is a sparse neural retrieval model that learns label/document sparse expansions via a BERT masked-language-modelling head. We use the code provided by the authors for running the baselines. We experiment with various variations and pretrained models, and find the splade_max_CoCodenser pretrained model with low sparsity ($\lambda_d = 1e{-}6$ and $\lambda_q = 1e{-}6$) to perform the best.
B LABEL DESCRIPTIONS FROM THE WEB
B.1 AUTOMATICALLY SCRAPING LABEL DESCRIPTIONS FROM THE WEB
We mine label descriptions from the web in an automated end-to-end pipeline. We make queries of the form ‘what is <class_name>’ (or component name in the case of Wikipedia) on the DuckDuckGo search engine. The region is set to United States (English), advertisements are turned off, and safe search is set to moderate. We set the time range from 1990 up to June 2019. On average, the top 50 descriptions are scraped for each query. To further improve the scraped descriptions, we apply a series of heuristics (a minimal sketch of a few of them follows the list):
• We remove any incomplete sentences. Incomplete sentences do not end in a period or do not have more than one noun, verb or auxiliary verb in them. Eg: Label = Adhesives ; Removed Sentence = What is the best glue or gel for applying
• Statements with a lot of punctuation, such as semi-colons, were found to be non-informative. Descriptions with more than 10 non-period punctuation marks were removed. Eg: Label = Plant Cages & Supports ; Removed Description = Plant Cages & Supports. My Account; Register; Login; Wish List (0) Shopping Cart; Checkout $ USD $ AUD THB; R$ BRL $ CAD $ CLP $ . . .
• We used regex search to identify URLs and currencies in the text. Most such descriptions were spam and were removed. Eg: Label = Accordion Accessories ; Removed Description = Buy Accordion Accessories
Online, with Buy Now & Pay Later and Rental Options. Free Shipping on most orders over $250. Start Playing Accordion Accessories Today!
• Descriptions with small sentences (<5 words) were removed. Eg: Label = Boats ; Removed Description = Boats for Sale. Buy A Boat; Sell A Boat; Boat Buyers Guide; Boat Insurance; Boat Financing ...
• Descriptions with more than 2 interrogative sentences were filtered out. Eg: Label = Shower Curtains ; Removed Description = So you’re interested...why? you’re starting a company that makes shower curtains? or are you just fooling around? Wiki User 2010-04
• We mined the most frequent n-grams from a sample of scraped descriptions, and based on them identified n-grams which were commonly used in advertisements. Examples include: ‘find great deals’, ‘shipped by’. Label = Boat compasses ; Removed Description = Shop and read reviews about Compasses at West Marine. Get free shipping on all orders to any West Marine Store near you today.
• We further remove obscene words from the datasets using an open-source library (Friedland, 2013).
• We also run a spam detection model (Grandury, 2021) on the descriptions and remove those with a confidence threshold above 0.9. Eg: Label = Phones ; Removed Description = Check out the Phones page at <company_name> — the world’s leading music technology and instrument retailer!
• Additionally, most of the sentences in first person were found to be advertisements undetected by the previous model. Descriptions with more than 3 first-person words (such as I, me, mine) were removed. Eg: Label = Alarm Clocks ; Removed Description = We selected the best alarm clocks by taking the necessary, well, time. We tested products with our families, waded our way through expert and real-world user opinions, and determined what models lived up to manufacturers’ claims. . . .
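The following is a minimal sketch of a few of the filters above. The thresholds follow the text, while the regexes and word lists are illustrative assumptions rather than the pipeline's exact rules.

```python
# Sketch of a few of the description-filtering heuristics listed above.
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")
CURRENCY_RE = re.compile(r"[$€£]\s?\d|\d+\s?(USD|AUD|CAD|CLP|BRL|THB)")
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our", "us"}

def keep_description(text: str) -> bool:
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    if URL_RE.search(text) or CURRENCY_RE.search(text):
        return False                                    # spam / advertisements
    if sum(text.count(p) for p in ";:!?,") > 10:
        return False                                    # punctuation-heavy
    if any(len(s.split()) < 5 for s in sentences):
        return False                                    # fragmentary sentences
    if sum(s.endswith("?") for s in sentences) > 2:
        return False                                    # too many questions
    if sum(w in FIRST_PERSON for w in text.lower().split()) > 3:
        return False                                    # first-person ads
    return True
```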
B.2 POST-PROCESSING
We further add hierarchy information in a natural-language format to the label descriptions for the AmazonCat and EURLex datasets. Precisely, we follow the format of ‘key is value.’ with each key-value pair represented on a new line. Here key belongs to the set { ‘Description’, ‘Label’, ‘Alternate Label Names’, ‘Parents’, ‘Children’ }, and the value corresponds to a comma-separated list of the corresponding information from the hierarchy or the scraped web description. For example, consider the label ‘video surveillance’ from the EURLex dataset. We pass the text: ‘Label is video surveillance. Description is <web_scraped_description>. Parents are video communications. Alternate Label Names are camera surveillance, security camera surveillance.’ to the output encoder. For Wikipedia, a label hierarchy is not present, so we only pass the description along with the name of the label.
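A minimal sketch of this serialization follows; the function name and argument handling are illustrative, not the authors' exact code.

```python
# Sketch of the 'key is value.' serialization described above.
def format_label_text(label, description, parents=(), children=(), alt_names=()):
    lines = [f"Label is {label}.", f"Description is {description}."]
    if parents:
        lines.append(f"Parents are {', '.join(parents)}.")
    if children:
        lines.append(f"Children are {', '.join(children)}.")
    if alt_names:
        lines.append(f"Alternate Label Names are {', '.join(alt_names)}.")
    return "\n".join(lines)   # one key-value pair per line, fed to the output encoder

# format_label_text("video surveillance", "<web_scraped_description>",
#                   parents=["video communications"],
#                   alt_names=["camera surveillance", "security camera surveillance"])
```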
B.3 WIKIPEDIA DESCRIPTIONS
When labels are fine-grained, as in the Wikipedia dataset, making queries for the full label name is not possible. For example, consider the label ‘Fencers at the 1984 Summer Olympics’ from the Wikipedia categories; querying for it would link to the same category on Wikipedia itself. Instead, we break the label names into separate constituents using a dependency parser. Then for each constituent (‘Fencers’ and ‘Summer Olympics’), we scrape descriptions. No descriptions are scraped for constituents labelled by Named-Entity Recognition (‘1984’), and their NER tag is directly used. Finally, all the scraped descriptions are concatenated in a proper format and passed to the output encoder.
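The sketch below illustrates one way to extract such constituents, using spaCy as an illustrative dependency parser and NER choice; the exact parser used by the pipeline is not specified here.

```python
# Sketch of constituent extraction for fine-grained Wikipedia labels.
import spacy

nlp = spacy.load("en_core_web_sm")

def label_constituents(label: str):
    doc = nlp(label)
    ner_tags = {ent.text: ent.label_ for ent in doc.ents}  # e.g. {"1984": "DATE"}
    chunks = [c.text for c in doc.noun_chunks]             # queryable phrases
    return chunks, ner_tags

# label_constituents("Fencers at the 1984 Summer Olympics") yields noun-chunk
# constituents to query and NER tags to keep as-is (exact output is model-dependent).
```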
B.4 DE-DUPLICATION
To ensure no overlap between our descriptions and input documents, we used a SuffixArray-based exact-match algorithm (Lee et al., 2022) with a minimum threshold of 60 characters and removed the matched descriptions.
C CONTRASTIVE LEARNING
During training, for both EURLex and AmazonCat, we randomly sample $1000 - |Y_i^+|$ negative labels for each input document. For Wikipedia, we precompute the top 1000 labels for each input based on TF-IDF scores and then randomly sample $1000 - |Y_i^+|$ negative labels for each document. At inference time, we evaluate our models on all labels for both EURLex and AmazonCat. However, evaluation on Wikipedia’s millions of labels is not computationally tractable; therefore, we evaluate only on the top 1000 labels predicted by TF-IDF for each input.
D FULL RESULTS FOR ZERO-SHOT CLASSIFICATION
D.1 SPLIT CREATION
For EURLex and AmazonCat, we follow the same procedure as detailed in GROOV (Simig et al., 2022). We randomly sample k labels from all the labels present in the train set, and consider the remaining labels as unseen. For EURLex roughly 25% (1,057 labels) are unseen, and for AmazonCat roughly 50% (6,500 labels). For Wikipedia, we use the standard splits as proposed in ZestXML (Gupta et al., 2021).
D.2 RESULTS
Table 5 contains complete results for ZS-XC across the three datasets, including additional baselines and metrics.
E FULL RESULTS FOR FEW-SHOT CLASSIFICATION
E.1 SPLIT CREATION
We iteratively select k instances of each label in the train documents. If a label has more than k documents associated with it, we drop the label from training for the extra documents (such labels are not sampled as either positives or negatives). We refer to these labels as neutral labels for convenience.
E.2 MODELS
We use MACLR, GROOV, and LightXML as baselines. We initialize the weights from the corresponding pre-trained models in the GZSL setting. We use the default hyperparameters for the baselines and the SEMSUP models. As discussed in the previous section, neutral labels are not provided at train time for the MACLR and GROOV baselines. However, since LightXML uses a final fully-connected classification layer, we cannot selectively remove them for a particular input. Therefore, we mask the loss for labels which are neutral for a given document (see the sketch below). We additionally include scores for TF-IDF, but since it is a fully unsupervised method, only zero-shot numbers are included.
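The following is a minimal sketch of masking the loss on neutral labels for a classifier head, as described above; tensor shapes and names are illustrative.

```python
# Sketch of masking the loss terms of "neutral" labels during few-shot training.
import torch
import torch.nn.functional as F

def masked_bce(logits, targets, neutral_mask):
    """logits, targets, neutral_mask: (N, C); neutral_mask is 1 where the
    label is neutral for that document and its loss term is dropped."""
    loss = F.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction="none")
    keep = 1.0 - neutral_mask.float()
    return (loss * keep).sum() / keep.sum().clamp(min=1.0)
```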
E.3 RESULTS
The full results for few-shot classification are present in Table 6.
F COMPUTATIONAL EFFICIENCY
Extreme classification necessitates that models scale well in terms of time and memory efficiency with the number of labels at both train and test time. SEMSUP-XC uses contrastive learning for efficiency at train
time. During inference, SEMSUP-XC predicts on top-1000 shortlists produced by TF-IDF, thereby achieving sub-linear time. Further, contextualized tokens for label descriptions are computed only once and stored in memory-mapped files, thus decreasing computational time significantly. Overall, our computational complexity can be represented by $O(T_{IE} \cdot N + T_{OE} \cdot |Y| + k \cdot N \cdot T_{lex})$, where $T_{IE}$ and $T_{OE}$ represent the time taken by the input encoder and output encoder respectively, $N$ is the total number of input documents, $|Y|$ is the number of all labels, $k$ indicates the shortlist size, and $T_{lex}$ denotes the time for the soft-lexical computation between contextualized tokens of documents and labels. In our experiments, $T_{IE} \cdot N \gg T_{OE} \cdot |Y|$ and $T_{IE} \approx T_{lex} \cdot k$. Thus, the computational complexity is effectively $O(T_{IE} \cdot N)$, which is comparable to other SOTA extreme classification methods.
Table 7 shows that SEMSUP-XC, when compared to other XC state-of-the-art baselines, is computationally efficient in terms of speed while demonstrating much better performance. In terms of storage, we use almost 4 times as much as MACLR, since we need to store contextualized token embeddings of each label. However, the overall storage overhead (≈17.9 GB) is small in comparison to the significant improvement in performance at comparable speed. We provide a more detailed analysis of our method in Appendix F.
G QUALITATIVE ANALYSIS
We now perform a qualitative analysis of SEMSUP-XC’s predictions and present representative examples in Table 8, comparing them to MACLR, which is the next best performing model. Correct predictions are in bold. In the first example, even from the short text in the document, SEMSUP-XC is able to figure out that it is not just a book, but a textbook. While MACLR predicts five labels which are all similar, SEMSUP-XC is able to predict diverse labels while getting the correct label within its five predictions. In the second example, SEMSUP-XC smartly realizes the content of the document is a story and hence predicts literature & fiction, whereas MACLR tries to predict labels for the contents of the story instead. This shows the nuanced understanding of the label space that SEMSUP-XC has learned. The third example portrays the semantic understanding of SEMSUP-XC’s label space. While MACLR tries to predict labels like powered mixers because of the presence of the word mixer, SEMSUP-XC is able to understand the text at a high level and predict labels like studio recording equipment even though the document has no explicit mention of the words studio, recording, or equipment. These qualitative examples show that SEMSUP-XC’s understanding of how different fine-grained classes are related, and of how documents refer to them, is better than the baselines considered. We list more such examples in Appendix G.
2. What are the strengths and weaknesses of the proposed SEMSUP-XC framework, particularly in terms of its enhancements and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Does the reviewer have any concerns or questions about the paper's methodology or results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this work, the authors develop a method, namely SEMSUP-XC, to tackle the zero/few-shot learning problem in extreme classification. The proposed framework is an extension of an existing method (SEMSUP) to extremely large label spaces. In particular, the authors made three enhancements on top of SEMSUP: adopting contrastive learning to handle the large label space, introducing lexical matching to boost performance, and exploiting external sources of data from the web to scrape item/label descriptions. It is argued that the proposed method leads to improvements on benchmark extreme classification datasets.
Strengths And Weaknesses
Strength
The proposed method outperforms other models in most of the experiments.
The authors conducted an ablation study on how much each component contributes to the final performance.
Weaknesses
My main concern is the novelty of the work. Firstly, the proposed method is an extension of an existing work to the extreme multi-label setting. Secondly, two of the enhancements are either straightforward (contrastive learning) or a direct application of existing work (COIL). Lastly, scraping web search results for label descriptions is more of a data hack than a methodical contribution.
The comparison in Table 2 seems not entirely fair. The simple TF-IDF baseline is, in many cases, the second best model. This indicates that the label descriptions scraped from web search are informative and contributed a lot to the improved performance. With the label descriptions, would a simple BERT bi-encoder retrieval model have similar performance?
Clarity, Quality, Novelty And Reproducibility
Clarity
The paper is well presented, easy to follow and has clear notations.
Quality
The paper is technically correct and does not have ill-supported claims.
Novelty
The novelty of this work is limited. Reason same as above.
Reproducibility
I have no major concerns about reproducibility. However, I think it would help if the authors could release the label descriptions scraped from web search results.
ICLR | Title
SemSup-XC: Semantic Supervision for Extreme Classification
Abstract
Extreme classification (XC) considers the scenario of predicting over a very large number of classes (thousands to millions), with real-world applications including serving search engine results, e-commerce product tagging, and news article classification. The zero-shot version of this task involves the addition of new categories at test time, requiring models to generalize to novel classes without additional training data (e.g. one may add a new class “fidget spinner” for ecommerce product tagging). In this paper, we develop SEMSUP-XC, a model that achieves state-of-the-art zero-shot (ZS) and few-shot (FS) performance on three extreme classification benchmarks spanning the domains of law, e-commerce, and Wikipedia. SEMSUP-XC builds upon the recently proposed framework of semantic supervision that uses semantic label descriptions to represent and generalize to classes (e.g., “fidget spinner” described as “A popular spinning toy intended as a stress reliever”). Specifically, we use a combination of contrastive learning, a hybrid lexico-semantic similarity module and automated description collection to train SEMSUP-XC efficiently over extremely large class spaces. SEMSUP-XC significantly outperforms baselines and state-of-the-art models on all three datasets, by up to 6-10 precision@1 points on zero-shot classification and >10 precision points on few-shot classification, with similar gains for recall@10 (3 for zero-shot and 2 for few-shot). Our ablation studies and qualitative analyses demonstrate the relative importance of our various improvements and show that SEMSUP-XC’s automated pipeline offers a consistently efficient method for extreme classification.
1 INTRODUCTION
Extreme classification (XC) studies multi-class and multi-label classification problems with a large number of classes, ranging from thousands to millions (Bengio et al., 2019; Bhatia et al., 2015; Chang et al., 2019; Lin et al., 2014; Jiang et al., 2021). The paradigm has multiple real-world applications including movie and product recommendation, search engines, and e-commerce product tagging. Moreover, in practical scenarios where XC is deployed, environments are constantly changing, with new classes with zero or few labeled examples being added. Recent work such as ZestXML (Gupta et al., 2021), MACLR (Xiong et al., 2022), LightXML (Jiang et al., 2021), and GROOV (Simig et al., 2022) has explored zero-shot and few-shot extreme classification (ZS-XC and FS-XC). These setups are challenging because of (1) the presence of a large number of fine-grained classes which are often not mutually exclusive, (2) limited or no labeled data per class, and (3) increased computational expense and model size because of the large label space. While the aforementioned works have tried to tackle the latter two issues, they have not attempted to build a semantically rich representation of classes for improved classification, using only class names to represent them.
A large fine-grained label space necessitates capturing the semantics of different attributes of classes. To this end, we leverage semantic supervision (SEMSUP) (Hanjie et al., 2022), a recently proposed framework that represents classes using diverse descriptions to better capture their semantics. This design choice allows SEMSUP to better generalize to novel classes by using corresponding descriptions, compared to standard classifiers. However, SEMSUP as designed in (Hanjie et al., 2022) cannot be naively applied to XC for several reasons: (1) SEMSUP performs full cross-entropy learning, which is computationally intractable for large label spaces; (2) it uses only semantic similarity between the instance and label description to measure compatibility, thus ignoring lexically similar common terms like “soccer” and “football”; and (3) it uses a semi-automatic pipeline for collecting label descriptions for
classes that requires a small amount of human intervention, which is expensive for the large label spaces we are dealing with.
We remedy these deficiencies by developing a new model, SEMSUP-XC, that scales to large class spaces in XC using three innovations. First, SEMSUP-XC employs a contrastive learning objective (Hadsell et al., 2006) which samples a fixed number of negative label descriptions, improving computation speed by as much as 99.9%. Second, we use a novel hybrid lexical-semantic similarity model called Relaxed-COIL (based on COIL (Gao et al., 2021b)) that combines semantic similarity of sentences with soft matching between all token pairs. And finally, we propose SEMSUP-WEB, a fully automatic pipeline with precise heuristics to scrape high-quality descriptions.
SEMSUP-XC achieves state-of-the-art performance on three diverse XC datasets based on the legal (EURLex), e-commerce (AmazonCat), and wiki (Wikipedia) domains, across three settings (zero-shot, generalized zero-shot, and few-shot extreme classification – ZS-XC, GZS-XC, FS-XC). For example, on ZS-XC, SEMSUP-XC outperforms the next best baseline by 5 to 19 Precision@1 points on different datasets, and maintains the advantage across all metrics. On FS-XC, SEMSUP-XC consistently outperforms baselines by over 10 Precision@1 points on 5-, 10-, and 20-shot classification. Interestingly, SEMSUP-XC also outperforms larger models like T5 and Sentence Transformers (e.g., by over 30 P@1 points on EURLex) which are pre-trained on web-scale corpora; this shows the importance of contrastive learning to adapt to a specific domain. We perform several ablation studies to dissect the importance of each component in SEMSUP-XC, and also provide a qualitative error analysis of the model.
2 RELATED WORK
Extreme classification Extreme classification (XC) (Bengio et al., 2019) studies multi-class and multi-label classification problems over large label spaces. Traditionally, studies have used sparse features extracted from the bag-of-words representation of input documents (Bhatia et al., 2015; Chang et al., 2019; Lin et al., 2014), and have also explored one-versus-all binary classifiers (Babbar & Schölkopf, 2017; Yen et al., 2017; Jain et al., 2019; Dahiya et al., 2021a) and tree-based methods which utilize the label hierarchy (Prabhu et al., 2018; Wydmuch et al., 2018; Khandagale et al., 2020). Recently, neural-network (NN) based dense-feature methods have demonstrated improved accuracies due to their ability to generate semantically rich and contextual representations of text. Different studies have experimented with architectures like convolutional neural networks (Liu et al., 2017), Transformers (Chang et al., 2020; Jiang et al., 2021; Zhang et al., 2021), attention-based
networks (You et al., 2019) and shallow networks (Medini et al., 2019; Mittal et al., 2021; Dahiya et al., 2021b). While the aforementioned works show impressive performance when the labels during training and evaluation are the same, they do not consider the practical zero-shot classification scenario with unseen labels during evaluation.
Zero-shot extreme classification (ZS-XC) Zero-shot classification (ZS) (Larochelle et al., 2008) aims to predict unseen classes not encountered during training by utilizing auxiliary information like the class name or a prototype. Multiple works have attempted to improve performance for the text domain (Dauphin et al., 2014; Nam et al., 2016; Wang et al., 2018; Pappas & Henderson, 2019; Hanjie et al., 2022); however, given the large label space of XC, these cannot easily be extended because of computational expense and performance degradation. ZestXML (Gupta et al., 2021) was the first study to attempt ZS extreme classification by projecting bag-of-words input features close to corresponding label features using a sparsified linear transformation, but this limits them to using non-contextual text representations. Subsequent works have used neural networks to generate contextual text representations (Xiong et al., 2022; Simig et al., 2022; Zhang et al., 2022; Rios & Kavuluru, 2018), with MACLR (Xiong et al., 2022) using an inverse cloze pre-training step and GROOV (Simig et al., 2022) using a sequence-to-sequence generative model to predict novel labels. However, all these works have the shortcoming that they use only label names (e.g., the word “cat”), which lack semantic information to represent classes. In this work, we adapt the recently proposed method of semantic supervision (Hanjie et al., 2022) that uses semantically rich and diverse descriptions to represent classes. SEMSUP underperforms out-of-the-box, and we propose several training and modeling changes (§ 3) to achieve state-of-the-art performance on ZS-XC tasks.
3 METHODOLOGY
3.1 BACKGROUND
Zero and Few-shot Extreme Classification Extreme classification (XC) covers classification problems with very large label spaces (thousands to millions of classes). Zero-shot extreme classification (ZS-XC) is a version of XC where a model is evaluated on unseen classes not encountered during training. We consider two settings: (1) Zero-shot (ZS), where the model is tested only on unseen classes, not containing any train classes, and (2) Generalized zero-shot (G-ZS), where the model is tested on a combined set of train and unseen classes.
Background: Semantic supervision Semantic supervision (SEMSUP) (Hanjie et al., 2022) is a framework for zero-shot classification that represents classes using rich textual descriptions (e.g., “A form of competitive physical activity or game” for the class sports), instead of discrete IDs (e.g., Class-1, Class-2). This allows a trained model to generalize to new classes as long as their corresponding descriptions are provided. In addition to using an input encoder (fIE), SEMSUP also has an output encoder (gOE) to encode label descriptions, and makes a class prediction by measuring the compatibility of the input representation of the instance and output representation of the label description corresponding to a class.
Formally, let C be the number of classes, d be the dimensionality of the input representation, xi be the input document, D = (d1, . . . , dC) be descriptions corresponding to the classes, fIE (xi) ∈ Rd be the input representation, and gOE (dj) ∈ Rd be the output representation of the jth class. We operate under the multi-label classification setting, which is the default for the extreme classification benchmarks we consider. Then, we have the probability of picking the jth class as:
$$\text{SEMSUP} := P(y_j = 1 \mid x_i) = \sigma\big( g_{OE}(d_j)^{\top} \cdot f_{IE}(x_i) \big) \tag{1}$$
SEMSUP is trained using the binary cross-entropy (BCE) loss between the predicted probability and gold answer, where N is the number of instances in the dataset.
$$\mathcal{L}_{\text{SEMSUP}} = \frac{1}{N \cdot |C|} \sum_{(x_i, Y_i)} \sum_{j=1}^{C} \mathcal{L}_{\text{BCE}}\big(P(y_{ij} = 1 \mid x_i),\, y_{ij}\big) \tag{2}$$
where Yi = {yi1, . . . , yiC |yij ∈ {0, 1}} is the set of the labels for instance xi, with xi belonging to class j if and only if yij = 1.
SEMSUP relies on multiple high-quality label descriptions of classes that contain semantic information, which are semi-automatically scraped from the web and filtered by an expert in (Hanjie et al., 2022). During training, diverse descriptions are randomly sampled, endowing the model with a semantically rich representation since they contain information on different class attributes. For example, the class sports can have a definition (“A form of competitive physical activity or game”), examples (“Examples are football, cricket, and hockey”), or etymology (“Derived from a French word meaning leisure”) in each description, among other attributes.
Shortcomings for large class spaces While (Hanjie et al., 2022) show improved performance of SEMSUP on zero-shot classification, their vanilla method cannot be directly applied to extreme classification for several reasons. First, they use the binary cross-entropy loss over all the classes in the dataset (Eq 2), which involves encoding C label descriptions for each batch. This is computationally intractable for large label spaces because of GPU memory constraints. Second, they use a simple bi-encoder model which measures the semantic similarity between the input instance and the label description. However, instances and descriptions often share lexical terms with the same or similar lemma (e.g., wrote and written), which are not directly exploited by their method. And third, although their label description collection pipeline is semi-automatic, human intervention of any form is not feasible for extreme classification datasets, especially ones with labels in the order of millions. Our method SEMSUP-XC (Section 3.2) addresses the above constraints by using: (1) contrastive learning with negative samples for improved computational speed, (2) a novel hybrid lexical-semantic similarity model for improved performance, and (3) a completely automatic description scraping pipeline with accurate heuristics for filtering poor descriptions.
3.2 SEMSUP-XC: EFFICIENT GENERALIZATION FOR ZERO-SHOT EXTREME CLASSIFICATION
We provide an overview of our method in Figure 1 and explain it in detail below.
Training using contrastive learning For datasets with a large label space (large |C|), we improve SEMSUP’s computational speed by sampling negative classes for each instance rather than encoding the label descriptions of all classes. For instance $x_i$, consider two partitions of the labels $Y_i = \{y_{i1}, \dots, y_{iC} \mid y_{ij} \in \{0,1\}\}$, with $Y_i^+$ containing the positive classes ($y_{ij} = 1$) and $Y_i^-$ containing the negative classes ($y_{ij} = 0$). SEMSUP-XC is trained using all positive classes ($|Y_i^+|$), but drawing inspiration from contrastive learning (Hadsell et al., 2006), we sample $K - |Y_i^+|$ negative classes from $Y_i^-$ instead of $C - |Y_i^+|$, with $K \ll |C|$. We refer readers to Appendix C for exact details. Intuitively, our training objective incentivizes the representations of the instance and label descriptions of the positive classes to be similar while simultaneously increasing the distance with respect to representations of the negative classes. Furthermore, rather than picking negative labels at random, we sample hard negatives that are lexically similar to positive labels. This allows for more explicit separation of embeddings of closely related labels. A typical dataset we consider (AmazonCat) contains $|C| = 13{,}000$ and $K \approx 1000$, which leads to SEMSUP-XC being $\frac{12000}{13000} \approx 92.3\%$ faster than SEMSUP. Mathematically, the following is the training objective:
$$\mathcal{L}_{\text{SEMSUP-XC}} = \frac{1}{N \cdot K} \sum_{i} \left[ \sum_{y_k \in Y_i^{+}} \mathcal{L}_{\text{BCE}}\big(P(y_k = 1 \mid x_i),\, y_k\big) + \sum_{\substack{l=1, \\ y_l \in Y_i^{-}}}^{K - |Y_i^{+}|} \mathcal{L}_{\text{BCE}}\big(P(y_l = 1 \mid x_i),\, y_l\big) \right] \tag{3}$$
We follow a similar procedure for inference, and we refer readers to Appendix C for further details.
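To make the training objective concrete, the following is a minimal PyTorch sketch of one contrastive training step implementing Eq. 3 for a single instance. The encoders `input_enc` and `output_enc` and the precomputed `hard_neg_pool` of lexically similar negatives are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_step(input_enc, output_enc, x, pos_ids, hard_neg_pool, K=1000):
    """One SEMSUP-XC-style training step for a single instance (sketch of Eq. 3).

    pos_ids: class ids with y_ij = 1; hard_neg_pool: negative class ids
    pre-ranked by lexical similarity to the positive labels.
    """
    num_neg = K - len(pos_ids)                  # sample K - |Y_i^+| negatives
    neg_ids = hard_neg_pool[:num_neg]           # hard negatives instead of random ones
    label_ids = list(pos_ids) + list(neg_ids)

    h_x = input_enc(x)                          # (d,) input representation f_IE(x_i)
    h_d = output_enc(label_ids)                 # (K, d) description representations g_OE(d_j)
    logits = h_d @ h_x                          # compatibility scores before the sigmoid
    targets = torch.zeros(len(label_ids))
    targets[: len(pos_ids)] = 1.0               # positives first, sampled negatives after
    return F.binary_cross_entropy_with_logits(logits, targets)
```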
Hybrid lexico-semantic similarity model SEMSUP uses a bi-encoder architecture with two different BERT (Devlin et al., 2019b) models as the input and output encoder respectively. They use the final-layer representation corresponding to the [CLS] token to encode the instance and label description, with the inner product measuring the semantic similarity of the input instance and output class. Drawing inspiration from recent IR models like COIL (Gao et al., 2021b) and ColBERT (Khattab & Zaharia, 2020), we note that BERT’s semantic similarity can ignore lexical matching of words present in the input and output text which exhibit strong evidence of compatibility, thus leading to
performance degradation. COIL is a bi-encoder architecture which alleviates this by incorporating both semantic and lexical similarity. Apart from the dot product between [CLS] vectors, it also includes an exact lexical match scoring function based on the dot product of representations corresponding to tokens with exact matches in the two pieces of text considered (e.g., input text: “Capture the best moments in high quality pictures.” and label description “A camera is used to take photos.”). If there are multiple occurrences of a common token type, the maximum similarity score is chosen, and the scores are then aggregated over all token types that are present in both sentences. Let $x_i = (x_{i1}, x_{i2}, \dots, x_{in})$ be the input instance with $n$ tokens, $d_j = (d_{j1}, d_{j2}, \dots, d_{jm})$ be a label description of class $j$ with $m$ tokens, $v^{x_i}_{\text{cls}}$ and $v^{d_j}_{\text{cls}}$ be the [CLS] representations of the input and label description, and $v^{x_i}_{k}$ and $v^{d_j}_{l}$ be the token representations of the $k$th and $l$th tokens of $x_i$ and $d_j$, respectively. Mathematically, the following would be the probability of picking class $j$ for both vanilla SEMSUP and SEMSUP + COIL:
$$\text{SEMSUP} := P(y_j = 1 \mid x_i) = \sigma\left( {v^{x_i}_{\text{cls}}}^{\top} \cdot v^{d_j}_{\text{cls}} \right)$$
$$\text{SEMSUP + COIL} := P(y_j = 1 \mid x_i) = \sigma\left( {v^{x_i}_{\text{cls}}}^{\top} \cdot v^{d_j}_{\text{cls}} + \sum_{w \in x_i \cap d_j} \max_{w = x_{ik} = d_{jl}} \left( {v^{x_i}_{k}}^{\top} v^{d_j}_{l} \right) \right) \tag{4}$$
where exact lexical match is used to get the list of common tokens – $w \in x_i \cap d_j$. As a result of the exact match, COIL has the drawback that semantically similar tokens (e.g., pictures and photos in the above sentence pair) and words with the same lemma (e.g., walk and walking) are treated as dissimilar tokens. To avoid this, we propose the use of soft lexical matching based on token clustering and lemmatization in our model Relaxed-COIL. We create clusters of tokens based on two characteristics: 1) the BERT token-embedding similarity (Reimers & Gurevych, 2019) and 2) the lemma of the word. Let $CL(w)$ denote the cluster membership of the token $w$, with $CL(w_i) = CL(w_j)$ if and only if they have a high token-embedding similarity or if they share the same lemma. Instead of using the exact lexical match in COIL, we use a soft lexical match which only checks whether two tokens belong to the same cluster, thus allowing the model to exploit semantic similarity of tokens. Mathematically, Relaxed-COIL computes the probability as follows, where $CL(x_i) = \{CL(x_{i1}), \dots, CL(x_{in})\}$ denotes the set of cluster memberships of tokens in $x_i$:
$$\text{Relaxed-COIL} := P(y_j = 1 \mid x_i) = \sigma\left( {v^{x_i}_{\text{cls}}}^{\top} \cdot v^{d_j}_{\text{cls}} + \sum_{idx \in CL(x_i) \cap CL(d_j)} \max_{idx = CL(x_{ik}) = CL(d_{jl})} \left( {v^{x_i}_{k}}^{\top} v^{d_j}_{l} \right) \right) \tag{5}$$
Automatically collecting high-quality descriptions SEMSUP uses a semi-automatic pipeline for collecting multiple descriptions for classes, with an expert required for filtering irrelevant ones. However, in our case, large label spaces (e.g., 1 million for Wikipedia) make any degree of human involvement infeasible. To alleviate this, we create a completely automatic pipeline for collecting descriptions, which includes heuristics for removing spam, advertisements, and irrelevant descriptions; we detail the list of heuristics used in Appendix B. In addition to web-scraped label descriptions, we utilize label-hierarchy information if provided by the dataset (EURLex and AmazonCat), which allows us to encode properties about parent and children classes wherever present. Further details are present in Appendix B.2. As we show in the ablation study (§ 5.3), both the label descriptions that we collect and the label hierarchy provide significant performance boosts.
4 EXPERIMENTAL SETUP
Datasets We evaluate our model on three diverse public benchmark datasets: EURLex-4.3K (Chalkidis et al., 2019), which is a legal document classification dataset with 4.3K classes; AmazonCat-13K (McAuley & Leskovec, 2013), which is an e-commerce product tagging dataset including Amazon product descriptions and titles with 13K categories; and Wikipedia-1M (Gupta et al., 2021), which is an article classification dataset made up of over 5 million Wikipedia articles
with 1 million categories. We provide detailed statistics about the number of instances and classes in train and test set in Table 1. We refer readers to Appendix A.2 for additional details.
Baselines We perform extensive experiments with seven diverse baselines. 1) TF-IDF performs a nearest-neighbour match between the sparse tf-idf features of the input and label description. 2) T5 (Raffel et al., 2019) is a large sequence-to-sequence model which has been pre-trained on 750GB of unsupervised data and further fine-tuned on MNLI (Williams et al., 2018), allowing us to check whether a label description entails an input instance. Ranking is done on the top 50 labels predicted by TF-IDF. 3) Sentence Transformer (Reimers & Gurevych, 2019) is a semantic text similarity model fine-tuned using a contrastive learning objective on over 1 billion sentence pairs. We rank the labels based on the similarity of their output embeddings with the document’s embedding. The latter two baselines use significantly more data than SEMSUP-XC, and T5 has 9× the parameters. The aforementioned baselines are unsupervised and not fine-tuned on our datasets. The following baselines are previously proposed supervised models which are fine-tuned on the datasets we consider. 4) ZestXML (Gupta et al., 2021) learns a highly sparsified linear transformation between sparse input and label features. 5) MACLR (Xiong et al., 2022) is a bi-encoder-based model pre-trained on two self-supervised learning tasks designed to improve extreme classification—Inverse Cloze Task (Lee et al., 2019) and SimCSE (Gao et al., 2021c)—and we fine-tune it on the datasets considered. 6) GROOV (Simig et al., 2022) is a T5 model that learns to generate both seen and unseen labels given an input document. 7) SPLADE (Formal et al., 2021) is a sparse neural retrieval model that learns label/document sparse expansions via a BERT masked language modeling head. It is among the current state of the art in information retrieval on out-of-domain tasks. To make comparisons with SEMSUP-XC fair, we fine-tune and re-evaluate the above models on the datasets we consider while including label descriptions and label hierarchy information. We refer readers to Appendix A.2 for additional details.
SEMSUP-XC implementation details We use the Bert-base model (Devlin et al., 2019a) as the backbone for the input encoder and Bert-small model (Turc et al., 2019) for the output encoder. SEMSUP-XC follows the model architecture described in Section 3.2 and we use contrastive learning (Hadsell et al., 2006) to train our models. During training, we randomly sample 1000−p negatives for each instance, where p is the number of positive labels for the instance. At inference, to improve computational efficiency, we precompute the output representations of label descriptions. We use the AdamW optimizer (Loshchilov & Hutter, 2019) and tune our hyperparameters using grid search on the respective validation set. We provide further details in Appendix A.1.
Evaluation setting and metrics We evaluate all models on three different settings: Zero-shot classification (ZS) on a set of unseen classes, generalized zero-shot classification (G-ZS) on a combined set of seen and unseen classes, and few-shot classification (FS) on a set of classes with minimal amounts of supervised data (1 to 20 examples per class). We use Precision@K and Recall@K (with multiple values of K) as our evaluation metrics, as is standard practice. Precision@K measures how accurate the top-K predictions of the model are, and Recall@K measures what fraction of correct labels are present in the top-K predictions. They are mathematically defined as $P@k = \frac{1}{k} \sum_{i \in \text{rank}_k(\hat{y})} y_i$ and $R@k = \frac{1}{\sum_i y_i} \sum_{i \in \text{rank}_k(\hat{y})} y_i$, where $\text{rank}_k(\hat{y})$ is the set of top-K predictions.
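For reference, a minimal NumPy sketch of these two metrics for a single instance is shown below; the function name and the guard against label-free rows are our own additions.

```python
import numpy as np

def precision_recall_at_k(scores, y_true, k):
    """P@k and R@k for one instance; scores and y_true are length-C arrays."""
    topk = np.argsort(-scores)[:k]              # rank_k(y_hat): top-k predicted classes
    hits = y_true[topk].sum()
    p_at_k = hits / k
    r_at_k = hits / max(y_true.sum(), 1)        # guard against instances with no labels
    return p_at_k, r_at_k
```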
5 RESULTS
5.1 ZERO-SHOT EXTREME CLASSIFICATION
We consider two variants of baselines: with and without descriptions. We provide label hierarchy as output supervision in both cases. Table 2 shows that SEMSUP-XC significantly outperforms baselines on all datasets and metrics, under both zero-shot (ZS-XC) and generalized zero-shot (GZS-XC) settings. On ZS-XC, SEMSUP-XC outperforms MACLR by over 20, 13, and 15 P@1 points on the three datasets, respectively, even though MACLR uses XC-specific pre-training (Inverse Cloze Task and SimCSE) while SEMSUP-XC does not. SEMSUP-XC also outperforms GROOV (e.g., by over 45 P@1 points on EURLex), which uses a T5 seq2seq model pre-trained on significantly more data than BERT; this is likely because GROOV’s output space is unconstrained. SEMSUP-XC’s semantic understanding of instances and labels stands out against ZestXML, which uses sparse non-contextual features, with the former consistently scoring twice as high as the latter. Interestingly, TF-IDF performs better than all other baselines for EURLex ZS. This is because sparse methods often perform better than dense bi-encoders in zero-shot settings (Thakur et al., 2021), as the latter fail to capture fine-grained information. However, due to the introduction of Relaxed-COIL in our method, SEMSUP-XC can perform fine-grained lexical matching in descriptions along with capturing deep semantic information, thus resulting in superior ZS and GZS scores. SEMSUP-XC also outperforms the other unsupervised baselines of T5 and Sentence-Transformer, even though the latter two are pre-trained on significantly larger amounts of data than BERT (T5 uses 50× more data than our base model).
In addition, SEMSUP-XC achieves higher recall on all datasets, beating the best performing baselines by 6, 18, and 5.2 R@10 points on the three datasets, respectively. Since GZS-XC includes labels seen during training in the evaluation, all methods have higher scores than on ZS-XC and the gaps between different models are smaller, but we see on both precision and recall metrics that SEMSUP-XC again outperforms all the baselines considered, by margins of 1-2 Precision@1 points. Table 5 in Appendix D contains additional results with more methods and metrics.
Further, while SEMSUP-XC improves from the inclusion of descriptions, other methods gain little to no advantage. This is because web-scraped descriptions are often noisy and need a suitable architecture to make use of them. Unlike other methods, SEMSUP-XC has a hybrid lexical matching module, which benefits from the inclusion of descriptions. This demonstrates the combined advantage of our proposed architectural changes and the use of web-scraped descriptions.
5.2 FEW-SHOT EXTREME CLASSIFICATION
We now consider the FS-XC setup, where new classes added at evaluation time have a small number of labeled instances each (K ∈ {1, 5, 10, 20}). For the sake of completeness, we also include zero-shot performance (ZS-XC, K = 0) and report results in Figure 2. Detailed results for other metrics (showing the same trend as P@1) and implementation details regarding the creation of the few-shot splits are in Appendix E. Similar to the ZS-XC case, SEMSUP-XC outperforms the baselines for all values of K considered. As expected, SEMSUP-XC’s performance increases with K because of access to more labeled data, but crucially, it continues to outperform baselines by the same margins. Interestingly, SEMSUP-XC’s zero-shot performance is higher than even the few-shot scores of baselines that have access to up to K = 20 labeled samples on AmazonCat, which further strengthens the model’s applicability to the XC paradigm. We also note that adding a few labeled examples seems to be more effective on EURLex than on AmazonCat, with the performance difference between K = 1 and K = 20 being 21 and 6 P@1 points respectively. Combined with the fact that performance seems to plateau for both datasets, we believe that the larger label space with rich descriptions for AmazonCat has allowed SEMSUP-XC to learn label semantics better than for EURLex.
5.3 ABLATIONS
We analyze the performance of SEMSUP-XC by conducting ablation studies and qualitative analysis on EURLex and AmazonCat for the zero-shot extreme classification setting (ZS-XC) in the following sections.
Analyzing components of SEMSUP-XC SEMSUP-XC’s use of the Relaxed-COIL model and semantically rich descriptions enables it to outperform all baselines considered, and we analyze the importance of each component in Table 3. As our base model (first row) we consider SEMSUP-XC without ensembling it with TF-IDF. We note that the SEMSUP-XC base model is the best performing variant for both datasets and on all metrics other than P@1, for which it is only 0.5 points lower. Web-scraped label descriptions are important because removing them decreases both precision and recall scores (e.g., P@1 is lower by 4 points on AmazonCat) on all settings considered. We see bigger improvements with AmazonCat, the dataset with the larger number of classes (13K), which substantiates the need for semantically rich descriptions when dealing with fine-grained classes. Label hierarchy information is similarly crucial, with large performance drops on both datasets in its absence (e.g., 26 P@1 points on AmazonCat), thus showing that access to structured hierarchy information leads to better semantic representations of labels.
On the modeling side, we observe that both exact and soft lexical matching are important for Relaxed-COIL, with their absence leading to 11 and 4 P@1 points of degradation on AmazonCat, respectively. While exact lexical matching is significantly more important for EURLex, which is the smaller dataset, we see that both types of matching are important for AmazonCat, whose classes tend to be more closely related.
Automatically Augmenting Label Descriptions The previous result showed the importance of web-scraped descriptions, and we explore the effect of augmenting label descriptions to increase their number, and hence SEMSUP-XC’s understanding of the class, reporting the results in Table 4. We use the widely used Easy Data Augmentation (EDA) (Wei & Zou, 2019) method for description augmentation. Specifically, we apply random word deletion, random word swapping, random insertion, and synonym replacement, each with a probability of 0.5, on the description. We notice that augmentation improves performance on EURLex by 2, 1, and 2 P@1, P@5, and R@10 points respectively, suggesting that augmentation can be a viable way to increase the quantity of descriptions. But results on AmazonCat show that augmentation does not improve and actually slightly hurts performance (e.g., 0.8 P@1 points). Given that AmazonCat has 3× the number of labels compared to EURLex, we believe that this shows SEMSUP-XC’s effectiveness in capturing the label semantics in the presence of a larger number of classes, thus making data augmentation redundant. However, we believe that data augmentation might be a simple tool to boost performance on smaller datasets with fewer labels or descriptions.
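For concreteness, a minimal sketch of the EDA-style augmentation described above is given below; the `synonyms` map (e.g., derived from WordNet) is a hypothetical input, and the operation order follows our reading of the text rather than the original EDA implementation.

```python
import random

def eda_augment(description, p=0.5, synonyms=None):
    """EDA-style augmentation: each operation is applied with probability p."""
    words = description.split()
    if synonyms and random.random() < p:        # synonym replacement
        words = [random.choice(synonyms.get(w, [w])) for w in words]
    if len(words) > 1 and random.random() < p:  # random word swap
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    if len(words) > 1 and random.random() < p:  # random word deletion
        del words[random.randrange(len(words))]
    if words and random.random() < p:           # random insertion (re-insert a word)
        words.insert(random.randrange(len(words) + 1), random.choice(words))
    return " ".join(words)
```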
6 CONCLUSION
We tackle the task of extreme classification (XC) (Bengio et al., 2019), which involves very large label spaces, using the framework of semantic supervision (Hanjie et al., 2022) that uses class descriptions instead of label IDs. Our method SEMSUP-XC innovates with a combination of contrastive learning, hybrid lexico-semantic matching, and automated description collection to train effectively for XC. We achieve state-of-the-art results on three standard XC benchmarks and significantly outperform prior work, while also providing several ablation studies and qualitative analyses that demonstrate the relative importance of our various modeling choices. Future work can further improve description quality and use stronger models for input-output similarity to further push the boundaries on this practical task with real-world applications.
APPENDICES
A TRAINING DETAILS
A.1 HYPERPARAMETER TUNING
We tune the learning rate and batch_size using grid search. For the EURLex dataset, we use the standard validation split for choosing the best parameters. We set the input and output encoders’ learning rates to 5e−5 and 1e−4, respectively. We use the same learning rates for the other two datasets. We use a batch_size of 16 on EURLex and 32 on AmazonCat and Wikipedia. For EURLex, we train our zero-shot model for a fixed 2 epochs and the generalized zero-shot model for 10 epochs. For the other two datasets, we train for a fixed 1 epoch. For baselines, we use the default settings as used in the respective papers.
Training
All of our models are trained end-to-end. We use the pretrained BERT model (Devlin et al., 2019b) for encoding input documents, and the Bert-Small model (Turc et al., 2019) for encoding output descriptions. For efficiency in training, we freeze the first two layers of the output encoder. We use contrastive learning to train our models and sample hard negatives based on TF-IDF features. All implementation was done in PyTorch and Hugging Face Transformers, and experiments were run on NVIDIA RTX2080 and NVIDIA RTX3090 GPUs.
A.2 BASELINES
We use the code provided by ZestXML, MACLR, and GROOV for running the supervised baselines. We employ the exact implementation of TF-IDF as used in ZestXML. We evaluate T5 as an NLI task (Xue et al., 2021). We separately pass the names of each of the top 100 labels predicted by TF-IDF, and rank labels based on the likelihood of entailment. We evaluate Sentence-Transformer by comparing the similarity between the embeddings of the input document and the names of the top 100 labels predicted by TF-IDF. SPLADE is a sparse neural retrieval model that learns label/document sparse expansions via a BERT masked language modeling head. We use the code provided by the authors for running the baselines. We experiment with several variants and pretrained models, and find the splade_max_CoCodenser pretrained model with low sparsity (λd = 1e−6 and λq = 1e−6) to perform the best.
B LABEL DESCRIPTIONS FROM THE WEB
B.1 AUTOMATICALLY SCRAPING LABEL DESCRIPTIONS FROM THE WEB
We mine label descriptions from the web in an automated end-to-end pipeline. We make queries of the form ‘what is <class_name>’ (or a constituent name in the case of Wikipedia) on the DuckDuckGo search engine. The region is set to United States (English), advertisements are turned off, and safe search is set to moderate. We set the time range from 1990 up until June 2019. On average, the top 50 descriptions are scraped for each query. To further improve the scraped descriptions, we apply a series of heuristics (a code sketch follows the list):
• We remove any incomplete sentences. Incomplete sentences do not end in a period or do not have more than one noun, verb or auxiliary verb in them. Eg: Label = Adhesives ; Removed Sentence = What is the best glue or gel for applying
• Statements with a lot of punctuation, such as semi-colons, were found to be non-informative. Descriptions with more than 10 non-period punctuation marks were removed. Eg: Label = Plant Cages & Supports ; Removed Description = Plant Cages & Supports. My Account; Register; Login; Wish List (0) Shopping Cart; Checkout $ USD $ AUD THB; R$ BRL $ CAD $ CLP $ . . .
• We used regex search to identify urls and currencies in the text. Most of such descriptions were spam and were removed. Eg: Label = Accordion Accessories ; Removed Description = Buy Accordion Accessories
Online, with Buy Now & Pay Later and Rental Options. Free Shipping on most orders over $250. Start Playing Accordion Accessories Today!
• Descriptions with short sentences (<5 words) were removed. Eg: Label = Boats ; Removed Description = Boats for Sale. Buy A Boat; Sell A Boat; Boat Buyers Guide; Boat Insurance; Boat Financing ...
• Descriptions with more than 2 interrogative sentences were filtered out. Eg: Label = Shower Curtains ; Removed Description = So you’re interested...why? you’re starting a company that makes shower curtains? or are you just fooling around? Wiki User 2010-04
• We mined top frequent n-grams from a sample of scraped descriptions, and based on it identified n-grams which were commonly used in advertisements. Examples include: ‘find great deals’, ‘shipped by’. Label = Boat compasses ; Removed Description = Shop and read reviews about Compasses at West Marine. Get free shipping on all orders to any West Marine Store near you today.
• We further remove obscene words from the datasets using an open-source library (Friedland, 2013).
• We also run a spam detection model (Grandury, 2021) on the descriptions and remove those with a confidence threshold above 0.9. Eg: Label = Phones ; Removed Description = Check out the Phones page at <company_name> — the world’s leading music technology and instrument retailer!
• Additionally, most sentences in the first person were found to be advertisements that went undetected by the previous model. We remove descriptions with more than 3 first-person words (such as I, me, mine). Eg: Label = Alarm Clocks ; Removed Description = We selected the best alarm clocks by taking the necessary, well, time. We tested products with our families, waded our way through expert and real-world user opinions, and determined what models lived up to manufacturers’ claims. . . .
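A minimal sketch of these filters is given below; the thresholds mirror the heuristics above, while the specific regular expressions and the spam n-gram list are illustrative assumptions (the obscenity and spam-model filters are omitted).

```python
import re

SPAM_NGRAMS = ("find great deals", "shipped by")    # mined advertisement n-grams

def keep_description(desc: str) -> bool:
    """Return True if a scraped description passes the filters listed above."""
    sentences = [s.strip() for s in desc.split(".") if s.strip()]
    if any(len(s.split()) < 5 for s in sentences):  # short / incomplete sentences
        return False
    if len(re.findall(r"[;:!,\-]", desc)) > 10:     # punctuation-heavy descriptions
        return False
    if re.search(r"https?://|\$|USD|AUD|BRL|CAD", desc):  # urls and currencies
        return False
    if desc.count("?") > 2:                         # too many interrogative sentences
        return False
    if any(ng in desc.lower() for ng in SPAM_NGRAMS):
        return False
    first_person = sum(w.lower() in {"i", "me", "my", "mine"} for w in desc.split())
    return first_person <= 3                        # first-person text is usually an ad
```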
B.2 POST-PROCESSING
We further add hierarchy information in a natural language format to the label descriptions for the AmazonCat and EURLex datasets. Precisely, we follow the format of ‘key is value.’ with each key, value pair on a new line. Here key belongs to the set { ‘Description’, ‘Label’, ‘Alternate Label Names’, ‘Parents’, ‘Children’ }, and the value corresponds to a comma-separated list of the corresponding information from the hierarchy or the scraped web description. For example, consider the label ‘video surveillance’ from the EURLex dataset. We pass the text: ‘Label is video surveillance. Description is <web_scraped_description>. Parents are video communications. Alternate Label Names are camera surveillance, security camera surveillance.’ to the output encoder. For Wikipedia, a label hierarchy is not present, so we only pass the description along with the name of the label.
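A minimal sketch of this serialization, assuming the hierarchy fields have already been extracted, could look as follows:

```python
def format_label_text(label, desc, alt_names=None, parents=None, children=None):
    """Serialize a label in the 'key is value.' format, one pair per line."""
    lines = [f"Label is {label}.", f"Description is {desc}."]
    if parents:
        lines.append("Parents are " + ", ".join(parents) + ".")
    if alt_names:
        lines.append("Alternate Label Names are " + ", ".join(alt_names) + ".")
    if children:
        lines.append("Children are " + ", ".join(children) + ".")
    return "\n".join(lines)

# Reproduces the 'video surveillance' example above:
text = format_label_text(
    "video surveillance", "<web_scraped_description>",
    alt_names=["camera surveillance", "security camera surveillance"],
    parents=["video communications"],
)
```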
B.3 WIKIPEDIA DESCRIPTIONS
When labels are fine-grained, as in the Wikipedia dataset, making queries for the full label name is not possible. For example, consider the label ‘Fencers at the 1984 Summer Olympics’ from the Wikipedia categories; querying for it would link to the same category on Wikipedia itself. Instead, we break the label names into separate constituents using a dependency parser. Then for each constituent (‘Fencers’ and ‘Summer Olympics’) we scrape descriptions. No descriptions are scraped for constituents labelled by Named-Entity Recognition (‘1984’); their NER tag is used directly. Finally, all the scraped descriptions are concatenated in a proper format and passed to the output encoder.
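A minimal sketch of this constituent-splitting step is shown below. We approximate the dependency-parser split with spaCy noun chunks for brevity (assuming the `en_core_web_sm` model is installed); the actual pipeline may differ.

```python
import spacy

nlp = spacy.load("en_core_web_sm")              # assumes the small English model is installed

def label_constituents(label_name):
    """Split a fine-grained label into constituents to query separately."""
    doc = nlp(label_name)
    parts = []
    for chunk in doc.noun_chunks:               # e.g. 'Fencers', 'the 1984 Summer Olympics'
        ents = [e for e in doc.ents if e.start >= chunk.start and e.end <= chunk.end]
        if ents and all(e.label_ == "DATE" for e in ents):
            parts.append(("NER", "DATE"))       # no scraping for date-like constituents
        else:
            parts.append(("QUERY", chunk.text))  # scrape 'what is <constituent>'
    return parts
```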
B.4 DE-DUPLICATION
To ensure no overlap between our descriptions and input documents, we used SuffixArray-based exact match algorithm (Lee et al., 2022) with a minimum threshold of 60 characters and removed the matched descriptions.
C CONTRASTIVE LEARNING
During training, for both EURLex and AmazonCat, we randomly sample $1000 - |Y_i^+|$ negative labels for each input document. For Wikipedia, we precompute the top 1000 labels for each input based on TF-IDF scores, and then randomly sample $1000 - |Y_i^+|$ negative labels for each document. At inference time, we evaluate our models on all labels for both EURLex and AmazonCat. However, evaluation on the millions of labels in Wikipedia is not computationally tractable. Therefore, we evaluate only on the top 1000 labels predicted by TF-IDF for each input.
D FULL RESULTS FOR ZERO-SHOT CLASSIFICATION
D.1 SPLIT CREATION
For EURLex and AmazonCat, we follow the same procedure as detailed in GROOV (Simig et al., 2022). We randomly sample k labels from all the labels present in the train set, and consider the remaining labels as unseen. For EURLex we have roughly 25% (1,057 labels) and for AmazonCat roughly 50% (6,500 labels) as unseen. For Wikipedia, we use the standard splits as proposed in ZestXML (Gupta et al., 2021).
D.2 RESULTS
Table 5 contains complete results for ZS-XC across the three datasets, including additional baselines and metrics.
E FULL RESULTS FOR FEW-SHOT CLASSIFICATION
E.1 SPLIT CREATION
We iteratively select k instances of each label in the train documents. If a label has more than k documents associated with it, we drop the label from training (such labels are not sampled as either positives or negatives) for the extra documents. We refer to these labels as neutral labels for convenience.
E.2 MODELS
We use MACLR, GROOV, and LightXML as baselines. We initialize the weights from the corresponding pre-trained models in the GZSL setting. We use the default hyperparameters for the baselines and SEMSUP models. As discussed in the previous section, neutral labels are not provided at train time for the MACLR and GROOV baselines. However, since LightXML uses a final fully-connected classification layer, we cannot selectively remove them for a particular input. Therefore, we mask the loss for labels which are neutral to the documents. We additionally include scores for TF-IDF, but since it is a fully unsupervised method, only zero-shot numbers are included.
E.3 RESULTS
The full results for few-shot classification are present in Table 6.
F COMPUTATIONAL EFFICIENCY
Extreme Classification necessitates that the models scale well in terms of time and memory efficiency with labels at both train and test times. SEMSUP-XC uses contrastive learning for efficiency at train
time. During inference, SEMSUP-XC predicts on a top-1000 shortlist from TF-IDF, thereby achieving sub-linear time. Further, contextualized tokens for label descriptions are computed only once and stored in memory-mapped files, thus decreasing computational time significantly. Overall, our computational complexity can be represented by $O(T_{IE} \cdot N + T_{OE} \cdot |Y| + k \cdot N \cdot T_{lex})$, where $T_{IE}$ and $T_{OE}$ represent the time taken by the input encoder and output encoder respectively, $N$ is the total number of input documents, $|Y|$ is the number of all labels, $k$ indicates the shortlist size, and $T_{lex}$ denotes the time for the soft-lexical computation between contextualized tokens of documents and labels. In our experiments, $T_{IE} \cdot N \gg T_{OE} \cdot |Y|$ and $T_{IE} \approx T_{lex} \cdot k$. Thus, effectively, the computational complexity is approximately $O(T_{IE} \cdot N)$, which is comparable to other SOTA extreme classification methods.
Table 7 shows that SEMSUP-XC, when compared to other XC state-of-the-art baselines, is computationally efficient in terms of speed while demonstrating much better performance. In terms of storage, we utilize almost 4 times the storage of MACLR, as we need to store contextualized token embeddings of each label. However, the overall storage overhead (≈ 17.9 GB) is small in comparison to the significant improvement in performance at comparable speed.
G QUALITATIVE ANALYSIS
We now perform a qualitative analysis of SEMSUP-XC’s predictions, present representative examples in Table 8, and compare them to MACLR, which is the next best performing model. Correct predictions are in bold. In the first example, even from the short text in the document, SEMSUP-XC is able to figure out that it is not just a book, but a textbook. While MACLR predicts five labels which are all similar, SEMSUP-XC is able to predict diverse labels while getting the correct label within five predictions. In the second example, SEMSUP-XC smartly realizes the content of the document is a story and hence predicts literature & fiction, whereas MACLR tries to predict labels for the contents of the story instead. This shows the nuanced understanding of the label space that SEMSUP-XC has learned. The third example portrays the semantic understanding of SEMSUP-XC’s label space. While MACLR tries to predict labels like powered mixers because of the presence of the word mixer, SEMSUP-XC is able to understand the text at a high level and predict labels like studio recording equipment (even though the document has no explicit mention of the words studio, recording, or equipment). These qualitative examples show that SEMSUP-XC’s understanding of how different fine-grained classes are related and how documents refer to them is better than the baselines considered. We list more such examples in Appendix G.
2. What are the strengths and weaknesses of the proposed approach compared to previous methods?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the differences between Zero-shot (ZS) and Generalized Zero-shot (GZS)?
5. How does SEMSUP differ from retrieval setups, particularly in terms of indexing and memory requirements?
6. Are there any other related works in the IR community that the author should have considered in their comparison? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper tackled zero-shot extreme multi-label classification (ZS-XMC) problems, where labels in the test set can be unseen in the training set. The author proposed SEMSUP to solve ZS-XMC problems, which seems to be a slight tweak of COIL models widely used in retrieval tasks. The empirical performance of COIL seems encouraging compared to naive baselines in the XMC literature.
Strengths And Weaknesses
Strength
Good empirical performance compared to naive baselines
Weaknesses
The XMC setups are not clearly defined. For example, what’s the difference between Zero-shot (ZS) and Generalized Zero-shot (GZS)? For ZS, is it zero-shot learning (no unsupervised, hence no (query, label) pairs), or just zero-shot evaluation on labels that are unseen to the training set?
Technical novelty is rather limited, as the proposed method is a slight tweak over COIL. However, it is unclear how much of an advantage it improved over COIL.
When rich label descriptions are available, what is the difference between SEMSUP and retrieval setup? If it's a similar setup, the author should also compare with other related works in the IR community, especially the sparse retriever variants such as COIL, SPLADE, Co-SelfDistil, to name just a few.
Similar to COIL, SEMSUP requires indexing all contextualized token embeddings appearing in each label description. This may induce huge memory and storage costs for the indexing system. Unfortunately, this critical aspect is not mentioned in the paper.
Clarity, Quality, Novelty And Reproducibility
Clarity: Writing about the XMC problem setup is not clear
Quality: rather weak, missing important sparse retriever baselines in IR literature
Novelty: rather limited, as the core technique is highly similar to COIL
Reproducibility: No code provided, hence no data point to verify the reproducibility |
ICLR | Title
Impact of the Last Fully Connected Layer on Out-of-distribution Detection
Abstract
Out-of-distribution (OOD) detection, a task that aims to detect OOD data during deployment, has received lots of research attention recently, due to its importance for the safe deployment of deep models. In this task, a major problem is how to handle the overconfidence problem on OOD data. While this problem has been explored from several perspectives in previous works, such as the measure of OOD uncertainty and the activation function, the connection between the last fully connected (FC) layer and this overconfidence problem is still less explored. In this paper, we find that the weight of the last FC layer of the model trained on in-distribution (ID) data can be an important source of the overconfidence problem, and we propose a simple yet effective OOD detection method that assigns the weight of the last FC layer small values instead of using the original weight trained on ID data. We analyze in Sec. 5 why our proposed method makes the OOD data and the ID data more separable, and thus alleviates the overconfidence problem. Moreover, our proposed method can be flexibly applied to various off-the-shelf OOD detection methods. We show the effectiveness of our proposed method through extensive experiments on the ImageNet dataset, the CIFAR-10 dataset, and the CIFAR-100 dataset.
1 INTRODUCTION
Recently, deep models have achieved good performance in various computer vision tasks, but with a severe reliance on the assumption that the testing data comes from the same distribution as the training set (i.e., in-distribution (ID) test data) (Ben-David et al., 2010; Vapnik, 1991). This assumption, however, can be violated in the open world, where out-of-distribution (OOD) data can often be encountered, and these OOD data as inputs can lead models to produce unrelated predictions and result in severe consequences, especially in many safety-critical applications, such as autonomous driving (Filos et al., 2020) and medical diagnosis (Zadorozhny et al., 2021). Due to the severe implications of OOD data in these applications, the task of OOD detection, which aims to detect OOD data during deployment, is important and has received lots of research attention recently (Liang et al., 2017; Hendrycks & Gimpel, 2016; Hendrycks et al., 2019; Liu et al., 2020; Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021; Lee et al., 2018).
To detect OOD data, a naive idea is to classify the OOD data and the ID data based on the confidence of the model in the data input. However, as deep models can be overconfident in the OOD data inputs (Nguyen et al., 2015), it can be non-trivial to separate the OOD data and the ID data based on such a naive idea. To better cope with the overconfidence problem and make the OOD data and the ID data more separable, previous works have proposed methods from several perspectives, such as redefining the measure of OOD uncertainty (Liu et al., 2020; Wang et al., 2022; Hendrycks et al., 2019) and rectifying the activation function (Sun et al., 2021). However, the connection between the last fully connected (FC) layer and the overconfidence problem is still less explored.
In this work, we argue that the weight of the last FC layer of the model trained on ID data can be an important source of the overconfidence problem. To support this argument, as a preliminary of our method, in Fig. 1, we use a ResNet-50 (He et al., 2016) model trained on ImageNet and conduct OOD detection experiments on various other datasets including iNaturalist, SUN, Places365, and Textures. Specifically, we compare the baseline that uses the original weight of the last FC layer (original weight) with a variant that assigns the weight of the last FC layer simply with ones (identity weight). As illustrated, compared to the baseline, this variant consistently reduces the false positive rate (FPR95) over various datasets. This demonstrates that the weight of the last FC layer of the model trained on ID data is not the optimal weight for OOD detection, and there can exist a weight that is more suitable.
Inspired by the above argument, in this work, to better cope with the overconfidence problem, we aim to assign the last FC layer of the model a new weight so that the OOD data and the ID data can be made more separable. We find that this can be achieved by simply assigning small values (e.g., 0.01) to the weight of the last FC layer of the model. To theoretically show the effectiveness of our method, in Sec. 5, we first analyze why assigning constant values (e.g., ones) to the weight of the last FC layer can separate OOD data and ID data; we then explain why assigning the weight of the last FC layer small values can make the OOD data and ID data even more separable. We also want to point out that, as the original weight of the last FC layer can still be used for the original task, via using our method, the classification accuracy on the original task is completely preserved.
Also, note that, as we just need to assign the last FC layer of the model with small values, our method is simple yet effective and needs neither a retraining process of the model nor additional OOD data. Besides, with only the weight of the last FC layer modified, our method can also be flexibly applied to various off-the-shelf OOD detection methods. We experiment our method with various OOD detection methods and achieve consistent improvement in OOD detection performance.
The contributions of our work are summarized as follows.
• From the novel perspective of the last FC layer, we propose a simple and effective OOD detection method to detect OOD data by simply assigning the weight of the last FC layer with small values.
• We perform theoretical analysis (in Sec. 5) on why assigning constant values (e.g., ones) to the weight of the last FC layer can separate OOD data and ID data. Moreover, we also analyze why a small value can make the OOD data and ID data even more separable. Our method thus can improve the OOD detection performance.
• Our method achieves significant OOD detection performance improvement when applied to various OOD detection methods on various evaluation benchmarks (Deng et al., 2009; Krizhevsky et al., 2009).
The rest of the paper is organized as follows. In Sec. 2, we discuss the related works of our paper. In Sec. 3, we provide the background of OOD detection. After that, we present our method in Sec. 4,
the analysis of our method in Sec. 5, and experimental results in Sec. 6. Finally, we conclude our paper in Sec. 7.
2 RELATED WORK
OOD Detection. Being an important task that helps detect OOD data during deployment, OOD detection has received lots of research attention, and most OOD detection methods fall into three categories: methods that need retraining (DeVries & Taylor, 2018; Huang & Li, 2021; Zaeemzadeh et al., 2021), methods that need extra OOD data (Hsu et al., 2020; Hendrycks et al., 2018; Dhamija et al., 2018; Ming et al., 2022; Lee et al., 2017; Yu & Aizawa, 2019; Wu et al., 2021), and post-hoc methods. Among methods that need retraining, an extra branch is introduced by (DeVries & Taylor, 2018), MOS (Huang & Li, 2021) makes use of a group-based feature space, and (Zaeemzadeh et al., 2021) incorporate angular distance into their method. In the category of methods that need extra OOD data, (Hendrycks et al., 2018) were the first to propose this category of method, (Dhamija et al., 2018) proposed to regularize extra image data from different backgrounds, and (Lee et al., 2017) proposed to generate OOD data on the boundary of OOD data and ID data. Besides these two categories, the category of post-hoc methods has also attracted a lot of attention recently since such methods need neither retraining nor extra OOD data.
In the category of post-hoc methods, (Hendrycks & Gimpel, 2016) observe that a neural model tends to produce higher softmax values for ID data and lower ones for OOD data. Therefore, they introduce a score function, the maximum softmax probability (MSP), to achieve OOD detection. To improve the OOD detection performance, (Liang et al., 2017) put forward ODIN, which enlarges the gap between ID and OOD data by using large temperature scaling and adding perturbations to inputs. (Lee et al., 2018) use the features and the class-wise centroids to calculate the Mahalanobis distance. The energy-based score function is introduced by (Liu et al., 2020). Such a function gives high energy to the OOD data and low energy to the ID data. (Sun et al., 2021) exploit the characteristic responses of the neural network to OOD data and improve the OOD detection performance by removing abnormal activation values.
Different from the existing post-hoc OOD detection methods, this paper takes a different view of the OOD detection problem. Specifically, we propose to connect the last FC layer and the overconfidence problem, and we propose to replace the original weight of the last FC layer with small values instead.
The Last FC Layer. The last FC layer, an important component that appears in many network structures, has been studied in various areas (Basha et al., 2020a;b; Zhao et al., 2020; Zhou et al., 2020) over the years, such as transfer learning (Basha et al., 2020a), continual learning (Zhao et al., 2020), and the long-tail problem (Zhou et al., 2020). In this paper, from a novel perspective, we build a connection between the last FC layer and the overconfidence problem in OOD detection. Specifically, we find that the weight of the last FC layer trained on ID data can be an important source of the overconfidence problem and propose to assign the weight of the last FC layer small values instead.
3 BACKGROUND
Following most previous OOD detection works (Liang et al., 2017; Hendrycks & Gimpel, 2016; Hendrycks et al., 2019; Liu et al., 2020; Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021; Lee et al., 2018), this paper considers OOD detection in image classification. Denote $\mathcal{D}^{in} := \mathcal{X}^{in} \times \mathcal{Y}^{in}$ drawn from $P^{in}$ the in-distribution dataset, where $P^{in}$ denotes the in-distribution, $\mathcal{X}^{in}$ denotes the in-distribution input space, and $\mathcal{Y}^{in} = \{1, 2, \dots, C\}$ denotes the in-distribution label space corresponding to $\mathcal{X}^{in}$. Similarly, denote $\mathcal{D}^{out} := \mathcal{X}^{out} \times \mathcal{Y}^{out}$ the out-of-distribution dataset, where $\mathcal{X}^{out}$ denotes the out-of-distribution input space, and $\mathcal{Y}^{out}$ denotes the corresponding out-of-distribution label space. Moreover, denote $\mathcal{F} : \mathcal{X} \to \mathcal{Y}$ an image classifier trained on $\mathcal{D}^{in}$. OOD detection can then be treated as a binary classification problem to distinguish whether the input data $\langle x, y \rangle$ belongs to $\mathcal{D}^{in}$ or $\mathcal{D}^{out}$, where $x$ is an image, and $y$ is its corresponding ground-truth label. In other words, given a certain neural network $\mathcal{F}$ and a random test input $x$, the goal of OOD
detection is to define a score function $G(x; \mathcal{F})$ such that:
$$G(x; \mathcal{F}) = \begin{cases} 1, & \text{if } x \in \mathcal{D}^{in} \\ 0, & \text{if } x \in \mathcal{D}^{out} \end{cases} \tag{1}$$
where $\mathcal{D}^{in} \cap \mathcal{D}^{out} = \mathcal{X}^{in} \cap \mathcal{X}^{out} = \mathcal{Y}^{in} \cap \mathcal{Y}^{out} = \emptyset$. Note that $\mathcal{D}^{out}$ is inaccessible during the training stage of $\mathcal{F}$.
4 METHOD
In this section, we introduce our proposed OOD detection method. The idea behind our method is to better cope with the overconfidence problem by replacing the last FC layer of the model with a new linear layer filled with a small constant value. We consider the input $x$, the last FC layer $f$, and the well-trained neural network without the last FC layer $g$. We denote a $d$-dimensional feature vector from the penultimate layer of the model as $(z_1, z_2, \dots, z_d) = z := g(x) \in \mathbb{R}^d$, and the output of the model as $f(z)$, where the matrix $f \in \mathbb{R}^{d \times K}$ and $K$ is the number of classes. Following most of the recent OOD detection methods (Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; 2021; Tonin et al., 2021; Du et al., 2022; Elflein et al., 2021; Wang et al., 2020; Joshi et al., 2022; Chen et al., 2022; Ouyang et al., 2021; Ming et al., 2022), we first define the original measure of OOD uncertainty $S_{ori}$, before incorporating our proposed method, as:
$$S_{ori} = \log \sum_{i=1}^{K} e^{f_i(z)} \tag{2}$$
where $f_i$ indicates the $i$-th column of the matrix $f$. Note that a larger $S_{ori}$ indicates more confidence that $x$ belongs to the in-distribution.
We then describe what the measure of OOD uncertainty $S$ looks like after incorporating our proposed method. Specifically, we denote by $f' \in \mathbb{R}^{d \times K}$ the matrix filled with a value $\alpha$, and we replace $f$ with $f'$ to compute $S$. Since all entries of $f'$ are the same, all of its columns $f'_i \in \mathbb{R}^d$ are identical, i.e., $f'_1 = f'_2 = \dots = f'_K$. Therefore, $S$ can be written as:
$$\begin{aligned}
S &= \log \sum_{i=1}^{K} e^{f'_i(z)} \\
&= \log \left( e^{f'_1(z)} + e^{f'_2(z)} + \dots + e^{f'_K(z)} \right) \\
&= \log K e^{f'_1(z)} && \text{where } f'_1 = \alpha J_{d,1} \\
&= \log K e^{{f'_1}^{\top} z} && \text{where } f'_1(z) = {f'_1}^{\top} z \\
&= \log K e^{\alpha \sum_{i=1}^{d} z_i} && \text{where } {f'_1}^{\top} z = \alpha J_{d,1}^{\top} z = \alpha \sum_{i=1}^{d} z_i \\
&= \log K e^{d \alpha \bar{z}} && \text{where } \bar{z} = \frac{1}{d} \sum_{i=1}^{d} z_i \\
&= d \alpha \bar{z} + \log K
\end{aligned} \tag{3}$$
where $\bar{z} := \mathbb{E}[z]$ and $J_{d,1}$ indicates a $d \times 1$ all-ones matrix. To perform OOD detection using our proposed method, we further define the score function $G(x; \mathcal{F})$ as:
$$G(x; \mathcal{F}) = \begin{cases} 1, & \text{if } S \geq \lambda \\ 0, & \text{if } S < \lambda \end{cases} \tag{4}$$
where $\lambda$ is a threshold. In our experiments, we set $\lambda$ to be a value such that 95% of the ID data is detected correctly, which is the same setting followed by most previous OOD detection methods (Hendrycks & Gimpel, 2016; Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; Liang et al., 2017; Hendrycks et al., 2019).
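A minimal PyTorch sketch of the resulting detector is shown below, where `backbone` denotes the network up to the penultimate layer ($g$ in our notation); this illustrates Eqs. 3–4 rather than reproducing the full evaluation code.

```python
import math
import torch

@torch.no_grad()
def ood_score(backbone, x, num_classes, alpha=0.01):
    """S = d * alpha * mean(z) + log K (Eq. 3), computed from penultimate features.

    backbone: the trained network without its last FC layer (g in the text).
    Larger scores indicate in-distribution inputs.
    """
    z = backbone(x)                             # (batch, d) penultimate-layer features
    d = z.shape[-1]
    return d * alpha * z.mean(dim=-1) + math.log(num_classes)

def detect(scores, lam):
    """G(x; F) of Eq. 4: 1 = in-distribution, 0 = out-of-distribution."""
    return (scores >= lam).long()
```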
5 ANALYSIS
Below, we perform theoretical analysis to show the effectiveness of our method. Specifically, we first explain why replacing the trained weight of the last FC layer with a constant value α separates the distributions of ID and OOD data. After that, we further explain why a smaller α can make the ID and OOD data to be more separable.
5.1 EFFECTIVENESS OF ASSIGNING THE LAST FC LAYER WITH A CONSTANT VALUE
In this section, we analyze why assigning the last FC layer a constant value $\alpha$ can separate ID and OOD data. Following the settings in Sec. 3, we denote the neural network trained on the ID data as $\mathcal{F}$, and the output of its penultimate layer as $z = (z_1, z_2, \dots, z_d) \in \mathbb{R}^d$. Then we can rewrite the score $S$ produced by our method as:
$$\begin{aligned}
S &= \log \sum_{i=1}^{K} e^{f'_i(z)} = \log K e^{d \alpha \bar{z}} \\
&= d \alpha \bar{z} + \log K \;\propto\; \bar{z} = \mathbb{E}[z]
\end{aligned} \tag{5}$$
We denote the $z$ corresponding to the in-distribution data as $z^{in} = (z^{in}_1, z^{in}_2, \dots, z^{in}_d)$. Following the same assumption from (Ming et al., 2022; Sun et al., 2021; 2022), we assume that each $z^{in}_i$ obeys the rectified Gaussian distribution, i.e., $z^{in}_i \sim \max(0, \mathcal{N}(\mu, \sigma^2_{in}))$. The density of the underlying Gaussian variable $x$ is:
$$p(x) = \frac{1}{\sigma_{in}\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2_{in}}} \tag{6}$$
We denote the corresponding expectation of $z^{in}$ as $\mathbb{E}_{in}$, and it can be written as:
$$\begin{aligned}
\mathbb{E}_{in}[z] &= \int_{0}^{+\infty} \frac{x}{\sigma_{in}\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2_{in}}}\, dx \\
&= \frac{\mu}{\sqrt{2\pi}} \int_{-\frac{\mu}{\sigma_{in}}}^{+\infty} e^{-\frac{v^2}{2}}\, dv + \frac{\sigma_{in}}{\sqrt{2\pi}} \int_{-\frac{\mu}{\sigma_{in}}}^{+\infty} v\, e^{-\frac{v^2}{2}}\, dv && \text{where } v = \frac{x-\mu}{\sigma_{in}} \\
&= \mu \left[ 1 - \Phi\!\left(\frac{-\mu}{\sigma_{in}}\right) \right] + \frac{\sigma_{in}}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{-\mu}{\sigma_{in}}\right)^2} \\
&= \mu \left[ 1 - \Phi\!\left(\frac{-\mu}{\sigma_{in}}\right) \right] + \sigma_{in}\, \varphi\!\left(\frac{-\mu}{\sigma_{in}}\right)
\end{aligned} \tag{7}$$
where $\Phi$ and $\varphi$ are the cumulative distribution function (cdf) and probability density function (pdf) of the standard normal distribution, respectively. We then model the expectation corresponding to the out-of-distribution data. Following the observation from (Sun et al., 2021) that the output of the penultimate layer of the network corresponding to the OOD data, $z^{out}$, is positively skewed, we can denote $z^{out} = (z^{out}_1, z^{out}_2, \dots, z^{out}_d)$ and model each $z^{out}_i \sim \text{ESN}(\mu, \sigma^2_{out}, \epsilon)$, where $\mu$, $\sigma^2_{out}$, and $\epsilon$ indicate the mean, the deviation, and the degree of skewness of the ESN (epsilon-skew-normal) distribution. Therefore, following the theorem in (Mudholkar & Hutson, 2000), the expectation of $z^{out}$, $\mathbb{E}_{out}[z]$, can be modeled as:
$$\mathbb{E}_{out}[z] = \mu - (1+\epsilon)\,\Phi\!\left(\frac{-\mu}{(1+\epsilon)\sigma_{out}}\right)\mu + \left[ (1+\epsilon)^2\, \varphi\!\left(\frac{-\mu}{(1+\epsilon)\sigma_{out}}\right) - \frac{4\epsilon}{\sqrt{2\pi}} \right] \sigma_{out} \tag{8}$$
Therefore, the difference of $\mathbb{E}_{in}[z]$ and $\mathbb{E}_{out}[z]$ is:
$$\begin{aligned}
\Delta &= \mathbb{E}[z^{in}] - \mathbb{E}[z^{out}] \\
&= \mu\left[1 - \Phi\!\left(\frac{-\mu}{\sigma_{in}}\right)\right] + \sigma_{in}\varphi\!\left(\frac{-\mu}{\sigma_{in}}\right) - \mu + (1+\epsilon)\Phi\!\left(\frac{-\mu}{(1+\epsilon)\sigma_{out}}\right)\mu - \left[(1+\epsilon)^2\varphi\!\left(\frac{-\mu}{(1+\epsilon)\sigma_{out}}\right) - \frac{4\epsilon}{\sqrt{2\pi}}\right]\sigma_{out} \\
&= -\left[(1+\epsilon)^2\varphi\!\left(\frac{-\mu}{(1+\epsilon)\sigma_{out}}\right) - \frac{4\epsilon}{\sqrt{2\pi}}\right]\sigma_{out} - \left[\Phi\!\left(\frac{-\mu}{\sigma_{in}}\right) - (1+\epsilon)\Phi\!\left(\frac{-\mu}{(1+\epsilon)\sigma_{out}}\right)\right]\mu + \varphi\!\left(\frac{-\mu}{\sigma_{in}}\right)\sigma_{in}
\end{aligned} \tag{9}$$
Given $\mu = 1.0$ and $\epsilon = -0.5$, we plot $\Delta$ in Fig. 2 and find that it is greater than 0, i.e., $S_{in} > S_{out}$. Therefore, we conclude that our method produces greater confidence scores for in-distribution data than for out-of-distribution data.
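The closed form in Eq. 7 can be sanity-checked numerically; the following NumPy/SciPy sketch compares it against a Monte Carlo estimate of the rectified Gaussian mean (the sample size and seed are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

def rectified_gaussian_mean(mu, sigma):
    """Closed form of Eq. 7: E[max(0, N(mu, sigma^2))]."""
    return mu * (1 - norm.cdf(-mu / sigma)) + sigma * norm.pdf(-mu / sigma)

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
samples = np.maximum(0.0, rng.normal(mu, sigma, size=1_000_000))
print(samples.mean(), rectified_gaussian_mean(mu, sigma))  # the two values should agree
```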
5.2 EFFECTIVENESS OF A SMALL α
In this section, we further explain why a smaller $\alpha$ can make the ID and OOD data more separable. We denote the difference of $S_{in}$ and $S_{out}$, normalized by $\|\alpha\|^2$, as:
$$\frac{S_{in} - S_{out}}{\|\alpha\|^2} = \frac{1}{\alpha^2}\left( \log \sum_{i=1}^{K} e^{f'_i(z^{in})} - \log \sum_{i=1}^{K} e^{f'_i(z^{out})} \right) = \frac{d}{\alpha}\left( \mathbb{E}[z^{in}] - \mathbb{E}[z^{out}] \right) \tag{10}$$
As shown in Eq. 10, to make the ID and OOD data more separable, we hope to make the normalized difference of $S_{in}$ and $S_{out}$ larger. Recall that $\mathbb{E}[z^{in}] - \mathbb{E}[z^{out}]$ is a positive number, as discussed above. Therefore, a smaller $\alpha$ makes the normalized difference of $S_{in}$ and $S_{out}$ larger, and thus makes the ID and OOD data more separable.
6 EXPERIMENTS
In this section, we evaluate the effectiveness of our method on ImageNet and CIFAR OOD detection benchmarks. All experiments are conducted on NVIDIA Tesla V100 GPUs.
6.1 IMAGENET BENCHMARK
Setup. We use ReAct (Sun et al., 2021) as a baseline of our method and follow it. We use both a ResNet50 (He et al., 2016) model and a MobileNet-v2 (Sandler et al., 2018) model pre-trained on ImageNet (Deng et al., 2009) as the image classifier. Note that for fair comparison, we directly use the models trained by (Sun et al., 2021). Moreover, we set α in Eq. 3 to be a small number 0.01 in our experiments.
Evaluation Metric. We evaluate our OOD detection method on the following two common metrics: (1) FPR95 measures the FPR (False Positive Rate) of the OOD data when the recall (Positive Rate of the ID data) is at 95%. Note that a lower FPR95 indicates better performance of OOD detection. (2) AUROC measures the area under the TPR (True Positive Rate) and FPR (False Positive Rate). Note that a higher AUROC indicates better performance of OOD detection.
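For reference, a minimal sketch of these two metrics, assuming higher scores indicate ID data, is given below (using scikit-learn for AUROC):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: OOD false-positive rate at the threshold keeping 95% of ID inputs."""
    lam = np.percentile(id_scores, 5)           # 95% of ID scores lie at or above lam
    return float((ood_scores >= lam).mean())

def auroc(id_scores, ood_scores):
    """AUROC with ID inputs treated as the positive class."""
    y = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    s = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(y, s)
```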
Dataset. In this benchmark, we consider ImageNet (Deng et al., 2009) as the ID dataset, and following (Sun et al., 2021; Hsu et al., 2020; Huang & Li, 2021), we evaluate our method on four commonly-used OOD datasets, including iNaturalist, SUN, Places365, and Textures. Note that all four datasets have non-overlapping classes w.r.t. ImageNet. Below, we introduce each of them in more detail: (1) iNaturalist (Van Horn et al., 2018) contains 5,000 categories of plant and animal images, and the resolution of each image is 800 × 800. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 110 classes that are non-overlapping with the classes of ImageNet are first picked, and 10,000 images from these 110 classes are then randomly selected. (2) SUN (Xiao et al., 2010) contains 397 classes of natural images, and the resolution of each image is larger than 200 × 200. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 50 classes that are non-overlapping with the classes of ImageNet are first picked, and 10,000 images from these 50 classes are then randomly selected. (3) Places (Zhou et al., 2017) contains 205 categories of scene images whose resolutions are 512 × 512. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 50 classes that are non-overlapping with the classes of ImageNet are first picked, and 10,000 images from these 50 classes are then randomly selected. (4) Textures (Cimpoi et al., 2014) contains 47 classes of textural images whose resolutions are either 300 × 300 or 640 × 640. Following (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), the whole dataset with 5,640 images is used for OOD detection evaluation.
Results. In Tab. 1, we compare our method with existing post-hoc OOD detection methods on all four OOD datasets. As shown, our method achieves the best average result among common post-hoc OOD detection methods on both ResNet50 and MobileNet-V2, which demonstrates its effectiveness.
6.2 CIFAR BENCHMARK
Setup. We use ReAct (Sun et al., 2021) as the baseline of our method and follow its experimental protocol. We use a ResNet18 (He et al., 2016) model as the image classifier for both CIFAR-10 and CIFAR-100. Note that, for a fair comparison, we directly use the models trained by Sun et al. (2021). Moreover, we set α in Eq. 3 to a small value, 0.01, in our experiments.
Evaluation Metric & Dataset. Following (Hendrycks & Gimpel, 2016; Liu et al., 2020; Liang et al., 2017; Sun et al., 2021; Huang et al., 2021; Lee et al., 2018), we use the FPR95 and AUROC metrics described in Sec. 6.1 to evaluate our OOD detection method. In this benchmark, we use CIFAR-10 and CIFAR-100 as the ID datasets (Krizhevsky et al., 2009). With respect to the OOD datasets, following (Liu et al., 2020; Sun et al., 2021; Huang et al., 2021; Cimpoi et al., 2014), besides the Places and Textures datasets introduced above, we also evaluate our method on three other OOD datasets: iSUN (Xu et al., 2015), LSUN (Yu et al., 2015), and SVHN (Netzer et al., 2011). Below, we introduce each of them in more detail: (1) The LSUN dataset contains 10,000 images of 10 scene categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), to conduct OOD detection on this dataset, we randomly crop its images to size 32×32. Besides, following the same works, we also conduct OOD detection on a variant of this dataset (LSUN Resize) obtained by resizing the LSUN images to size 32×32. (2) The iSUN dataset is sampled from the SUN (Xiao et al., 2010) dataset and contains 20,608 images of 397 categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), the whole dataset is used for OOD detection evaluation. (3) The SVHN dataset contains 26,032 test images of 10 categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), we use all 26,032 images for OOD detection evaluation.
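The two LSUN preprocessing variants described above correspond to the following torchvision transforms (a sketch; normalization statistics are omitted):

```python
import torchvision.transforms as T

lsun_crop = T.Compose([T.RandomCrop(32), T.ToTensor()])      # LSUN (random crop)
lsun_resize = T.Compose([T.Resize((32, 32)), T.ToTensor()])  # LSUN Resize
```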
Results. In Tab. 2, we compare our method with existing post-hoc OOD detection methods on all six OOD datasets. As shown, our method achieves the best average result among common post-hoc OOD detection methods, which demonstrates its effectiveness.
6.3 ABLATION STUDIES
Effect of α. In the previous section, we analyzed the effect of α on OOD detection performance from a mathematical point of view and concluded that a smaller α has a positive effect. In this subsection, we show the impact of α experimentally. We randomly sample α from a continuous uniform distribution between 0 and 1, i.e., α ∈ U[0,1], and then evaluate FPR95 and AUROC under the sampled values of α on the iNaturalist, SUN, Places, and Textures OOD datasets (Van Horn et al., 2018; Xiao et al., 2010; Zhou et al., 2017; Cimpoi et al., 2014), with ResNet50 (He et al., 2016) and MobileNet-V2 (Sandler et al., 2018) trained on ImageNet (Deng et al., 2009). The results are shown in Fig. 3. As α decreases, a larger AUROC and a smaller FPR95 are consistently achieved across the various OOD datasets, demonstrating the effectiveness of our method.
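A sketch of this sweep is given below, assuming the penultimate features of the ID and OOD test sets have been cached to disk; the file names and cache format are assumptions, standing in for whatever feature-extraction pipeline precedes the sweep.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Assumed feature dumps: per-image penultimate activations, shape (N, d).
Z_id = np.load("features_imagenet_val.npy")
Z_ood = np.load("features_inaturalist.npy")
d, K = Z_id.shape[1], 1000

rng = np.random.default_rng(0)
for alpha in np.sort(rng.uniform(0.0, 1.0, size=10)):
    s_id = d * alpha * Z_id.mean(axis=1) + np.log(K)    # Eq. 3 per image
    s_ood = d * alpha * Z_ood.mean(axis=1) + np.log(K)
    labels = np.r_[np.ones(len(s_id)), np.zeros(len(s_ood))]
    au = roc_auc_score(labels, np.r_[s_id, s_ood])
    fpr95 = float(np.mean(s_ood >= np.percentile(s_id, 5)))
    print(f"alpha={alpha:.3f}  AUROC={au:.4f}  FPR95={fpr95:.4f}")
```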
Effect of different baseline methods. To validate the general effectiveness of our proposed method, we apply it to various post-hoc OOD detection methods, including MSP, energy, ReAct, ViM, MaxLogit, and KL-Matching (Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; Hendrycks et al., 2019). As shown in Tab. 3, our method achieves consistent performance improvements when applied to these different post-hoc OOD detection methods. This demonstrates that our method can be flexibly applied to various post-hoc OOD detection methods to improve their performance.
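For instance, two of the logit-based baselines above can be re-evaluated on the logits of the constant-filled layer with the following sketch; how each baseline's remaining components (e.g., ReAct's activation clipping) are combined follows the respective papers, and is not reproduced here.

```python
import torch

@torch.no_grad()
def energy_score(logits, T=1.0):
    # Energy-based confidence of Liu et al. (2020), with the convention of Eq. 2
    # (larger = more ID-like), computed on the constant-filled logits.
    return T * torch.logsumexp(logits / T, dim=1)

@torch.no_grad()
def max_logit_score(logits):
    # MaxLogit score of Hendrycks et al. (2019).
    return logits.max(dim=1).values
```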
7 CONCLUSION
In this paper, we present a simple yet effective OOD detection method, which replaces the trained weight of the last FC layer with a small constant value. We theoretically show that the proposed method makes the ID data and OOD data more separable, and thus better copes with the overconfidence problem. We present two ablation studies showing that our method is compatible with existing OOD detection methods and achieves consistent performance improvements. Our method achieves superior performance on the ImageNet and CIFAR OOD detection benchmarks.

1. What is the focus and contribution of the paper regarding OOD detection?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its performance and limitations?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some empirical questions that can be considered to better understand the effectiveness and limitations of the proposed approach under various settings of OOD?
5. How does the proposed approach compare to other methods such as model calibration and decomposition representations?

Summary Of The Paper
In this work, the authors propose a simple method for OOD detection based on assigning small values to the weights of the last FC layer. They demonstrate strong performance on OOD benchmarks.
Strengths And Weaknesses
Strengths
• The paper is well written, and the experiments are also presented well. It shows good performance on standard OOD detection benchmarks, especially a reduction in false positive rate at on-par or better AUROC.

Weaknesses
• Currently it is unclear why the method works, and the limitations of the proposed approach are also unclear. It needs evaluation on near-OOD or open-set data in at least one setting, such as CIFAR-100 as in-distribution and CIFAR-10 as OOD, to understand the behavior of such an approach. Also, illustrate the skewness of the OOD distributions for this near-OOD case. Consider including the relative Mahalanobis distance [3] as another strong baseline for OOD detection, especially for near-OOD.
Clarity, Quality, Novelty And Reproducibility
First, it is not completely clear what the motivation/justification is for small weights on the last layer. Is it the skewness of OOD data compared to in-distribution data, with this property being exploited in the proposed approach?
If the key intuition is to address the over-confidence of the last layer, it might be worthwhile to compare against, and provide insights relative to, work on model calibration. If the model is calibrated, then why should the proposed approach perform better than a calibrated model, albeit one that is post-hoc and needs additional data?
Also, it is not very clear why the current approach works based on the presented experiments; there are various empirical questions that can be considered to better understand why the method works, as well as the limitations and effectiveness of the proposed approach under various OOD settings.
A few suggested experiments to better understand the approach:
1. By assigning small values, what is the relative difference in the predictive entropy distributions compared to the default last layer, with and without a calibration technique such as temperature scaling?
2. To understand the effectiveness of the current approach: by using more uniform and/or small weights for the last FC layer, are we going from a classification-aware representation to a relatively less class-sensitive, or more class-irrelevant/agnostic, one? Is it because of background pixels that OOD detection improves as we move towards class-irrelevant features?
a. It might be worthwhile to report some likelihood-, relative-Mahalanobis-distance-, or cosine-distance-based similarity metric in the representation space of the penultimate layer, with and without being class-agnostic.
b. Also, consider this work [1], which decomposes representations and uses the iCE loss [2] to split representations into discriminative and non-discriminative features.
c. It might be worthwhile to observe the correlation in behavior between the decomposed representations and the proposed small-weight approach.
References:
1. Huang et al. Decomposing Representations for Deterministic Uncertainty Estimation (arXiv:2112.00856, BDL Workshop NeurIPS 2021)
2. J. H. Jacobsen et al. Excessive Invariance Causes Adversarial Vulnerability (ICLR 2019)
3. J. Ren et al. A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection (arXiv:2106.09022)
1. What is the focus of the paper regarding connecting out-of-distribution detection with classification networks?
2. What are the strengths and weaknesses of the proposed method, particularly concerning its straightforward idea and formalization?
3. Do you have any concerns regarding the design and theoretical framework of the solution?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the experimental analysis, such as the selection of test sets and the combination of methods?

Summary Of The Paper
The paper reports on a study connecting out-of-distribution (OOD) detection with the last fully connected layer of classification networks. The authors propose that OOD detection can be performed by assigning a constant value to the weights of the last fully connected layer of a network trained on in-distribution (ID) data. In this way, they claim to address the problem of overconfidence on OOD data. The method is the result of a straightforward idea and formalization, which is appreciated. The paper reports on experiments with OOD datasets (e.g., SUN, Places, SVHN, etc.) using networks trained on ImageNet and CIFAR as ID data.
Strengths And Weaknesses
Strengths
the idea and implementation of the method are straightforward (at the same time, this is also a weakness, as it is not backed thoroughly with motivations and theory)
results look good (although presentation is not that clear)
Weaknesses
the hypothesis and design of the solution lack clear motivations, as do the several choices made to construct the theoretical framework and explanations.
several choices are presented as arbitrary: the values of \lambda, \mu, and \eps in Eq. 9 and the normalization factor \alpha in Eq. 10 are not clearly backed, or are just assumed to be as they are without substantial motivation.
the experimental analysis is not clear: details about the construction of the test sets are vague (e.g., what classes are selected? how?). Why are only a subset of the methods from Table 2 included in Table 3? Interpretations of why the combination of methods works (and in which cases they are complementary) are also not elaborated on.
the link to the overconfidence problem in OOD detection is claimed in the abstract and introduction but not explicitly and completely addressed in the remainder of the paper.
in several parts the text is not clear, requiring substantial revision.
Clarity, Quality, Novelty And Reproducibility
The paper lacks clarity of explanation in several places, as mentioned among the weaknesses. Many assumptions are not justified or properly motivated, which makes the construction of the method somewhat questionable, and mostly arbitrary.
The quality thus suffers from this lack of clarity, and the main intuitions and observations behind the functioning of the method are not presented well. One possible interpretation, which might be worth discussing, is that by arbitrarily changing the weights learned during training to a constant value, the focus of the network is deliberately moved to a different part of the feature space and scaled to a uniform distribution. If that is the case, what are the implications, and how reliable would this method be? Moreover, although I appreciate that the formulation of the method is very simple, the novelty is limited and not completely backed with substantial justification and motivation.
No details are provided about the classes chosen for the experiments, making reproducibility impossible. |
ICLR | Title
Impact of the Last Fully Connected Layer on Out-of-distribution Detection
Abstract
Out-of-distribution (OOD) detection, a task that aims to detect OOD data during deployment, has received lots of research attention recently, due to its importance for the safe deployment of deep models. In this task, a major problem is how to handle the overconfidence problem in OOD data. While this problem has been explored from several perspectives in previous works, such as the measure of OOD uncertainty and the activation function, the connection between the last fully connected (FC) layer and this overconfidence problem is still less explored. In this paper, we find that the weight of the last FC layer of the model trained on indistribution (ID) data can be an important source of the overconfidence problem, and we propose a simple yet effective OOD detection method to assign the weight of the last FC layer with small values instead of using the original weight trained on ID data. We analyze in Sec. 5 that our proposed method can make the OOD data and the ID data to be more separable, and thus alleviate the overconfidence problem. Moreover, our proposed method can be flexibly applied on various offthe-shelf OOD detection methods. We show the effectiveness of our proposed method through extensive experiments on the ImageNet dataset, the CIFAR-10 dataset, and the CIFAR-100 dataset.
1 INTRODUCTION
Recently, deep models have achieved good performance in various computer vision tasks, but with a severe reliance on the assumption that the testing data comes from the same distribution as the training set (i.e., in-distribution (ID) test data) (Ben-David et al., 2010; Vapnik, 1991). This assumption, however, can be violated in the open world where out-of-distribution (OOD) data can be often encountered, and these OOD data as inputs can lead models to produce unrelated predictions and result in severe consequences, especially in many safety-critical applications, such as autonomous driving (Filos et al., 2020) and medical diagnosis (Zadorozhny et al., 2021). Due to the severe implications of OOD data in these applications, the task of OOD detection, which aims to detect OOD data during deployment, is important and has received lots of research attention recently (Liang et al., 2017; Hendrycks & Gimpel, 2016; Hendrycks et al., 2019; Liu et al., 2020; Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021; Lee et al., 2018).
To detect OOD data, a naive idea is to classify the OOD data and the ID data based on the confidence of the model in the data input. However, as deep models can be overconfident in the OOD data inputs (Nguyen et al., 2015), it can be non-trivial to separate the OOD data and the ID data based on such a naive idea. To better cope with the overconfidence problem and make the OOD data and the ID data more separable, previous works have proposed methods from several perspectives, such as redefining the measure of OOD uncertainty (Liu et al., 2020; Wang et al., 2022; Hendrycks et al., 2019) and rectifying the activation function (Sun et al., 2021). However, the connection between the last fully connected (FC) layer and the overconfidence problem is still less explored.
In this work, we argue that the weight of the last FC layer of the model trained on ID data can be an important source of the overconfidence problem. To justify our aforementioned argumentation, as a preliminary of our method, in Fig. 1, we use a ResNet-50 (He et al., 2016) model trained on ImageNet and conduct OOD detection experiments on various other datasets including iNaturalist, SUN, Places365, and Textures. Specifically, we compare the baseline that uses the original weight of the last FC layer (original weight) with a variant that assigns the weight of the last FC layer sim-
ply with ones (identity weight). As illustrated, compared to the baseline, this variant consistently reduces the false positive rate (FPR95) over various datasets. This demonstrates that the weight of the last FC layer of the model trained on ID data is not the optimal weight for OOD detection, and there can exist a weight that is more suitable.
Inspired by the above argumentation, in this work, to better cope with the overconfidence problem, we aim to assign the last FC layer of the model with a new weight so that the OOD data and the ID data can be made more separable. We find that this can be achieved via simply assigning small values (e.g. 0.01) to the weight of the last FC layer of the model. To theoretically show the effectiveness of our method, in Sec. 5, we first analyze why assigning constant values (e.g., ones) to the weight of the last FC layer can separate OOD data and ID data; we then explain why assign the weight of the last FC layer with small value can even make OOD data and ID data to be more separable. We also want to point out that, as the original weight of the last FC layer can still be used for the original task, via using our method, the classification accuracy on the original task is completely preserved.
Also, note that, as we just need to assign the last FC layer of the model with small values, our method is simple yet effective and needs neither a retraining process of the model nor additional OOD data. Besides, with only the weight of the last FC layer modified, our method can also be flexibly applied to various off-the-shelf OOD detection methods. We experiment our method with various OOD detection methods and achieve consistent improvement in OOD detection performance.
The contributions of our work are summarized as follows.
• From the novel perspective of the last FC layer, we propose a simple and effective OOD detection method to detect OOD data by simply assigning the weight of the last FC layer with small values.
• We perform theoretical analysis (in Sec. 5) on why assigning constant values (e.g., ones) to the weight of the last FC layer can separate OOD data and ID data. Moreover, we also analyze why a small value can even make the OOD data and ID data to be more separable. Our method thus can improve the OOD detection performance.
• Our method achieves significant OOD detection performance improvement when applied to various OOD detection methods on various evaluation benchmarks (Deng et al., 2009; Krizhevsky et al., 2009).
The rest of the paper is organized as follows. In Sec. 2, we discuss the related works of our paper. In Sec. 3, we provide the background of OOD detection. After that, we present our method in Sec. 4,
the analysis of our method in Sec. 5, and experimental results in Sec. 6. Finally, we conclude our paper in Sec. 7.
2 RELATED WORK
OOD Detection. Being an important task that helps detect OOD data during deployment, OOD detection has received lots of research attention, and most of the OOD detection methods fall into three categories: methods need retraining (DeVries & Taylor, 2018; Huang & Li, 2021; Zaeemzadeh et al., 2021), methods need extra OOD data (Hsu et al., 2020; Hendrycks et al., 2018; Dhamija et al., 2018; Ming et al., 2022; Lee et al., 2017; Yu & Aizawa, 2019; Wu et al., 2021), and posthoc methods. Among methods that need retraining, an extra branch is introduced by (DeVries & Taylor, 2018), and MOS makes use of a group-based feature space, and (Zaeemzadeh et al., 2021) incorporates angular distance into their method. In the category of methods that need extra OOD data, (Hendrycks et al., 2018) is the first to propose this category of method, (Dhamija et al., 2018) proposed to regularize extra image data from different backgrounds, and (Lee et al., 2017) proposed to generate OOD data on the boundary of OOD data and ID data. Besides these two categories of method, the category of post-hoc method have also attracted a lot of attention recently since it need neither retraining nor extra OOD data.
In the category of post-hoc methods, (Hendrycks & Gimpel, 2016) observe that a neural model tends to produce higher softmax values for ID data and lower ones for the OOD data. Therefore, they introduce a score function, the maximum softmax probability (MSP), to achieve OOD detection. To improve the OOD detection performance, (Liang et al., 2017) puts forward ODIN, which enlarges the gap between ID and OOD data by using large temperature scaling and adding perturbations on inputs. Lee uses the features and the class-wise centroids to calculate the Mahalanobis distance (Lee et al., 2018). The energy-based score function is introduced by (Liu et al., 2020). Such a function gives high energy to the OOD data and low energy to the ID data. (Sun et al., 2021) exploits the characteristics of the neural network to the OOD data and leverages the OOD detection performance by removing abnormal activate values.
Different from the existing post-hoc OOD detection methods, this paper takes a different view of the OOD detection problem. Specifically, we propose to connect the last FC layer and the overconfidence problem, and we propose to replace the original weight of the last FC layer with small values instead.
The Last FC Layer. The last FC layer, an important component that appears in many network structures, has been studied in various areas (Basha et al., 2020a;b; Zhao et al., 2020; Zhou et al., 2020) over the year, such as transfer learning (Basha et al., 2020a), continual learning (Zhao et al., 2020), and long tail problem (Zhou et al., 2020). In this paper, from a novel perspective, we build a connection between the last FC layer and the overconfidecne problem in OOD detection. Specifically, we find that the weight of the last FC layer trained on ID data can be an important source of the over confidence problem and propose to assign the weight of the last FC layer with small values instead.
3 BACKGROUND
Following most previous OOD detection works (Liang et al., 2017; Hendrycks & Gimpel, 2016; Hendrycks et al., 2019; Liu et al., 2020; Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021; Lee et al., 2018), this paper considers OOD detection in image classification. Denote Din := X in × Yin drawn from Pin the in-distribution dataset, where Pin denotes the in-distribution, X in denotes the in-distribution input space, and Yin ={1, 2, · · · , C} denotes the in-distribution label space corresponding to X in. Similarly, denote Dout := X out×Yout the out-of-distribution dataset, where X in denotes the out-of-distribution input space, and Yin denotes the corresponding out-ofdistribution label space. Moreover, denote F : X → Y an image classifier trained on Din. OOD detection can then be treated as a binary classification problem to distinguish whether the input data ⟨x, y⟩ ∈ Dm belongs to Din or Dout, where x is an image, and y is its corresponding ground true label. In other words, given a certain neural network F and a random test input x, the goal of OOD
detection is to define a score function G(x;F) such that: G(x;F) = { 1, if x ∈ Din 0, if x ∈ Dout
(1)
where Din ∩Dout = X in ∩X out = Yin ∩Yout = ∅. Note that the Dout is inaccessible during the training stage of F .
4 METHOD
In this section, we introduce our proposed OOD detection methods. The idea behind our method is to better cope with the overconfidence problem by replacing the last FC layer of the model with a new linear layer filled with a small constant value. We consider the input x, the last FC layer f , and the well trained neural network without the last FC layer g. We denote a d dimension feature vector from the penultimate layer of the model as (z1, z2, ..., zd) = z := g(x) ∈ Rd, the output of the model as f(z) where matrix f ∈ Rd×K and K is the number of classes. Following most of the recent OOD detection methods (Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; 2021; Tonin et al., 2021; Du et al., 2022; Elflein et al., 2021; Wang et al., 2020; Joshi et al., 2022; Chen et al., 2022; Ouyang et al., 2021; Ming et al., 2022), we first define the original measure of OOD uncertainty Sori before incorporating our proposed method as:
Sori = log K∑ i=1 efi(z) (2)
where fi indicates the i-th column of the matrix f . Note that a larger Sori indicates more confidence that x belongs to the in-distribution.
We then describe how the measure of OOD uncertainty S looks like after incorporating our proposed method. Specifically, let’s denote the matrix f ′i ∈ Rd×K filled with a value α, and then we replace the f with f ′i ∈ Rd to compute S. Since all entries of f ′i are same, all columns of f ′i are identical i.e. f ′1 = f ′ 2 = ... = f ′ K . Therefore, S can be denoted as:
S = log K∑ i=1 ef ′ i(z)
= log (ef ′ 1(z) + ef ′ 2(z) + ...+ ef ′ K(z))
= logKef ′ 1(z) where f ′1 = αJd,1 = logKef ′T 1 z where f ′1(z) = f ′T 1 z = logKeα ∑d i=1 zi where f ′T1 z = αJd,1z = α d∑
i=1
zi
= logKedαz̄ where z̄ = 1
d d∑ i=1 zi
= dαz̄ + logK (3)
where z̄ := E(z) and Jd,1 indicates a d× 1 all-ones matrix. To perform OOD detection using our proposed method, we further define the score function G(x;F) as:
G(x;F) = { 1, if S ≥ λ 0, if S < λ
(4)
where λ is a threshold. In our experiments, we set λ to be a value such that 95% ID data can be detected correctly, which is the same setting following most previous OOD detection methods (Hendrycks & Gimpel, 2016; Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; Liang et al., 2017; Hendrycks et al., 2019).
5 ANALYSIS
Below, we perform theoretical analysis to show the effectiveness of our method. Specifically, we first explain why replacing the trained weight of the last FC layer with a constant value α separates the distributions of ID and OOD data. After that, we further explain why a smaller α can make the ID and OOD data to be more separable.
5.1 EFFECTIVENESS OF ASSIGNING THE LAST FC LAYER WITH A CONSTANT VALUE
In this section, we analyze why assigning the last FC layer with a constant value α can separate ID and OOD data. Following the settings in Sec. 3, we denote the neural network trained on the ID data as F . Besides, we further denote the output of its penultimate layer is z = (z1, z2, ..., zn) ∈ Rn. Then, we can rewrite the score S produced by our method further as:
S = log K∑ i=1 ef ′ i(z)
= log kedαz̄
= dαz̄ + logK
∝ z̄ = E[z] (5)
We denote the z corresponding to the in-distribution data as zin = (zin1 , z in 2 , ..., z in n ). Following the same assumption from (Ming et al., 2022; Sun et al., 2021; 2022), we assume that each zini obeys the rectified Gaussian distribution i.e. zini ∼ max(0,N (µ, σ2in)). Then, we can model zini with a random variable x as:
zini = 1
σin √ 2π
e − (x−µ)
2
2σ2 in (6)
We denote the corresponding expectation of zin as Ein, and it can be written as:
Ein[z] = ∫ +∞ 0
x
σin √ 2π
e − (x−µ)
2
σ2 in dx
= 1
σin √ 2π ∫ +∞ 0 xe − (x−µ) 2 σ2 in dx
= µ√ 2π ∫ +∞ − µσin e− v2 2 dv + σin√ 2π ∫ +∞ − µσin ve− v2 2 dv where v = x− µ σin
= µ√ 2π (1− ∫ − µσin −∞ e− v2 2 dv) + σin√ 2π e − 12 ( −µ σin )2 = µ[1− Φ(−µ σin )] + σinφ( −µ σin ) (7)
where Φ and φ are Cumulative distribution function(cdf) and Probability density function(pdf) respectively. And then, we are going to model the expectation corresponding to the out-of-distribution data. Following the same observation from (Sun et al., 2021) that the output of the penultimate layer of the network corresponding to the OOD data,zout, is positively skewed. Specifically, we cam denote zout = (zout1 , z out 2 , ..., z out n ), so we can model each z out i ∼ESN(µ, σ2out, ϵ), where µ, σ2out, ϵ indicate the mean, the deviation and the degree of skewness of the ESN distribution. Therefore, following the theorem in (Mudholkar & Hutson, 2000), the expectation of zout, Eout[z], can be modeled as:
Eout[z] = µ− (1 + ϵ)Φ( −µ
(1 + ϵ)σout )µ+ (1 + ϵ)2φ( −µ (1 + ϵ)σout )− 4ϵ√ 2π σout (8)
Therefore, the difference of the Ein[z] and Eout[z] is: ∆ =E[zin]− E[zout]
=µ[1− Φ(−µ σin )] + σinφ( −µ σin )− µ− (1 + ϵ)Φ( −µ (1 + ϵ)σout )µ
+ (1 + ϵ)2φ( −µ (1 + ϵ)σout )− 4ϵ√ 2π σout
=− [(1 + ϵ)2ϕ( −µ (1 + ϵ)σout ) + 4ϵ√ 2π ]σout
− [Φ(−µ σin )− (1 + ϵ)Φ( −µ (1 + ϵ)σout )]µ+ ϕ( −µ σin )σin (9)
Given µ = 1.0 and ϵ = −0.5, we can plot ∆ in Fig. 2, and we can find out that it is greater than 0, i.e Sin > Sout. Therefore, we conclude that our method can produce greater confidence scores to in-distribution data than for the out-distribution data.
5.2 EFFECTIVENESS OF A SMALL α
In this section, we further explain why a smaller α can make the ID and OOD data to be more separable. We denote the norm difference of Sin and Sout as:
Sin − Sout ||α||2 = 1 α2 (log K∑ i=1 ef ′ i(z in) − log K∑ i=1 ef ′ i(z out))
= d
α (E[zin]− E[zout]) (10)
As shown in Eq. 10, to make the ID and OOD data to be more separable, we actually hope to make the norm difference of Sin and Sout to be larger Recall that E[zin] − E[zout] is a positive number as we discuss above. Therefore, a smaller α can make the norm difference of Sin and Sout larger, and thus make the ID and OOD data to be more separable.
6 EXPERIMENTS
In this section, we evaluate the effectiveness of our method on ImageNet and CIFAR OOD detection benchmarks. All experiments are conducted on NVIDIA Tesla V100 GPUs.
6.1 IMAGENET BENCHMARK
Setup. We use ReAct (Sun et al., 2021) as a baseline of our method and follow it. We use both a ResNet50 (He et al., 2016) model and a MobileNet-v2 (Sandler et al., 2018) model pre-trained on ImageNet (Deng et al., 2009) as the image classifier. Note that for fair comparison, we directly use the models trained by (Sun et al., 2021). Moreover, we set α in Eq. 3 to be a small number 0.01 in our experiments.
Evaluation Metric. We evaluate our OOD detection method on the following two common metrics: (1) FPR95 measures the FPR (False Positive Rate) of the OOD data when the recall (Positive Rate of the ID data) is at 95%. Note that a lower FPR95 indicates better performance of OOD detection. (2) AUROC measures the area under the TPR (True Positive Rate) and FPR (False Positive Rate). Note that a higher AUROC indicates better performance of OOD detection.
Dataset. In this benchmark, we consider ImageNet (Deng et al., 2009) as the ID dataset, and following (Sun et al., 2021; Hsu et al., 2020; Huang & Li, 2021), we evaluate our method on four commonly-used OOD datasets, including iNaturalist, SUN, Places365, and Textures. Note that all of these four datasets have non-overlapping classes w.r.t ImageNet. Below, we introduce each of them in more detail: (1) iNaturalist (Van Horn et al., 2018) contains 5,000 categories of plants and animals images, and the resolution of each image is 800 × 800. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 110 classes that is non-overlapping with classes of ImageNet are first picked up, and 10,000 images from these 110 classes are then randomly selected. (2) SUN (Xiao et al., 2010) contains 397 classes of natural images, and the resolution of each image is larger than 200 × 200. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 50 classes that is non-overlapping with classes of ImageNet are first picked up, and 10,000 images from these 110 classes are then randomly selected. (3) Places (Zhou et al., 2017) contains 205 categories of scene images whose resolutions are 512 × 512. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 50 classes that is non-overlapping with classes of ImageNet are first picked up, and 10,000 images from these 110 classes are then randomly selected. (4) Textures (Cimpoi et al., 2014) contains 47 classes of textural images whose resolutions are either 300 × 300 or 640 × 640. Following (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), the whole dataset with 5,640 images is used for OOD detection evaluation.
Results. In Tab. 1, we compare our method with the existing post-hoc OOD detection methods on all the four OOD datasets. As shown, our method demonstrates the best averaged result compared with common post-hoc OOD detection methods on both ResNet50 and MobileNet-V2, which demonstrates the effectiveness of our method.
6.2 CIFAR BENCHMARK
Setup. We use ReAct (Sun et al., 2021) as the baseline of our method and follow its experimental setup. We use the ResNet18 (He et al., 2016) model as the image classifier for both CIFAR-10 and CIFAR-100. Note that for a fair comparison, we directly use the models trained by (Sun et al., 2021). Moreover, we set α in Eq. 3 to the small value 0.01 in our experiments.
Evaluation metric & Dataset. Following (Hendrycks & Gimpel, 2016; Liu et al., 2020; Liang et al., 2017; Sun et al., 2021; Huang et al., 2021; Lee et al., 2018), we use the FPR95 and AUROC metrics elaborated in Sec. 6.1 to evaluate our OOD detection method. In this benchmark, we use CIFAR-10 and CIFAR-100 as the ID datasets (Krizhevsky et al., 2009). With respect to the OOD datasets, following (Liu et al., 2020; Sun et al., 2021; Huang et al., 2021; Cimpoi et al., 2014), besides using the Places dataset and the Textures dataset that we have introduced above, we also
evaluate our method on three other OOD datasets: iSUN (Xu et al., 2015), LSUN (Yu et al., 2015), and SVHN (Netzer et al., 2011). Below, we introduce each of them in more detail: (1) The LSUN dataset contains 10,000 test images of 10 scene categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), to conduct OOD detection on this dataset, we randomly crop its images to size 32×32. Besides, following the same works, we also conduct OOD detection on a variant of this dataset (LSUN Resize) obtained by resizing the LSUN images to size 32×32. (2) The iSUN dataset is sampled from the SUN (Xiao et al., 2010) dataset and contains 20,608 images of 397 categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), the whole dataset is used for OOD detection evaluation. (3) The SVHN dataset contains 26,032 test images of 10 categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), we use all 26,032 images for OOD detection evaluation.
Results. In Tab. 2, we compare our method with existing post-hoc OOD detection methods on all six OOD datasets. As shown, our method achieves the best average result among common post-hoc OOD detection methods, which demonstrates its effectiveness.
6.3 ABLATION STUDIES
Effect of α. In the previous section, we analyzed the effect of α on OOD detection performance from a mathematical point of view and concluded that a smaller α has a positive effect. In this subsection, we experimentally show the impact of α on OOD detection performance. We randomly sample α from a continuous uniform distribution between 0 and 1, i.e., α ∼ U[0,1]. We then evaluate FPR95 and AUROC under various values of α on the iNaturalist, SUN, Places, and Textures OOD datasets (Van Horn et al., 2018; Xiao et al., 2010; Zhou et al., 2017; Cimpoi et al., 2014) with ResNet50 (He et al., 2016) and MobileNet-V2 (Sandler et al., 2018) trained on ImageNet (Deng et al., 2009). The results are shown in Fig. 3. As α decreases, a larger AUROC and a smaller FPR95 are consistently achieved across the various OOD datasets, demonstrating the effectiveness of our method.
Effect of different baseline methods. To validate the general effectiveness of our proposed method, we apply it to various post-hoc OOD detection methods, including MSP, Energy, ReAct, ViM, MaxLogit, and KL-Matching (Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; Hendrycks et al., 2019). As shown in Tab. 3, our method achieves consistent performance improvement when applied to various post-hoc OOD detection methods. This demonstrates that our proposed method can be flexibly applied to various post-hoc OOD detection methods to improve their performance.
7 CONCLUSION
In this paper, we present a simple yet effective OOD detection method, which replaces the trained weight of the last FC layer with a small constant value. We theoretically show that the proposed method makes the ID data and OOD data more separable, and thus better copes with the overconfidence problem. We present two ablation experiments showing that our method is compatible with existing OOD detection methods and achieves consistent performance improvement. Our method achieves superior performance on the ImageNet and CIFAR OOD detection benchmarks. | 1. What is the focus of the paper regarding the out-of-distribution detection problem?
2. What are the strengths of the proposed approach, particularly its simplicity and theoretical analysis?
3. What are the weaknesses of the paper, such as the potential lack of novelty or generalizability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper suggests an extremely simple yet effective method that replaces the final fully-connected layer weight with a small scalar α for the out-of-distribution (OOD) detection problem. The authors claim that the final layer weight can significantly impact the OOD uncertainty measure, based on an observation (Figure 1), and suggest replacing the final fully-connected weight with a small scalar α, whose advantage is shown through theoretical analysis under substantial assumptions. Besides, the authors provide empirical results that showcase the effectiveness of the proposed method.
Strengths And Weaknesses
Strengths
Simple: Simplicity of the method (replacing final layer weight with small scalar) helps readers understand the paper more easily.
Reasonable: The logical storyline with an interesting observation and theoretical analysis raises the validity of the paper.
Weaknesses (And questions)
Simple: To some readers, the simplicity of the suggested method may be recognized as less novel or not generalizable. For example, is it still feasible for hard OOD samples?
Limited theoretical analysis: Although providing theoretical analysis is indeed advantageous, to some readers the analysis can come across as a placeholder given the method's simplicity, because it contains assumptions that lead to a stale conclusion and may raise questions about generalizability to domains where OOD samples are not easily distinguishable.
Clarity, Quality, Novelty And Reproducibility
Clarity: I could enjoy reading the paper not only due to its simplicity but also its well-structured outline.
Quality: There is no quality issue, in that the paper consists of an interesting observation, a method with theoretical analysis, and empirical support.
Novelty: Although manipulating the final layer weight may be new to the out-of-distribution data domain, the method itself is not novel enough, as the authors themselves note.
Reproducibility: It is easily reproducible because it only requires replacing the final fully-connected layer output with a small scalar value close to zero.
ICLR | Title
Impact of the Last Fully Connected Layer on Out-of-distribution Detection
Abstract
Out-of-distribution (OOD) detection, a task that aims to detect OOD data during deployment, has received much research attention recently due to its importance for the safe deployment of deep models. In this task, a major problem is how to handle the overconfidence problem on OOD data. While this problem has been explored from several perspectives in previous works, such as the measure of OOD uncertainty and the activation function, the connection between the last fully connected (FC) layer and this overconfidence problem is still less explored. In this paper, we find that the weight of the last FC layer of a model trained on in-distribution (ID) data can be an important source of the overconfidence problem, and we propose a simple yet effective OOD detection method that assigns the weight of the last FC layer small values instead of using the original weight trained on ID data. We analyze in Sec. 5 that our proposed method can make the OOD data and the ID data more separable, and thus alleviate the overconfidence problem. Moreover, our proposed method can be flexibly applied to various off-the-shelf OOD detection methods. We show the effectiveness of our proposed method through extensive experiments on the ImageNet dataset, the CIFAR-10 dataset, and the CIFAR-100 dataset.
1 INTRODUCTION
Recently, deep models have achieved good performance in various computer vision tasks, but with a severe reliance on the assumption that the testing data comes from the same distribution as the training set (i.e., in-distribution (ID) test data) (Ben-David et al., 2010; Vapnik, 1991). This assumption, however, can be violated in the open world where out-of-distribution (OOD) data can be often encountered, and these OOD data as inputs can lead models to produce unrelated predictions and result in severe consequences, especially in many safety-critical applications, such as autonomous driving (Filos et al., 2020) and medical diagnosis (Zadorozhny et al., 2021). Due to the severe implications of OOD data in these applications, the task of OOD detection, which aims to detect OOD data during deployment, is important and has received lots of research attention recently (Liang et al., 2017; Hendrycks & Gimpel, 2016; Hendrycks et al., 2019; Liu et al., 2020; Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021; Lee et al., 2018).
To detect OOD data, a naive idea is to classify the OOD data and the ID data based on the confidence of the model in the data input. However, as deep models can be overconfident in the OOD data inputs (Nguyen et al., 2015), it can be non-trivial to separate the OOD data and the ID data based on such a naive idea. To better cope with the overconfidence problem and make the OOD data and the ID data more separable, previous works have proposed methods from several perspectives, such as redefining the measure of OOD uncertainty (Liu et al., 2020; Wang et al., 2022; Hendrycks et al., 2019) and rectifying the activation function (Sun et al., 2021). However, the connection between the last fully connected (FC) layer and the overconfidence problem is still less explored.
In this work, we argue that the weight of the last FC layer of the model trained on ID data can be an important source of the overconfidence problem. To justify this argument, as a preliminary of our method, in Fig. 1, we use a ResNet-50 (He et al., 2016) model trained on ImageNet and conduct OOD detection experiments on various other datasets, including iNaturalist, SUN, Places365, and Textures. Specifically, we compare the baseline that uses the original weight of the last FC layer (original weight) with a variant that simply assigns the weight of the last FC layer with ones (identity weight). As illustrated, compared to the baseline, this variant consistently reduces the false positive rate (FPR95) over various datasets. This demonstrates that the weight of the last FC layer of the model trained on ID data is not the optimal weight for OOD detection, and there can exist a weight that is more suitable.
Inspired by the above argument, in this work, to better cope with the overconfidence problem, we aim to assign the last FC layer of the model a new weight so that the OOD data and the ID data can be made more separable. We find that this can be achieved by simply assigning small values (e.g., 0.01) to the weight of the last FC layer of the model. To theoretically show the effectiveness of our method, in Sec. 5, we first analyze why assigning constant values (e.g., ones) to the weight of the last FC layer can separate OOD data and ID data; we then explain why assigning the weight of the last FC layer small values can make OOD data and ID data even more separable. We also want to point out that, as the original weight of the last FC layer can still be used for the original task, the classification accuracy on the original task is completely preserved when using our method.
Also, note that, as we only need to assign the last FC layer of the model small values, our method is simple yet effective and requires neither retraining of the model nor additional OOD data. Besides, with only the weight of the last FC layer modified, our method can be flexibly applied to various off-the-shelf OOD detection methods. We evaluate our method with various OOD detection methods and achieve consistent improvements in OOD detection performance.
The contributions of our work are summarized as follows.
• From the novel perspective of the last FC layer, we propose a simple and effective OOD detection method that detects OOD data by simply assigning small values to the weight of the last FC layer.
• We perform theoretical analysis (in Sec. 5) of why assigning constant values (e.g., ones) to the weight of the last FC layer can separate OOD data and ID data. Moreover, we also analyze why a small value can make the OOD data and ID data even more separable. Our method thus improves OOD detection performance.
• Our method achieves significant OOD detection performance improvement when applied to various OOD detection methods on various evaluation benchmarks (Deng et al., 2009; Krizhevsky et al., 2009).
The rest of the paper is organized as follows. In Sec. 2, we discuss the related works of our paper. In Sec. 3, we provide the background of OOD detection. After that, we present our method in Sec. 4,
the analysis of our method in Sec. 5, and experimental results in Sec. 6. Finally, we conclude our paper in Sec. 7.
2 RELATED WORK
OOD Detection. Being an important task that helps detect OOD data during deployment, OOD detection has received lots of research attention, and most OOD detection methods fall into three categories: methods that need retraining (DeVries & Taylor, 2018; Huang & Li, 2021; Zaeemzadeh et al., 2021), methods that need extra OOD data (Hsu et al., 2020; Hendrycks et al., 2018; Dhamija et al., 2018; Ming et al., 2022; Lee et al., 2017; Yu & Aizawa, 2019; Wu et al., 2021), and post-hoc methods. Among methods that need retraining, an extra branch is introduced by (DeVries & Taylor, 2018), MOS (Huang & Li, 2021) makes use of a group-based feature space, and (Zaeemzadeh et al., 2021) incorporates angular distance into their method. In the category of methods that need extra OOD data, (Hendrycks et al., 2018) are the first to propose this kind of method, (Dhamija et al., 2018) propose to regularize extra image data from different backgrounds, and (Lee et al., 2017) propose to generate OOD data on the boundary of OOD data and ID data. Besides these two categories, post-hoc methods have also attracted a lot of attention recently, since they need neither retraining nor extra OOD data.
In the category of post-hoc methods, (Hendrycks & Gimpel, 2016) observe that a neural model tends to produce higher softmax values for ID data and lower ones for OOD data. Therefore, they introduce a score function, the maximum softmax probability (MSP), to achieve OOD detection. To improve OOD detection performance, (Liang et al., 2017) put forward ODIN, which enlarges the gap between ID and OOD data by using large temperature scaling and adding perturbations to the inputs. (Lee et al., 2018) use the features and the class-wise centroids to calculate the Mahalanobis distance. The energy-based score function is introduced by (Liu et al., 2020); such a function gives high energy to OOD data and low energy to ID data. (Sun et al., 2021) exploit the characteristic responses of the neural network to OOD data and improve OOD detection performance by removing abnormal activation values.
Unlike existing post-hoc OOD detection methods, this paper takes a different view of the OOD detection problem. Specifically, we connect the last FC layer to the overconfidence problem and propose to replace the original weight of the last FC layer with small values instead.
The Last FC Layer. The last FC layer, an important component in many network architectures, has been studied in various areas (Basha et al., 2020a;b; Zhao et al., 2020; Zhou et al., 2020) over the years, such as transfer learning (Basha et al., 2020a), continual learning (Zhao et al., 2020), and the long-tail problem (Zhou et al., 2020). In this paper, from a novel perspective, we build a connection between the last FC layer and the overconfidence problem in OOD detection. Specifically, we find that the weight of the last FC layer trained on ID data can be an important source of the overconfidence problem and propose to assign the weight of the last FC layer small values instead.
3 BACKGROUND
Following most previous OOD detection works (Liang et al., 2017; Hendrycks & Gimpel, 2016; Hendrycks et al., 2019; Liu et al., 2020; Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021; Lee et al., 2018), this paper considers OOD detection in image classification. Denote by $\mathcal{D}^{in} := \mathcal{X}^{in} \times \mathcal{Y}^{in}$, drawn from $P^{in}$, the in-distribution dataset, where $P^{in}$ denotes the in-distribution, $\mathcal{X}^{in}$ denotes the in-distribution input space, and $\mathcal{Y}^{in} = \{1, 2, \cdots, C\}$ denotes the in-distribution label space corresponding to $\mathcal{X}^{in}$. Similarly, denote by $\mathcal{D}^{out} := \mathcal{X}^{out} \times \mathcal{Y}^{out}$ the out-of-distribution dataset, where $\mathcal{X}^{out}$ denotes the out-of-distribution input space and $\mathcal{Y}^{out}$ denotes the corresponding out-of-distribution label space. Moreover, denote by $F : \mathcal{X} \rightarrow \mathcal{Y}$ an image classifier trained on $\mathcal{D}^{in}$. OOD detection can then be treated as a binary classification problem that distinguishes whether an input $\langle x, y \rangle$ belongs to $\mathcal{D}^{in}$ or $\mathcal{D}^{out}$, where $x$ is an image and $y$ is its corresponding ground-truth label. In other words, given a trained neural network $F$ and a random test input $x$, the goal of OOD
detection is to define a score function $G(x; F)$ such that:
$$G(x; F) = \begin{cases} 1, & \text{if } x \in \mathcal{D}^{in} \\ 0, & \text{if } x \in \mathcal{D}^{out} \end{cases} \qquad (1)$$
where $\mathcal{D}^{in} \cap \mathcal{D}^{out} = \mathcal{X}^{in} \cap \mathcal{X}^{out} = \mathcal{Y}^{in} \cap \mathcal{Y}^{out} = \emptyset$. Note that $\mathcal{D}^{out}$ is inaccessible during the training stage of $F$.
4 METHOD
In this section, we introduce our proposed OOD detection method. The idea behind our method is to better cope with the overconfidence problem by replacing the last FC layer of the model with a new linear layer filled with a small constant value. We consider the input $x$, the last FC layer $f$, and the well-trained neural network without the last FC layer, $g$. We denote the $d$-dimensional feature vector from the penultimate layer of the model as $(z_1, z_2, \ldots, z_d) = z := g(x) \in \mathbb{R}^d$ and the output of the model as $f(z)$, where the weight matrix $f \in \mathbb{R}^{d \times K}$ and $K$ is the number of classes. Following most of the recent OOD detection methods (Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; 2021; Tonin et al., 2021; Du et al., 2022; Elflein et al., 2021; Wang et al., 2020; Joshi et al., 2022; Chen et al., 2022; Ouyang et al., 2021; Ming et al., 2022), we first define the original measure of OOD uncertainty $S_{ori}$, before incorporating our proposed method, as:
$$S_{ori} = \log \sum_{i=1}^{K} e^{f_i(z)} \qquad (2)$$
where $f_i$ denotes the $i$-th column of the matrix $f$. Note that a larger $S_{ori}$ indicates more confidence that $x$ belongs to the in-distribution.
We then describe the measure of OOD uncertainty $S$ after incorporating our proposed method. Specifically, let $f' \in \mathbb{R}^{d \times K}$ denote the matrix filled with the value $\alpha$, which replaces $f$ when computing $S$. Since all entries of $f'$ are the same, all columns of $f'$ are identical, i.e., $f'_1 = f'_2 = \ldots = f'_K$. Therefore, $S$ can be written as:
$$\begin{aligned} S &= \log \sum_{i=1}^{K} e^{f'_i(z)} \\ &= \log \left( e^{f'_1(z)} + e^{f'_2(z)} + \ldots + e^{f'_K(z)} \right) \\ &= \log K e^{f'_1(z)} && \text{where } f'_1 = \alpha J_{d,1} \\ &= \log K e^{f_1'^{\top} z} && \text{where } f'_1(z) = f_1'^{\top} z \\ &= \log K e^{\alpha \sum_{i=1}^{d} z_i} && \text{where } f_1'^{\top} z = \alpha J_{d,1}^{\top} z = \alpha \textstyle\sum_{i=1}^{d} z_i \\ &= \log K e^{d \alpha \bar{z}} && \text{where } \bar{z} = \tfrac{1}{d} \textstyle\sum_{i=1}^{d} z_i \\ &= d \alpha \bar{z} + \log K \end{aligned} \qquad (3)$$
where $\bar{z} := \mathbb{E}[z]$ and $J_{d,1}$ denotes a $d \times 1$ all-ones matrix. To perform OOD detection using our proposed method, we further define the score function $G(x; F)$ as:
$$G(x; F) = \begin{cases} 1, & \text{if } S \geq \lambda \\ 0, & \text{if } S < \lambda \end{cases} \qquad (4)$$
where λ is a threshold. In our experiments, we set λ to a value such that 95% of the ID data is detected correctly, which is the same setting as in most previous OOD detection methods (Hendrycks & Gimpel, 2016; Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; Liang et al., 2017; Hendrycks et al., 2019).
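Since Eq. 3 collapses the constant-weight head to $d\alpha\bar{z} + \log K$, the score can be computed directly from penultimate features without materializing $f'$. A minimal PyTorch sketch follows (the helper names are ours; `z` is assumed to be the output of the network with the last FC layer removed). The original FC weight is left untouched for classification; only the detection score uses the constant-α head.

```python
import math
import torch

@torch.no_grad()
def constant_weight_score(z: torch.Tensor, num_classes: int, alpha: float = 0.01):
    # Eq. 3: S = d * alpha * mean(z) + log K for a head filled with alpha.
    d = z.shape[1]
    return d * alpha * z.mean(dim=1) + math.log(num_classes)

@torch.no_grad()
def detect(z: torch.Tensor, num_classes: int, lam: float, alpha: float = 0.01):
    # Eq. 4: report 1 (ID) when the score clears the threshold lam, where lam
    # is calibrated so that 95% of the ID data scores above it.
    return (constant_weight_score(z, num_classes, alpha) >= lam).long()
```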
5 ANALYSIS
Below, we perform theoretical analysis to show the effectiveness of our method. Specifically, we first explain why replacing the trained weight of the last FC layer with a constant value α separates the distributions of ID and OOD data. After that, we further explain why a smaller α makes the ID and OOD data even more separable.
5.1 EFFECTIVENESS OF ASSIGNING THE LAST FC LAYER WITH A CONSTANT VALUE
In this section, we analyze why assigning the last FC layer a constant value α can separate ID and OOD data. Following the settings in Sec. 3, we denote the neural network trained on the ID data as $F$ and the output of its penultimate layer as $z = (z_1, z_2, \ldots, z_d) \in \mathbb{R}^d$. Then, we can rewrite the score $S$ produced by our method as:
$$S = \log \sum_{i=1}^{K} e^{f'_i(z)} = \log K e^{d \alpha \bar{z}} = d \alpha \bar{z} + \log K \;\propto\; \bar{z} = \mathbb{E}[z] \qquad (5)$$
We denote the feature $z$ corresponding to the in-distribution data as $z^{in} = (z^{in}_1, z^{in}_2, \ldots, z^{in}_d)$. Following the same assumption as (Ming et al., 2022; Sun et al., 2021; 2022), we assume that each $z^{in}_i$ obeys the rectified Gaussian distribution, i.e., $z^{in}_i \sim \max(0, \mathcal{N}(\mu, \sigma^2_{in}))$. Then, the density of $z^{in}_i$ at a value $x > 0$ is:
$$p(z^{in}_i = x) = \frac{1}{\sigma_{in} \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2 \sigma^2_{in}}} \qquad (6)$$
We denote the corresponding expectation of $z^{in}$ as $\mathbb{E}_{in}[z]$, which can be written as:
$$\begin{aligned} \mathbb{E}_{in}[z] &= \int_{0}^{+\infty} \frac{x}{\sigma_{in} \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2 \sigma^2_{in}}} \, dx \\ &= \frac{\mu}{\sqrt{2\pi}} \int_{-\mu/\sigma_{in}}^{+\infty} e^{-\frac{v^2}{2}} \, dv + \frac{\sigma_{in}}{\sqrt{2\pi}} \int_{-\mu/\sigma_{in}}^{+\infty} v \, e^{-\frac{v^2}{2}} \, dv && \text{where } v = \frac{x - \mu}{\sigma_{in}} \\ &= \mu \left[ 1 - \Phi\!\left( \frac{-\mu}{\sigma_{in}} \right) \right] + \frac{\sigma_{in}}{\sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{-\mu}{\sigma_{in}} \right)^2} \\ &= \mu \left[ 1 - \Phi\!\left( \frac{-\mu}{\sigma_{in}} \right) \right] + \sigma_{in} \, \varphi\!\left( \frac{-\mu}{\sigma_{in}} \right) \end{aligned} \qquad (7)$$
where $\Phi$ and $\varphi$ are the cumulative distribution function (CDF) and probability density function (PDF) of the standard normal distribution, respectively. Next, we model the expectation corresponding to the out-of-distribution data. We follow the observation from (Sun et al., 2021) that the output of the penultimate layer of the network corresponding to OOD data, $z^{out}$, is positively skewed. Specifically, we denote $z^{out} = (z^{out}_1, z^{out}_2, \ldots, z^{out}_d)$ and model each $z^{out}_i \sim \mathrm{ESN}(\mu, \sigma^2_{out}, \epsilon)$, where ESN denotes the epsilon-skew-normal distribution and $\mu$, $\sigma^2_{out}$, and $\epsilon$ indicate its mean, deviation, and degree of skewness, respectively. Therefore, following the theorem in (Mudholkar & Hutson, 2000), the expectation of $z^{out}$, $\mathbb{E}_{out}[z]$, can be modeled as:
$$\mathbb{E}_{out}[z] = \mu - (1 + \epsilon) \, \Phi\!\left( \frac{-\mu}{(1 + \epsilon)\sigma_{out}} \right) \mu + \left[ (1 + \epsilon)^2 \, \varphi\!\left( \frac{-\mu}{(1 + \epsilon)\sigma_{out}} \right) + \frac{4\epsilon}{\sqrt{2\pi}} \right] \sigma_{out} \qquad (8)$$
Therefore, the difference between $\mathbb{E}_{in}[z]$ and $\mathbb{E}_{out}[z]$ is:
$$\begin{aligned} \Delta &= \mathbb{E}[z^{in}] - \mathbb{E}[z^{out}] \\ &= \mu \left[ 1 - \Phi\!\left( \frac{-\mu}{\sigma_{in}} \right) \right] + \sigma_{in} \varphi\!\left( \frac{-\mu}{\sigma_{in}} \right) - \mu + (1 + \epsilon) \Phi\!\left( \frac{-\mu}{(1 + \epsilon)\sigma_{out}} \right) \mu - \left[ (1 + \epsilon)^2 \varphi\!\left( \frac{-\mu}{(1 + \epsilon)\sigma_{out}} \right) + \frac{4\epsilon}{\sqrt{2\pi}} \right] \sigma_{out} \\ &= - \left[ (1 + \epsilon)^2 \varphi\!\left( \frac{-\mu}{(1 + \epsilon)\sigma_{out}} \right) + \frac{4\epsilon}{\sqrt{2\pi}} \right] \sigma_{out} - \left[ \Phi\!\left( \frac{-\mu}{\sigma_{in}} \right) - (1 + \epsilon) \Phi\!\left( \frac{-\mu}{(1 + \epsilon)\sigma_{out}} \right) \right] \mu + \varphi\!\left( \frac{-\mu}{\sigma_{in}} \right) \sigma_{in} \end{aligned} \qquad (9)$$
Given µ = 1.0 and ε = −0.5, we plot ∆ in Fig. 2 and find that it is greater than 0, i.e., $S^{in} > S^{out}$. Therefore, we conclude that our method assigns greater confidence scores to in-distribution data than to out-of-distribution data.
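The positivity of ∆ can also be checked outside Fig. 2. The following sketch evaluates Eqs. 7–9 on a small grid of deviations; µ = 1.0 and ε = −0.5 follow the text, while the grid of σ values is our assumption.

```python
import numpy as np
from scipy.stats import norm

def delta(mu=1.0, eps=-0.5, sigma_in=1.0, sigma_out=1.0):
    # E_in from Eq. 7, E_out from Eq. 8; Delta in Eq. 9 is their gap.
    e_in = mu * (1 - norm.cdf(-mu / sigma_in)) + sigma_in * norm.pdf(-mu / sigma_in)
    s = (1 + eps) * sigma_out
    e_out = (mu
             - (1 + eps) * norm.cdf(-mu / s) * mu
             + ((1 + eps) ** 2 * norm.pdf(-mu / s) + 4 * eps / np.sqrt(2 * np.pi)) * sigma_out)
    return e_in - e_out

for sig in (0.25, 0.5, 1.0, 2.0):
    print(f"sigma={sig}: delta={delta(sigma_in=sig, sigma_out=sig):.4f}")  # all positive
```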
5.2 EFFECTIVENESS OF A SMALL α
In this section, we further explain why a smaller α makes the ID and OOD data even more separable. We consider the difference between $S^{in}$ and $S^{out}$ normalized by the squared weight norm:
$$\frac{S^{in} - S^{out}}{\|\alpha\|^2} = \frac{1}{\alpha^2} \left( \log \sum_{i=1}^{K} e^{f'_i(z^{in})} - \log \sum_{i=1}^{K} e^{f'_i(z^{out})} \right) = \frac{d}{\alpha} \left( \mathbb{E}[z^{in}] - \mathbb{E}[z^{out}] \right) \qquad (10)$$
As shown in Eq. 10, to make the ID and OOD data more separable, we want the normalized difference between $S^{in}$ and $S^{out}$ to be as large as possible. Recall that $\mathbb{E}[z^{in}] - \mathbb{E}[z^{out}]$ is positive, as discussed above. Therefore, a smaller α makes the normalized difference between $S^{in}$ and $S^{out}$ larger, and thus makes the ID and OOD data more separable.
6 EXPERIMENTS
In this section, we evaluate the effectiveness of our method on ImageNet and CIFAR OOD detection benchmarks. All experiments are conducted on NVIDIA Tesla V100 GPUs.
6.1 IMAGENET BENCHMARK
Setup. We use ReAct (Sun et al., 2021) as the baseline of our method and follow its experimental setup. We use both a ResNet50 (He et al., 2016) model and a MobileNet-v2 (Sandler et al., 2018) model pre-trained on ImageNet (Deng et al., 2009) as the image classifier. Note that for a fair comparison, we directly use the models trained by (Sun et al., 2021). Moreover, we set α in Eq. 3 to the small value 0.01 in our experiments.
Evaluation Metric. We evaluate our OOD detection method on the following two common metrics: (1) FPR95 measures the FPR (False Positive Rate) of the OOD data when the recall (True Positive Rate of the ID data) is at 95%. Note that a lower FPR95 indicates better OOD detection performance. (2) AUROC measures the area under the ROC curve, which plots the TPR (True Positive Rate) against the FPR (False Positive Rate). Note that a higher AUROC indicates better OOD detection performance.
Dataset. In this benchmark, we consider ImageNet (Deng et al., 2009) as the ID dataset, and following (Sun et al., 2021; Hsu et al., 2020; Huang & Li, 2021), we evaluate our method on four commonly-used OOD datasets: iNaturalist, SUN, Places365, and Textures. Note that all four datasets have non-overlapping classes w.r.t. ImageNet. Below, we introduce each of them in more detail: (1) iNaturalist (Van Horn et al., 2018) contains 5,000 categories of plant and animal images, and the resolution of each image is 800 × 800. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 110 classes that are non-overlapping with the classes of ImageNet are first picked, and 10,000 images from these 110 classes are then randomly selected. (2) SUN (Xiao et al., 2010) contains 397 classes of natural images, and the resolution of each image is larger than 200 × 200. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 50 classes that are non-overlapping with the classes of ImageNet are first picked, and 10,000 images from these 50 classes are then randomly selected. (3) Places (Zhou et al., 2017) contains 205 categories of scene images whose resolutions are 512 × 512. To conduct OOD detection on this dataset, following the setting of (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), 50 classes that are non-overlapping with the classes of ImageNet are first picked, and 10,000 images from these 50 classes are then randomly selected. (4) Textures (Cimpoi et al., 2014) contains 47 classes of textural images whose resolutions are either 300 × 300 or 640 × 640. Following (Sun et al., 2021; Huang & Li, 2021; Huang et al., 2021), the whole dataset of 5,640 images is used for OOD detection evaluation.
Results. In Tab. 1, we compare our method with existing post-hoc OOD detection methods on all four OOD datasets. As shown, our method achieves the best average result among common post-hoc OOD detection methods on both ResNet50 and MobileNet-V2, which demonstrates its effectiveness.
6.2 CIFAR BENCHMARK
Setup. We use ReAct (Sun et al., 2021) as the baseline of our method and follow its experimental setup. We use the ResNet18 (He et al., 2016) model as the image classifier for both CIFAR-10 and CIFAR-100. Note that for a fair comparison, we directly use the models trained by (Sun et al., 2021). Moreover, we set α in Eq. 3 to the small value 0.01 in our experiments.
Evaluation metric & Dataset. Following (Hendrycks & Gimpel, 2016; Liu et al., 2020; Liang et al., 2017; Sun et al., 2021; Huang et al., 2021; Lee et al., 2018), we use the FPR95 and AUROC metrics elaborated in Sec. 6.1 to evaluate our OOD detection method. In this benchmark, we use CIFAR-10 and CIFAR-100 as the ID datasets (Krizhevsky et al., 2009). With respect to the OOD datasets, following (Liu et al., 2020; Sun et al., 2021; Huang et al., 2021; Cimpoi et al., 2014), besides using the Places dataset and the Textures dataset that we have introduced above, we also
evaluate our method on three other OOD datasets: iSUN (Xu et al., 2015), LSUN (Yu et al., 2015), and SVHN (Netzer et al., 2011). Below, we introduce each of them in more detail: (1) The LSUN dataset contains 10,000 test images of 10 scene categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), to conduct OOD detection on this dataset, we randomly crop its images to size 32×32. Besides, following the same works, we also conduct OOD detection on a variant of this dataset (LSUN Resize) obtained by resizing the LSUN images to size 32×32. (2) The iSUN dataset is sampled from the SUN (Xiao et al., 2010) dataset and contains 20,608 images of 397 categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), the whole dataset is used for OOD detection evaluation. (3) The SVHN dataset contains 26,032 test images of 10 categories. Following (Sun et al., 2021; Liu et al., 2020; Hendrycks & Gimpel, 2016), we use all 26,032 images for OOD detection evaluation.
Results. In Tab. 2, we compare our method with existing post-hoc OOD detection methods on all six OOD datasets. As shown, our method achieves the best average result among common post-hoc OOD detection methods, which demonstrates its effectiveness.
6.3 ABLATION STUDIES
Effect of α. In the previous section, we analyzed the effect of α on OOD detection performance from a mathematical point of view and concluded that a smaller α has a positive effect. In this subsection, we experimentally show the impact of α on OOD detection performance. We randomly sample α from a continuous uniform distribution between 0 and 1, i.e., α ∼ U[0,1]. We then evaluate FPR95 and AUROC under various values of α on the iNaturalist, SUN, Places, and Textures OOD datasets (Van Horn et al., 2018; Xiao et al., 2010; Zhou et al., 2017; Cimpoi et al., 2014) with ResNet50 (He et al., 2016) and MobileNet-V2 (Sandler et al., 2018) trained on ImageNet (Deng et al., 2009). The results are shown in Fig. 3. As α decreases, a larger AUROC and a smaller FPR95 are consistently achieved across the various OOD datasets, demonstrating the effectiveness of our method.
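As a sketch of this ablation protocol, assume logits and penultimate features have been extracted once from the frozen classifier; the rule for combining our score with a baseline (here, the energy score) is our assumption, since the text does not spell it out.

```python
import numpy as np
from scipy.special import logsumexp

def sweep_alpha(logits_id, z_id, logits_ood, z_ood, alphas=(0.001, 0.01, 0.1, 1.0)):
    d = z_id.shape[1]
    for alpha in alphas:
        # Baseline energy score plus the constant-alpha score of Eq. 3.
        s_id = logsumexp(logits_id, axis=1) + d * alpha * z_id.mean(axis=1)
        s_ood = logsumexp(logits_ood, axis=1) + d * alpha * z_ood.mean(axis=1)
        lam = np.percentile(s_id, 5)  # threshold at 95% ID recall
        print(f"alpha={alpha}: FPR95={np.mean(s_ood >= lam):.4f}")
```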
Effect of different baseline methods. To validate the general effectiveness of our proposed method, we apply it to various post-hoc OOD detection methods, including MSP, Energy, ReAct, ViM, MaxLogit, and KL-Matching (Liu et al., 2020; Sun et al., 2021; Wang et al., 2022; Hendrycks et al., 2019). As shown in Tab. 3, our method achieves consistent performance improvement when applied to various post-hoc OOD detection methods. This demonstrates that our proposed method can be flexibly applied to various post-hoc OOD detection methods to improve their performance.
7 CONCLUSION
In this paper, we present a simple yet effective OOD detection method, which replaces the trained weight of the last FC layer with a small constant value. We theoretically show that the proposed method makes the ID data and OOD data more separable, and thus better copes with the overconfidence problem. We present two ablation experiments showing that our method is compatible with existing OOD detection methods and achieves consistent performance improvement. Our method achieves superior performance on the ImageNet and CIFAR OOD detection benchmarks. | 1. What is the focus of the paper regarding out-of-distribution detection?
2. What are the strengths and weaknesses of the proposed method?
3. Do you have any concerns or questions about the theoretical analysis and empirical evaluation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a method for out-of-distribution (OOD) detection in deep models. The method simply sets all the weights of the last fully-connected (FC) layer to a small value (alpha). The authors show both analytically and empirically that this approach increases the separability of OOD and ID (in-distribution) examples directly from the score of the model. The method is tested on ResNet and MobileNet-v2 models on several standard image datasets and is shown to improve upon state-of-the-art methods on OOD detection benchmarks. The effect of the value of alpha is also studied.
Strengths And Weaknesses
pros:
It is a simple method that is easy to implement and improves upon the state of the art for OOD detection.
cons:
The theoretical analysis is extensive, but some parts are not well explained. In particular, the skewness of the ESN (epsilon-skew-normal — by the way, that abbreviation is not expanded in the paper) is not discussed, which makes the whole analysis lack a strong foundation.
The authors categorize the studies of the effect of the alpha parameter and the study of compatibility with score methods as 'ablation'. These studies are not ablation studies.
The study of alpha shows that the smaller the alpha, the better the method works. The curves appear to show asymptotic behavior, but this is not discussed. Is there improvement up to the point where floating-point precision degrades? What is the sweet spot? No explanation, intuition, or discussion is given whatsoever. A value of 0.01 is used in the experiment section, but would 0.001 provide better results?
The study of compatibility is confusing and not well explained. It is labeled as 'Effect of baseline method' under the heading 'ablation studies'. As stated above, it is not an ablation (nothing is removed). Second, it has nothing to do with a 'baseline': a baseline is a method you compare against, whereas here the paper appears to combine the approach with some OOD scoring functions from the literature. These methods should be explained, and there should be some intuition for why the combination with the paper's approach could be beneficial. For example, the paper claims to combine ReAct and this method to achieve an improvement over ReAct (Table 3). However, based on the numbers in Table 1, the result of "Ours+ReAct" is the same as the result of "Ours", so there is no combination of the two. There is a difference between compatibility and combination. Here, I assume the authors want to show that their approach works with (is compatible with) various scoring methods (softmax, energy, ODIN, etc.).
Clarity, Quality, Novelty And Reproducibility
The paper follows the ReAct paper (Sun et al., 2021) quite closely, and therefore the novelty is just a slight variation on ReAct (instead of rectifying the activations of large units of the last FC layer, it sets the weights of all units to a small value).
Because the paper follows the ReAct paper so closely, many shortcuts are taken in terms of explanation, and it is often hard to understand the theoretical analysis and empirical evaluation without reading the ReAct paper. The paper should stand better on its own.
The quality of the writing is just OK. It still has many grammatical errors and typos.
The method should be easily reproducible. |
ICLR | Title
Bi-Level Dynamic Parameter Sharing among Individuals and Teams for Promoting Collaborations in Multi-Agent Reinforcement Learning
Abstract
Parameter sharing has greatly contributed to the success of multi-agent reinforcement learning in recent years. However, most existing parameter sharing mechanisms are static, and parameters are indiscriminately shared among individuals, ignoring the dynamic environments and different roles of multiple agents. In addition, although a single-level selective parameter sharing mechanism can promote the diversity of strategies, it is hard to establish complementary and cooperative relationships between agents. To address these issues, we propose a bi-level dynamic parameter sharing mechanism among individuals and teams for promoting effective collaborations (BDPS). Specifically, at the individual level, we define virtual dynamic roles based on the long-term cumulative advantages of agents and share parameters among agents in the same role. At the team level, we combine agents of different virtual roles and share parameters of agents in the same group. Through the joint efforts of these two levels, we achieve a dynamic balance between the individuality and commonality of agents, enabling agents to learn more complex and complementary collaborative relationships. We evaluate BDPS on a challenging set of StarCraft II micromanagement tasks. The experimental results show that our method outperforms the current state-of-the-art baselines, and we demonstrate the reliability of our proposed structure through ablation experiments.
1 INTRODUCTION
In many areas, collaborative Multi-Agent Reinforcement Learning (MARL) has broad application prospects, such as robot cluster control (Buşoniu et al., 2010), multi-vehicle auto-driving (Bhalla et al., 2020), and shop scheduling (Jiménez, 2012). In a multi-agent environment, an agent should observe the environment's dynamics and understand the learning policies of other agents to form good collaborations. Real-world scenarios usually involve a large number of agents with different identities or capabilities, which places higher demands on collaboration among agents. Therefore, how to scale MARL to many agents and how to promote stable, complementary cooperation among agents with different identities and capabilities are particularly important questions.
To address the large-scale agents issue, many collaborative MARL works adopting the centralized training paradigm use the full static parameter sharing mechanism (Gupta et al., 2017), which lets agents share the parameters of their policy networks, thus simplifying the algorithm structure and improving efficiency. This mechanism is effective because agents generally receive similar observations in the existing narrow and simple multi-agent environments. In our Google Research Football (GRF) (Kurach et al., 2020) experiments, we find that blindly applying the full parameter sharing mechanism does not improve the performance of algorithms, because the observations of different players differ greatly as they move. At the same time, because the full static parameter sharing mechanism ignores the identities and abilities of different agents, it constantly limits the diversity of agents' behavior policies (Li et al., 2021; Yang et al., 2022), which makes it difficult to form complementary and reliable cooperation between agents in complex scenarios.
Recently, in order to eliminate the disadvantages of full parameter sharing, a single-level selective parameter sharing mechanism has been proposed (Christianos et al., 2021; Wang et al., 2022): an encoder extracts deep features from agents' observations, and these features are clustered to decide which agents share parameters with one another.
Although the single-level selective parameter sharing mechanism can promote the diversity of agents' policies, it fragments the relationships between agents that do not share parameters at the same time, so that agents cannot establish complementary cooperative relationships on a wider scale. More importantly, designing an effective selector is the key to selective parameter sharing, especially for the single-level dynamic variant, which must complete the selection within a few rounds. Most methods only use agents' real-time observations, which ignores agents' histories and is not conducive to correctly mining their implicit identity characteristics. A football team, for example, carries out special training for players of different roles, such as shooting training for forwards and defensive training for defenders. However, winning a game requires not only specialized training but also coordination between players of different roles. That is, we not only need to share parameters among agents of the same role, but also need to combine agents of different identities to ensure that they can form robust and complementary collaboration on a larger scale.
To address these issues, in this paper, we propose a bi-level dynamic parameter sharing mechanism among individuals and teams (BDPS). An agent's advantage function expresses the benefit of taking an action relative to the average in the current state, and we consider that it better represents the agent's actual role under its current identity and grouping. In order to identify agents' roles more accurately, at the individual level, we compute agents' long-term advantage information as the key to virtual role identification, use variational autoencoders (VAE) (Kingma & Welling, 2014) to learn the distribution of agents' role characteristics, and obtain more accurate virtual roles directly by sampling from this distribution. To alleviate the fragmentation of relationships between agents of different virtual roles caused by single-level dynamic parameter sharing, we further use a graph attention network (GAT) (Velickovic et al., 2018) to learn the topological relationships between the roles obtained at the individual level, so as to combine agents of different identities at a higher level and over a broader range. Through this design, we achieve dynamic and selective parameter sharing at two levels, individual and team, promoting stable complementary collaboration among agents over a wider scope while preserving the diversity of agents' policies.
We test BDPS and algorithms using different parameter sharing mechanisms on the StarCraft II micromanagement environments (SMAC) (Samvelyan et al., 2019) and Google Research Football (GRF) (Kurach et al., 2020). The experimental results show that our method not only generally outperforms other methods with single-level selective parameter sharing and full parameter sharing on all super hard maps and four hard maps of SMAC, but also performs well in the GRF scenarios used. In addition, we carry out ablation experiments to verify the influence of different parameter sharing settings on the formation of complementary cooperation between agents, which fully demonstrates the reliability of our proposed method.
2 BACKGROUND
2.1 DECENTRALIZED PARTIALLY OBSERVABLE MARKOV DECISION PROCESS
A fully cooperative MARL task can usually be modeled as a decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016), represented by a tuple $M = \langle \mathcal{N}, \mathcal{S}, \mathcal{A}, R, P, \Omega, O, \gamma \rangle$, where $\mathcal{N}$ is a finite set of $n$ agents, $\mathcal{S}$ is a finite set of global states $s$, and $\gamma \in [0, 1)$. At each time step $t$, each agent $i \in \mathcal{N}$ receives a local observation $o_i \in \Omega$ according to the observation function $O(s, i)$, takes an action $a_i \in \mathcal{A}$ to form a joint action $\mathbf{a} \in \mathcal{A}^n$, and receives a shared global reward $r = R(s, \mathbf{a})$. Due to partial observability, each agent conditions its policy $\pi_i(a_i \mid \tau_i)$ on its own local action-observation history $\tau_i \in \mathcal{T} \equiv (\Omega \times \mathcal{A})^*$. Together, the agents aim to maximize the expected return, that is, to find a joint policy $\boldsymbol{\pi} = \langle \pi_1, \ldots, \pi_n \rangle$ maximizing the joint action-value function $Q^{\boldsymbol{\pi}} = \mathbb{E}_{s_{0:\infty}, \mathbf{a}_{0:\infty}} \left[ \sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, \mathbf{a}_0 = \mathbf{a}, \boldsymbol{\pi} \right]$.
2.2 CENTRALIZED TRAINING WITH DECENTRALIZED EXECUTION
In many MARL settings, partial observability problems can be addressed with the centralized training with decentralized execution (CTDE) paradigm (Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2020; Wang et al., 2021a; Son et al., 2019), which is currently the mainstream of MARL methods. Training is centralized, and after training, agents make decisions based only on their local observations and the trained policy network. Thus, the problems of non-stationary environments and large numbers of agents can both be overcome to a certain extent.
2.3 VARIATIONAL AUTOENCODER
Variational autoencoder (VAE) (Kingma & Welling, 2014) is a generative network model based on variational Bayesian (VB) inference (Fox & Roberts, 2012). Two probability density models are established: the inference network $q_\phi$ and the generative network $p_\xi$. The inference network performs variational inference on the original input data $x$ to produce a variational probability distribution over the hidden variables $z$; the generative network restores an approximate probability distribution of the original data from this distribution. The VAE model can thus be divided into two processes: the approximate inference of the posterior distribution of the hidden variables, $q_\phi(z \mid x)$, and the generation process of the conditional distribution of the generated variables, $p_\xi(z)\, p_\xi(\hat{x} \mid z)$. In order to make $q_\phi(z \mid x)$ approximately equal to the true posterior $p_\xi(z \mid x)$, the VAE uses the KL-divergence to measure the similarity between them:
$$D_{KL}\left( q_\phi(z \mid x) \,\|\, p_\xi(z \mid x) \right) = \log p_\xi(x) + \mathbb{E}_{q_\phi(z \mid x)} \left[ \log q_\phi(z \mid x) - \log p_\xi(x, z) \right] \qquad (1)$$
where the term $\log p_\xi(x)$ is called the log-evidence and is constant; the other term is the negative of the evidence lower bound (ELBO). The VAE with an additional GRU network is widely used in sequence anomaly detection (Su et al., 2019). In this paper, we use it to identify the identity feature distribution behind an agent's advantage-information sequence over a period of time, to better guide the agents in choosing appropriate parameter sharing partners.
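For reference, a minimal PyTorch sketch of the resulting negative-ELBO objective for a Gaussian VAE is given below (generic helper names of our own; this is the standard loss, not the exact architecture used later in this paper).

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I), keeping sampling differentiable.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_loss(x, x_hat, mu, logvar, lam=1.0):
    # Reconstruction term plus the closed-form KL( N(mu, sigma^2) || N(0, I) ).
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return recon + lam * kl
```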
2.4 GRAPH ATTENTION NETWORK
Graph attention network (GAT) (Velickovic et al., 2018) is a network architecture based on an attention mechanism, which learns different weights for different neighbors. To compute the output features of node $i$, GAT first trains a weight matrix $W$ shared by all nodes, then calculates the attention coefficients between node $i$ and its neighbors, and finally forms a weighted sum to obtain the output features $h'_i$ of node $i$:
$$h'_i = \sigma_1 \left( \sum_{j \in \mathcal{N}_i} \frac{\exp\left( \sigma_2\left( a^{\top} [W h_i \,\|\, W h_j] \right) \right)}{\sum_{k \in \mathcal{N}_i} \exp\left( \sigma_2\left( a^{\top} [W h_i \,\|\, W h_k] \right) \right)} \, W h_j \right) \qquad \text{(2, single-head)}$$
where $\sigma_1$ and $\sigma_2$ represent nonlinear functions, $\mathcal{N}_i$ represents the first-order neighborhood of node $i$, $a^{\top}$ represents the transpose of the weight vector $a$, and $\|$ is the concatenation operation. In recent years, MARL with communication has widely used GAT to determine the explicit communication targets of agents (Niu et al., 2021; Seraj et al., 2021). In this paper, we use GAT to establish the topological relationships between different virtual roles and further combine the agents under different virtual role mappings to form complementary and reliable cooperative relationships.
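A single-head GAT layer implementing Eq. 2 can be sketched as follows (dense adjacency for simplicity; all names and sizes are ours).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer following Eq. 2."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.randn(2 * out_dim))

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) with 1 where an edge exists.
        wh = self.W(h)                                   # (N, out_dim)
        n = wh.size(0)
        pairs = torch.cat([wh.unsqueeze(1).expand(n, n, -1),
                           wh.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.a)                 # raw scores e_ij
        e = e.masked_fill(adj == 0, float("-inf"))       # restrict to neighbors
        alpha = torch.softmax(e, dim=-1)                 # attention coefficients
        return F.elu(alpha @ wh)                         # weighted neighbor sum
```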
3 OUR METHOD
In this section, we introduce the proposed BDPS in detail. BDPS mainly comprises an individual level and a team level, as shown in Figure 1. We draw inspiration from people: a person may play multiple roles and can freely switch between them in different scenes, which promotes stable development, and the same is true for agents. Therefore, we choose to sacrifice some network parameters so that agents maintain multiple roles and groups at the same time, and we choose the most suitable role and group for each agent according to the role selector U and the group selector V.
First, we explain the relationship between agents and individual virtual roles and groups.
Definition 1 Given $n$ ($n \geq 2$) agents, $k$ ($2 \leq k \leq n$) roles and $m$ ($2 \leq m \leq k$) groups, we have the following mapping relationships: $f_U : \mathcal{N} \rightarrow \mathcal{K}$ and $f_V : \mathcal{K} \rightarrow \mathcal{M}$, where $\mathcal{K}$ and $\mathcal{M}$ represent finite sets of $k$ roles and $m$ groups, respectively.
From $f_V$, we can see that agents' groups depend on agents' roles, so we relate the update periods of groups and roles by $T_V = e \cdot T_U$, where $T_V$ is the update period of the groups, $T_U$ is the update period of the roles, and $e \in \mathbb{Z}^+$.
3.1 INDIVIDUAL LEVEL DESIGN
Different from existing methods that determine agents' roles based on real-time observations, we consider the impact of agents' long-term cumulative advantages on the definition of roles, because advantage information represents the current state of an agent better than observation information does. Moreover, unlike existing methods that only use an encoder to learn role characteristics for clustering, we learn the long-term advantages of agents through the VAE to obtain the distribution of role characteristics, because in many cases the role of an agent does not change just because it takes a single unusual action. At the individual level in Figure 1, we first maintain the role selector U to select appropriate roles $u_U$ for agents, and then share parameters among agents with the same role.
3.1.1 CHOOSE APPROPRIATE ROLES FOR AGENTS
Firstly, the role selector U inputs the pre-computed sequence of agents' advantage information $A_{t-T_U:t-1} = \{A_{t-T_U}, A_{t-T_U+1}, \ldots, A_{t-1}\}$ into a GRU network to capture the complex temporal dependencies of the advantage information within a virtual-role update period. Secondly, when the time meets the periodic condition for updating virtual roles, we take the hidden output $h_{t-1}$ at that moment as the agents' characteristics and input it into the encoder $E_\phi$ of the VAE. Finally, we obtain the agents' virtual roles $u_U$ from the $k$-dimensional Gaussian distribution output by the encoder $E_\phi$.
For agent $i$, its advantage information at time $t$ is obtained as $A^i_t = \max\{Q^i_t\} - Q^i_t$, where $Q^i_t$ denotes agent $i$'s local Q-function at time $t$. Unlike the classical advantage function $A = Q - V$, ours satisfies $A^i_t \geq 0$, which is convenient for the decoder $D_\xi$ when reconstructing the input information. When the virtual role $u^i_U$ of agent $i$ needs to be updated at time $t$, it is obtained from the VAE's bottleneck:
$$u^i_U = \arg\max\{z^i_U\} \qquad (3)$$
where $z_U = \mu_U + \sigma_U \cdot \epsilon_U$, $\epsilon_U \sim \mathcal{N}(0, I)$.
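Putting the pieces together, the role selector can be sketched as below. Treating each agent's advantage as a scalar per step and the layer sizes are our assumptions; the structure (GRU summary, VAE encoder, argmax of a reparameterized sample) follows Eq. 3.

```python
import torch
import torch.nn as nn

class RoleSelector(nn.Module):
    """Sketch of U: a GRU summarizes each agent's advantage sequence, a VAE
    encoder maps the summary to a k-dim Gaussian, and the role is the argmax
    of a reparameterized sample (Eq. 3)."""
    def __init__(self, k_roles, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.mu_head = nn.Linear(hidden, k_roles)
        self.logvar_head = nn.Linear(hidden, k_roles)

    def forward(self, adv_seq):
        # adv_seq: (n_agents, T_U, 1), advantages A_t = max{Q_t} - Q_t per step.
        _, h = self.gru(adv_seq)                 # h: (1, n_agents, hidden)
        mu = self.mu_head(h[-1])
        logvar = self.logvar_head(h[-1])
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z.argmax(dim=-1)                  # one role id per agent
```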
3.2 TEAM LEVEL DESIGN
The purpose of introducing grouping is to eliminate the fragmentation of relationships between agents caused by a single-level parameter sharing mechanism, and to establish cooperative relationships between agents of different roles over a wider range. As shown in Figure 1, the team level mainly consists of the group selector V and the grouping cooperative networks. The group selector V is realized through an additional reinforcement learning task, for which we introduce a GAT to find the correlations between different dynamic roles and promote a more comprehensive and complete role composition in the grouping results. In the parameter sharing part, we add the hidden information $h^U_t$ from the individual role level to give full play to the positive impact of individual identities on the team results.
3.2.1 COMBINE DIFFERENT ROLES TO ACHIEVE GROUPING
The group selector V takes the virtual roles $u_U$ as inputs. This input information is encoded into agents' feature vectors via a multi-layer perceptron (MLP). Subsequently, these feature vectors form a set $\{x^1_t, x^2_t, \ldots, x^N_t\}$ that is input into the GAT. We use the self-attention mechanism in the GAT to calculate the attention coefficient $e_{ij}$ between agents and use it as an essential basis for grouping:
$$e_{ij} = \mathrm{LeakyReLU}\left( a^{\top} \left[ W x^i_t \,\|\, W x^j_t \right] \right) \qquad (4)$$
where $i, j \in \mathcal{N}$. In our method, considering that agents receive similar observations and are located close to one another, and that there is no rigid separation between the roles of different agents, we allow each agent's attention to come from all surviving agents (including itself) at the current moment. To facilitate comparison of the attention coefficients between agents, we also normalize them with the softmax function.
$$x^{i\prime}_t = \overset{K}{\underset{k=1}{\Big\Vert}} \, \sigma \left( \sum_{j \in \mathcal{N}_i} \alpha^k_{ij} W^k x^j_t \right) \qquad (5)$$
where $k$ indexes the attention heads, whose number $K$ is kept consistent with the number of virtual roles we set; we hope to find how different agents' roles, as dominant factors, influence the formation of teams. For the other parts of Equation 5, $\mathcal{N}_i$ represents the first-order neighborhood of agent $i$, and $\alpha_{ij} = \mathrm{softmax}_j(e_{ij})$ represents the normalized attention coefficient indicating the importance of agent $j$'s features to agent $i$.
A new set of feature vectors $x'_t = \{x^{1\prime}_t, x^{2\prime}_t, \ldots, x^{N\prime}_t\}$ is obtained after the GAT and is input into a GRU network to obtain the hidden state that affects the grouping of agents. When the elapsed time since the last update equals the group update period $T_V$, we use the hidden state to output the group values and apply ε-greedy selection to choose groups $v_V$ for the agents.
3.3 OVERALL OBJECTIVES
3.3.1 GET THE LOCAL Q-FUNCTIONS REQUIRED BY AGENTS
In the previous sections, we used the role selector U at the individual level and the group selector V at the team level to map agents from actual individuals to virtual roles $u_U$ and groups $v_V$, respectively, and established the bi-level parameter sharing mechanism from the obtained role and group information. Although contributions from the individual-level hidden states are used when calculating the agents' local Q-functions at the team level, we should not completely discard the explicit contribution of the individual level. We therefore define the local Q-function as:
$$Q(\tau, a) = Q_U(\tau, a) + Q_V(\tau, a) \qquad (6)$$
3.3.2 TRAIN THE MODULES INCLUDED IN OUR METHOD
Starting from the individual level, we need to train the role selector U to select appropriate virtual roles for agents. The training objective of the role selector U consists of a reconstruction term and a KL-divergence term, and our goal is to minimize both:
$$\mathcal{L}_U(\xi, \phi; \mathcal{A}_U) = \mathcal{L}_{mse}\left( \mathcal{A}_U, \hat{\mathcal{A}}_U \right) + \lambda \, D_{KL}\left[ q_\phi(z_{l_U} \mid h_{l_U}) \,\|\, p_\xi(z_{l_U}) \right] \qquad (7)$$
where $\mathcal{L}_{mse}(\cdot)$ is the mean squared error term used to calculate the reconstruction loss of the advantage sequence, $l_U$ is the length of $\mathcal{A}_U$, $\lambda$ is a scaling factor, and $\hat{\mathcal{A}}_U$ is the reconstructed advantage sequence obtained from the decoder's output $\hat{h}_{l_U}$.
For the group selector V at the team level, we follow the QMIX (Rashid et al., 2018) and RODE (Wang et al., 2021b) methods, since we introduce an additional deep reinforcement learning task to generate groups for agents:
$$\mathcal{L}_V(\theta_\nu) = \sum_b \left( \sum_{\Delta t = 0}^{T_V - 1} r_{t + \Delta t} + \gamma \max_{v'_V} \bar{Q}^V_{tot}(\tau', \mathbf{a}', u'_U, s') - Q^V_{tot}(\tau, \mathbf{a}, u_U, s) \right)^2 \qquad (8)$$
where $b$ is the batch size of transitions sampled from the replay buffer.
In order for agents to use global rewards to learn local Q-functions, we input the local Q-values into the mixing network of QMIX (Rashid et al., 2018) to estimate the global action value $Q_{tot}(\tau, \mathbf{a})$:
$$\mathcal{L}_{TD}(\theta_\mu) = \sum_b \left[ \left( r + \gamma \max_{\mathbf{a}'} \bar{Q}_{tot}(\tau', \mathbf{a}', s') - Q_{tot}(\tau, \mathbf{a}, s) \right)^2 \right] \qquad (9)$$
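A minimal sketch of this TD objective, assuming the mixed values and bootstrapped targets have already been produced by the online and target mixing networks (helper names are ours):

```python
import torch

def qmix_td_loss(q_tot, q_tot_target_next, rewards, gamma=0.99):
    # One-step bootstrapped target from the target mixing network (Eq. 9);
    # q_tot_target_next is assumed to already take the max over next joint actions.
    target = rewards + gamma * q_tot_target_next.detach()
    return torch.mean((target - q_tot) ** 2)
```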
4 EXPERIMENTS
In this section, we demonstrate and evaluate the advantages of our proposed BDPS on challenging tasks in the StarCraft II micromanagement environments (SMAC) and Google Research Football (GRF). We compare our method not only with QMIX (Rashid et al., 2018) under both the full parameter sharing mechanism and no parameter sharing, but also with several baseline methods aimed at promoting the diversity of agents' policies, such as CDS (Li et al., 2021), EOI (Jiang & Lu, 2021), and RODE (Wang et al., 2021b). Finally, we ablate our method in SMAC, verifying the true utility of its components.
4.1 PERFORMANCE ON GOOGLE RESEARCH FOOTBALL (DEC-POMDP)
In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis. We provide an observation space setting for agents in the Dec-POMDP version. The observation space of agents can dynamically change with their motion vectors. See Appendix A.2 for details.
First, we compare our method and the baseline methods on the Dec-POMDP version of GRF we provide. Compared with the experimental scenario provided by CDS (Li et al., 2021), we use the algorithm to control all agents in the left team, use GRF's built-in scoring and checkpoints reward settings, and configure the left-team agents so that the system's built-in AI cannot move them when they are not touching the ball.
As shown in Figure 2, we select two scenarios, academy 3 vs 1 with keeper and academy pass and shoot with keeper, for comparison, and we report both the actual reward received by the agents and the goal score in each scenario. In general, our method achieves better results than the other baseline algorithms in both scenarios. Among the baselines, the full parameter sharing version of QMIX is better than the non-sharing version, and the other improved baselines built on fully shared QMIX can also score goals, but the improvement is not obvious.
As we mentioned in Section 1, the full parameter sharing mechanism ignores the identities and capabilities of agents, which deprives them of diverse behavior policies. As a result, full parameter sharing cannot improve the algorithm's effectiveness when the agents' observations differ substantially. Single-layer dynamic selective parameter sharing, in turn, hard-cuts the relationship between agents that do not share parameters at the same time: although the selective sharing mechanism alone can achieve better results, the agents cannot form a sufficiently stable cooperative relationship, so performance fluctuates significantly. See Appendix C for the specific results.
In both scenarios, we can see that RODE differentiates the roles of the agents by limiting their action spaces, but this limitation also prevents the agents from improving further as training proceeds. From the experimental results, we find that the baseline algorithms drop the ball in these two academy scenarios, indicating that their agents do not form a sufficiently complementary collaborative relationship.
4.2 PERFORMANCE ON STARCRAFT II
As can be seen from Figure 3, our method is superior to other baseline methods on these maps, especially in maps with a larger number of agents, where our method is able to maintain its advantage. Of course, EOI and RODE also show good stability in these maps, and the full parameter sharing version of QMIX is still better than the no parameter sharing version.
Specifically, our method has more advantages than CDS, which emphasizes the balance between the personalities and commonalities of agents. As we mentioned, exploring the dynamic balance between individual personalities and commonalities is essential, but this balance is difficult to define, so we do not introduce weighting factors in Equation 6 to adjust the relative importance of the individual and team levels. The balance is dynamic and emerges from the dynamic parameter sharing process we design; we describe it in detail in the ablation experiments in Section 4.3.
4.3 ABLATION STUDY
In this section, we conduct ablation studies to understand the actual utility of each level in our bi-level dynamic parameter sharing mechanism. In addition to reporting win rates on SMAC maps, we calculate the entropy difference between the individual and team levels to quantify the advantages of the different components.
Figure 4: Comparison of different parameter sharing mechanisms in 5m vs 6m.
Figure 5: Comparison of different virtual roles in MMM2 (Only individual level).
We first use the homogeneous-agent map 5m vs 6m in SMAC for ablation, to analyze the advantages of dynamic parameter sharing over full and no parameter sharing. As shown in Figure 4, the methods that apply parameter sharing have clear advantages over the method without it, and our method also outperforms QMIX's full static parameter sharing. Specifically, single-level dynamic parameter sharing holds only a weak advantage over full static sharing, while our bi-level dynamic parameter sharing outperforms both the single-level dynamic and the full static variants, which is exactly the goal of our proposed method.
As shown in Figure 5, we also compare the impact of different numbers of virtual roles on the agents' collaboration in the heterogeneous-agent map MMM2. From the curves, we find that the win rate on MMM2 first increases and then decreases as the number of virtual roles grows, indicating that blindly increasing the number of virtual roles can adversely affect the agents' collaboration.
To quantify the difference between our single-level and bi-level dynamic parameter sharing, we analyze it from an information-theoretic perspective and introduce the evaluation indicator entropy difference. Entropy quantifies uncertainty; we characterize the agents' behavior at the two levels by comparing the entropies of their action-value distributions:
$\Delta H = \frac{1}{N} \sum_{i=1}^{N} \Big( \sum_{a_i} p^i_V \log\frac{1}{p^i_V} - \sum_{a_i} p^i_U \log\frac{1}{p^i_U} \Big)$, (10)

where $p^i_V = \mathrm{softmax}\, Q^i_V(\tau^i, a^i_{t-1}, u^i_U)$ and $p^i_U = \mathrm{softmax}\, Q^i_U(\tau^i, a^i_{t-1})$.
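For reference, the indicator can be computed directly from the two heads' action values; this is a straightforward sketch of Equation 10 with illustrative names:

```python
import torch

def entropy_difference(q_v, q_u):
    """Sketch of Eq. 10: mean entropy gap between the team-level and
    individual-level action distributions.

    q_v: (N, n_actions) team-level action values Q^i_V for each agent i.
    q_u: (N, n_actions) individual-level action values Q^i_U for each agent i.
    """
    p_v = torch.softmax(q_v, dim=-1)
    p_u = torch.softmax(q_u, dim=-1)
    h_v = -(p_v * p_v.log()).sum(dim=-1)  # sum_a p log(1/p), team level
    h_u = -(p_u * p_u.log()).sum(dim=-1)  # individual level
    return (h_v - h_u).mean()             # average over the N agents
```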
As shown in Figure 6, in 5m vs 6m the information entropy at the team level is always smaller than at the individual level, and the gap keeps growing. This change in entropy shows that the policies agents form at the team level are more ordered than those formed at the individual level alone, confirming that our team-level design can find commonalities among agents with different virtual roles and establish correct collaborations between them. For MMM2 in Figure 6, the conclusion still holds: whether k = 3 or k = 5, the entropy difference between the team and individual levels keeps widening over training, with the team-level entropy steadily decreasing relative to the individual-level entropy. This phenomenon is consistent with the win rates in Figure 4 and Figure 5.
5 RELATED WORK
5.1 PARAMETER SHARING
Parameter sharing plays an important role in MARL. Tan (1993) first studied the positive role of “sharing” in promoting collaboration among agents in classical reinforcement learning algorithms. Gupta et al. (2017) proposed a parameter sharing variant of single-agent DRL algorithms, introducing the parameter sharing mechanism into homogeneous MARL. Parameter sharing has since been widely used as an implementation detail of homogeneous multi-agent algorithms such as QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and Qatten (Yang et al., 2020). Recently, Terry et al. (2020) extended parameter sharing to heterogeneous-agent algorithms via a padding-based method, again demonstrating its importance for multi-agent algorithms. At the same time, recent work has pointed out that full parameter sharing tends to make agents' behavior policies overly similar, and Christianos et al. (2021) proposed a selective parameter sharing mechanism to eliminate this limitation.
5.2 INDIVIDUALS AND TEAMS
Focusing on the expected development of both individuals and teams not only helps agents maintain a variety of policies but also helps them form more stable collaborations. At the individual level, Wang et al. (2020; 2021b) focus on discovering the character traits behind the agents, Jiang & Lu (2021) studies the identifiability of agent trajectories with fixed identities, and Du et al. (2019) proposes internal rewards to stimulate diverse behaviors among agents. At the team level, Iqbal et al. (2021) randomly partitions agents into related and unrelated groups, allowing agents to explore only specific entities in their environment; Wang et al. (2022) implements dynamic grouping by extracting the agents' latent intentions as tags; and Li et al. (2021) divides the agents' value functions into shared and unshared parts to capture both personality and commonality. In this paper, we focus on the impact of both the individual and team levels on the complementary collaboration of agents. Our approach does not simply overlap the two levels; instead, it extends and combines the individual identities acquired by the agents.
6 CONCLUSION AND FUTURE WORK
In this paper, we proposed BDPS, a novel bi-level dynamic parameter sharing mechanism in MARL. By maintaining a role selector and a group selector, BDPS provides a new solution for agents to select the partners for parameter sharing in a timely and dynamic manner at both individual and team levels. And we integrate the roles and groups of agents into a whole to achieve a dynamic balance between their personalities and commonalities. Our experiments on SMAC and GRF show that BDPS can significantly facilitate the formation of complementary and reliable collaboration between agents.
Additionally, although agents may occupy any role or group over time, they cannot explicitly reuse the knowledge they acquired while in other roles or groups. How to make explicit use of previously learned role knowledge, and of other similar knowledge, is therefore an issue we need to explore further. We will present this paradigm of knowledge flow across roles, groups, and agents in future work.
A MULTI-AGENT ENVIRONMENTS
In this paper, we use three multi-agent environments to conduct verification experiments: the StarCraft II Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019), Google Research Football (GRF) (Kurach et al., 2020), and Level-Based Foraging (LBF) (Papoudakis et al., 2021; Albrecht & Ramamoorthy, 2015; Albrecht & Stone, 2019).
A.1 SMAC
The StarCraft Multi-Agent Challenge is a fully cooperative, partially observable set of multi-agent tasks. The environment implements various micromanagement tasks based on the popular real-time strategy game StarCraft II. Each task is a specific battle scenario in which a group of agents, each controlling a single unit, fights an army controlled by StarCraft II's built-in AI.
Compared with the LBF, SMAC provides more abundant battle scenes for agents. We conducted comparative experiments with baseline methods on hard and super hard maps. The characteristics of relevant maps are shown in Table 1.
A.2 GRF

In Google Research Football (GRF) tasks, agents need to cooperate according to the rules of football matches to win games or complete related training tasks. GRF offers two types of reward settings: one rewards goals only (scoring), while the other additionally grants a reward whenever an agent reaches a specific coordinate point (checkpoints); of the two, the first is the sparser. In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis, so we provide an observation-space setting for the agents in our Dec-POMDP version. The agents' observation space changes dynamically with their motion vectors, as shown in Figure 7. Considering the size of the field, we set the agents' visual distance to 0.84 and their forward visual angle to 200°. The agents can observe the position and direction of the ball, and within their field of view we also allow them to observe the positions of their teammates and opponents.
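A minimal sketch of this visibility test follows; it encodes our reading of the setup (a distance cutoff of 0.84 and a 200° forward cone), and it assumes the agent's facing direction is available as a unit vector, which the text does not specify:

```python
import math

def visible(agent_pos, agent_dir, target_pos, view_dist=0.84, view_angle_deg=200.0):
    """Check whether a target point lies inside an agent's viewing cone
    (sketch of the Dec-POMDP observation rule in Appendix A.2)."""
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist > view_dist:
        return False
    if dist == 0.0:
        return True  # the agent always sees its own position
    # Angle between the facing direction (assumed unit length) and the target.
    cos_theta = (agent_dir[0] * dx + agent_dir[1] * dy) / dist
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.degrees(math.acos(cos_theta)) <= view_angle_deg / 2.0
```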
In our experiments, we control all players on the left side and set their movement state to lazy (with no built-in AI intervention when they are not touching the ball), while the system's built-in AI fully controls all players on the right side.
A.3 LBF
LBF is a multi-agent collection task built on a grid world, where each agent and each item is assigned a level. An item is successfully collected only if the levels of the cooperating agents sum to at least the level of the item; when an item is collected, the participating agents receive a reward determined by the item's level. LBF provides a sparser reward environment and stricter collaboration constraints than SMAC, placing higher demands on the effectiveness of multi-agent algorithms.
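The collection rule reduces to a simple level-sum check; the sketch below states it explicitly (how the reward is split among participants is left abstract here, since the text only ties it to the item's level):

```python
def can_collect(agent_levels, item_level):
    """LBF collection rule (Appendix A.3): the cooperating agents' levels
    must sum to at least the item's level."""
    return sum(agent_levels) >= item_level
```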
The difficulty of the LBF can be adjusted by the number of agents, the size of the grid world, the type of items, and the hard collaboration constraints. In this paper, we select 15×15-4p-5f as a supplement to the ablation experiment to verify whether the different components of our method are still valid in a more challenging collaborative setting. The 15×15-4p-5f scene indicates that 4 agents in a 15×15 grid world are involved in collecting 5 items.
B CASE STUDY IN LBF
As discussed in Appendix A.3, the level constraints and the sparse reward environment pose a greater challenge for agents to establish complementary and reliable collaborations. To verify that our proposed dynamic parameter sharing mechanism helps complementary relationships form between agents, we evaluate different parameter sharing mechanisms on 15×15-4p-5f: single-level dynamic parameter sharing (ours with only the individual level), bi-level dynamic parameter sharing (ours), full static parameter sharing (classic QMIX), and no parameter sharing at all (QMIX with non-sharing).
As shown in Figure 9, our method still yields the best return, demonstrating that our approach promotes more collaboration among agents. Although single-level dynamic parameter sharing can promote collaboration, it cannot always form complementary collaborative relationships. The result again confirms that forming complementary collaborations among agents requires a dynamic balance of personality and commonality, which is hidden in dynamic parameter sharing at both the individual and team levels.
C ABLATION STUDY IN GRF
As shown in Figure 8, we additionally verified the performance of the single-layer dynamic parameter sharing mechanism (only individual level) in two GRF scenarios.
Unlike the experimental results on SMAC and LBF, single-layer dynamic parameter sharing outperforms fully static QMIX here, and in academy 3 vs 1 with keeper it can even match bi-level parameter sharing. This is because agents in GRF cover a much more extensive range of the field than in the other two environments, so each agent's observation space is more distinctive; this is consistent with the finding that full parameter sharing is most effective when agents have similar observation spaces (Christianos et al., 2021).
We need to point out that the single-layer dynamic parameter sharing mechanism is not as stable in the two scenarios as our bi-level mechanism. On the one hand, changes in sharing partners destabilize the agents' policies; on the other hand, agents that do not share parameters cannot establish complementary cooperative relationships. This again shows that considering both the individual and team levels, as our approach does, is the right choice.
D THE EFFECT OF THE NUMBER OF VIRTUAL ROLES
During the experiments, we found that the number of virtual roles affects the performance of the entire algorithm. As shown in Figure 11, in the SMAC environment we consider only the impact of different numbers of virtual roles on the experimental results.
As we analyzed in the text, setting too many virtual roles will not improve the performance of the algorithm.
E FULL EXPERIMENTAL RESULTS IN SMAC
F HYPERPARAMETER SETTINGS
In our method, we design a role selector and a group selector to guide agents in choosing appropriate partners for dynamic parameter sharing at the individual and team levels. The core of the role selector is a gated recurrent unit variational autoencoder (GRU-VAE); besides the GRU unit, the encoder and decoder of the VAE are composed of simple linear layers and activation functions. The group selector consists of a graph attention network with a 128-dimensional output state, together with the corresponding network layers (a linear layer and a GRU unit with a 64-dimensional hidden state). At the individual level, we obtain the agents' roles by applying the softmax function at the bottleneck of the GRU-VAE. For the grouping of agents at the team level, since grouping is an additionally designed reinforcement learning task, we use the same ε-greedy method as for the agents' action selection. We note that, apart from the newly designed components, we keep the structure, parameters, and optimization methods of the rest of our method as consistent as possible with the baseline QMIX, so that the ablation experiments fully demonstrate that the real utility of the different components comes entirely from our design.
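As a structural illustration, here is a hedged sketch of such a GRU-VAE role selector; only the overall layout follows the text above, while the hidden sizes, the per-agent batching, and the use of a standard reparameterized Gaussian bottleneck are our assumptions:

```python
import torch
import torch.nn as nn

class RoleSelector(nn.Module):
    """Sketch of the GRU-VAE role selector (Appendix F): encode an advantage
    sequence, sample the bottleneck, and take the argmax as the virtual role."""

    def __init__(self, hidden_dim=64, k_roles=3):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_dim, batch_first=True)
        self.enc_mu = nn.Linear(hidden_dim, k_roles)
        self.enc_logvar = nn.Linear(hidden_dim, k_roles)
        self.dec = nn.Sequential(nn.Linear(k_roles, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))

    def forward(self, adv_seq):
        # adv_seq: (n_agents, T_U, 1) per-agent advantage sequences for one period.
        _, h = self.gru(adv_seq)                              # (1, n_agents, hidden)
        h = h.squeeze(0)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        roles = z.argmax(dim=-1)                              # u_U = argmax{z_U}
        return roles, mu, logvar, self.dec(z)                 # dec(z) feeds the Eq. 7 loss
```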
G EXPERIMENTAL DETAILS
For our method and the baseline algorithms, we test 2M and 5M steps on all hard maps and all super hard maps of SMAC, respectively, using five random seeds. Because of the ε-greedy method, we set ε to be linearly annealed from 1.0 to 0.05 over 70K time steps on all hard maps and three super hard maps, and keep it constant for the rest of training. For the remaining super hard map 6h vs 8z, we set the annealing time to 500K.
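For reference, the annealing schedule amounts to a simple linear ramp; a minimal sketch (parameter names are ours):

```python
def epsilon(t, start=1.0, end=0.05, anneal_steps=70_000):
    """Linear epsilon annealing used for the eps-greedy selectors (Appendix G);
    after anneal_steps the value stays at `end`."""
    if t >= anneal_steps:
        return end
    return start + (end - start) * t / anneal_steps
```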
In GRF, we test 10M and 8M steps on academy 3 vs 1 with keeper and academy pass and shoot with keeper, respectively, using three random seeds, and set ε to be linearly annealed from 1.0 to 0.05 over 100K time steps in both scenarios. The partially observable space we designed contains the agent's own location information; the position, direction, and possession of the football; and the player and opponent information within the agent's visual range.
To test the effectiveness of our method's components in the 15×15-4p-5f scenario of LBF, we set the number of test steps to 20M and the ε annealing time to 200K. The ε settings remain the same across our method and the baseline algorithms.
Regarding the setup of virtual roles and groups in our method, to ensure that the collaborative relationships between agents are sufficiently complementary, we require the number of agents n, the number of virtual roles k, and the number of groups m to satisfy n ≥ k ≥ m ≥ 2. As the plots of virtual-role count versus win rate in our ablation experiments show, the number of virtual roles cannot be increased blindly: doing so not only adds a vast number of parameters but also makes complementary collaborations between agents impossible. The numbers of virtual roles we set for SMAC and LBF are shown in Table 2. For the agents' role and group update periods, we set $T_V = e \cdot T_U$, and to guarantee that roles and groups are updated normally within each episode, we require $T_U, T_V < \mathrm{len}(episode)$. We choose the update periods based on the average episode length; in general, we use update periods of 5, 7, 8, 10, or 14 with e = 1 or 2.
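The resulting update schedule can be stated in a few lines; the sketch below uses example values and hypothetical names:

```python
def needs_update(t, T_U, e=2):
    """Update schedule from Appendix G: roles refresh every T_U steps and
    groups every T_V = e * T_U steps (values here are examples)."""
    T_V = e * T_U
    return {"role": t % T_U == 0, "group": t % T_V == 0}
```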
Our experiments are run on Ubuntu 18.04 with an NVIDIA RTX 3090 GPU, an Intel i9-12900K CPU, and 128 GB of memory. In addition, the SMAC version is 4.10, the experimental version of 15×15-4p-5f in LBF is v2, and the GRF version is 2.10. We will open-source the code of our algorithm and experimental environments in due course. | 1. What is the main contribution of the paper in MARL?
2. What are the strengths and weaknesses of the proposed Bi-level Dynamic Parameter Sharing mechanism?
3. Do you have any concerns regarding the motivation and explanation of the method?
4. How could the writing and quality of the paper be improved?
5. Are there any typos or errors in the paper that need correction?
6. What are your thoughts on the experimental evaluations in SMAC, and do you agree that the algorithm could be verified in more complex environments?
7. Do you think the references provided are relevant and useful for understanding the paper's content?
8. Can you suggest any improvements for the paper's clarity, quality, novelty, and reproducibility? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a Bi-level Dynamic Parameter Sharing mechanism (BDPS) in MARL. The core idea is to assign different agents to different roles based on their long-term cumulative advantages and to group multiple roles into a set of teams via a Graph Attention Network. The authors verify the proposed method on some typical MARL benchmarks.
Strengths And Weaknesses
Weaknesses
The motivation is not clear enough. From the paper, I do not see why using the advantage function to represent the roles of agents is better and why incorporating a two-level selective parameter sharing is important.
The writing of the paper could be significantly improved. Lots of sentences are too long and many sentences are unclear, e.g., "we add additional implicit information $h^U_t$ from the individual role level to give full play to the positive impact of the individual identities on the team results."
Two important single-layer selective parameter sharing baselines ([1] and [2]) are mentioned in the introduction but are missing in the experiments.
The experimental evaluations in SMAC are not convincing. Recently, [3] and [4] have verified that an optimized QMIX (completely sharing the parameters among all agents) can achieve 100% win rates on all Easy, Hard and Super Hard scenarios of SMAC. Therefore, SMAC may not be a good testbed to validate the benefits of the dynamic parameter sharing mechanism. The authors could verify the algorithm in more complex environments.
minor:
Typos:
"And according to the role selector and the U group selector, choose the most suitable role and group for the agents V."
In equation (6), the local Q-function improperly takes the joint action a as input.
Reference
[1] Scaling multi-agent reinforcement learning with selective parameter sharing.
[2] A cooperative multi-agent reinforcement learning algorithm based on dynamic self-selection parameters sharing.
[3] Rethinking the implementation tricks and monotonicity constraint in cooperative multi-agent reinforcement learning
[4] API: Boosting Multi-Agent Reinforcement Learning via Agent-Permutation-Invariant Networks
Clarity, Quality, Novelty And Reproducibility
Both the writing and the quality of the paper should be further improved.
The code is not attached in the Appendix. |
|| k=1 σ ∑ j∈Ni αkijW kxjt , (5) where k represents the number of attention heads, which is consistent with the number of virtual roles we set. We hope to find the influence of different agents’ roles as the dominant factors on the formation of teams by agents. For other parts in Equation 5, Ni represents the first-order neighborhood of agent i, and αij = softmaxj (eij) represents the normalized attention coefficient that indicates the importance of agent j’s features to agent i.
A new set of feature vectors x′t = { x1t ′ ,x2t ′ , ...,xNt ′ }
is obtained after the GAT, which is input into the GRU network to get the hidden state affecting the grouping of agents. When the time t increment is equal to the period for agents to update groups TV , we use the hidden state to output the groups’ value and use the ε−greedy to select groups vV for agents.
3.3 OVERALL OBJECTIVES
3.3.1 GET THE LOCAL Q-FUNCTIONS REQUIRED BY AGENTS
In previous sections, we use the role selector U at the individual level and the group selector V at the team level to map agents from actual individuals to virtual roles uU and groups vV , respectively. The bi-level parameter sharing mechanism is also established for agents using roles and groups information obtained. Although contributions from hidden states at the individual level are used in calculating the local Q-functions of agents at the team level, we still may not completely discard explicit efforts at the individual level of agents. So we give the local Q-function:
Q (τ ,a) = QU (τ ,a) +QV (τ ,a) . (6)
3.3.2 TRAIN THE MODULES INCLUDED IN OUR METHOD
Starting from the individual level, we need to train the role selector U to select appropriate virtual roles for agents. The training objectives of the Role Selector U include the corrected reconstruction item and the KL-divergence item, and our goal is to minimize these two items:
LU (ξ, ϕ;AU ) = Lmse ( AU , ÂU ) + λDKL [qϕ (zlU |hlU ) ||pξ (zlU )] , (7)
where Lmse (·) is the mean squared error term for calculate the reconstruction loss of the advantage sequence, lU is the length of the AU , λ is a scaling factor and ÂU is the agents’ advantage information reconstruction sequence, which is obtained by the Decoder’s output ĥlU .
For the group selector V at the team level, we used the QMIX (Rashid et al., 2018) and RODE (Wang et al., 2021b) methods because we introduced an additional deep reinforcement learning task to generate groups for agents:
LV (θν) = ∑ b (TV −1∑ ∆t=0 rt+∆t + γmaxv′V Q̄ V tot (τ ′,a′,u′U , s ′)−QVtot (τ ,a,uU , s) )2 , (8) where b is the batch size of transitions sampled from the replay buffer.
In order for agents to use global rewards to learn local Q-functions, we input the local Q-value into the mixing network of QMIX (Rashid et al., 2018) again to estimate the global action value Qtot (τ ,a):
LTD(θµ) = ∑ b [( r + γmaxa′Q̄tot (τ ′,a′, s′)−Qtot (τ ,a, s) )2] . (9)
4 EXPERIMENTS
In this section, we demonstrate and evaluate the advantages of our proposed BDPS using the challenging tasks in the StarCraft II micromanagement enviroments (SMAC) and Google Research Football (GRF). We not only compare our proposed method with QMIX (Rashid et al., 2018) which adopts full parameter sharing mechanism and no parameter sharing mechanism, but also further compare with several baseline methods aimed at promoting the diversity of agents’ policies, such as CDS (Li et al., 2021), EOI (Jiang & Lu, 2021) and RODE(Wang et al., 2021b). Finally, we ablate our method in SMAC, verifying the true utility of the components in our method.
4.1 PERFORMANCE ON GOOGLE RESEARCH FOOTBALL (DEC-POMDP)
In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis. We provide an observation space setting for agents in the Dec-POMDP version. The observation space of agents can dynamically change with their motion vectors. See Appendix A.2 for details.
First, we compare our method and baseline methods in the GRF of the Dec-POMDP version we provide. Compared with the experimental scenario provided by CDS (Li et al., 2021), we use the algorithm to control all agents in the left team, use the built-in scoring and checkpoints reward settings of GRF, and set that the agents in the left team cannot be moved by the built-in AI of the system when they do not touch the ball.
As shown in Figure 2, we select two scenarios academy 3 vs 1 with keeper and academy pass and shoot with keeper for comparison. We compare the actual reward received by the agents with the goal score in both scenarios. In general, our method is able to achieve better results than other baseline algorithms in both scenarios. Among them, the full parameter sharing version of QMIX is better than the non-sharing version, and other improved baseline algorithms based on full parameter sharing QMIX can also achieve goals, but the effect is not obvious.
As we mentioned in Section 1, the full parameter sharing mechanism ignores the identities and capabilities of agents, which makes them lose the diversity of behavior policies. As a result, full parameter sharing cannot improve the effectiveness of the algorithm when agents observe scenes with large differences in information. For the single-layer dynamic selective parameter sharing, due to the hard cutting of the relationship between agents that do not participate in parameter sharing, although better results can be achieved by relying on the selective parameter sharing mechanism, the agents cannot form a sufficiently stable cooperative relationship, which makes the effect fluctuate significantly. See Appendix C for the specific results.
In both scenarios, we can see that RODE differentiated the roles of the agents by limiting their action space, but this limitation also prevented the agents from achieving further training results. Through the experimental results, we can find that these baseline algorithms drop the ball in these two academy-scenarios, indicating that the agents do not form a sufficiently complementary collaborative relationship.
4.2 PERFORMANCE ON STARCRAFT II
As can be seen from Figure 3, our method is superior to other baseline methods on these maps, especially in maps with a larger number of agents, where our method is able to maintain its advantage. Of course, EOI and RODE also show good stability in these maps, and the full parameter sharing version of QMIX is still better than the no parameter sharing version.
Specifically, our method has more advantages than the CDS, which emphasizes the balance of personalities and commonalities of agents. As we mentioned, exploring the dynamic balance between individual personalities and commonalities is essential. Still, this balance is difficult to define, so we do not give factors in Equation 6 that can adjust the different importance at the individual and team levels. This balance is dynamic and hides in the process of sharing dynamic parameters we design. We will describe it in detail in the ablation experiment in Section 4.3.
4.3 ABLATION STUDY
In this section, we conduct ablation studies to understand the actual utility of each level in our bilevel dynamic parameter sharing mechanism. In addition to showing the winning rate of maps in SMAC, we calculate the entropy difference between individuals and teams to quantify the advantages of different components.
Figure 4: Comparison of different parameter sharing mechanisms in 5m vs 6m.
Figure 5: Comparison of different virtual roles in MMM2 (Only individual level).
We first use the homogeneous agents’ map 5m vs 6m in SMAC to conduct ablation research to analyze the advantages of dynamic parameter sharing over full and no parameter sharing. As shown in Figure 4, we can see that the method of applying parameter sharing has apparent advantages over the method without parameter sharing. Of course, our method is better than QMIX’s full static parameter sharing. Specifically, the single-level dynamic parameter sharing has a weak advantage over the full static parameter sharing. Our bi-level dynamic parameter sharing is more advantageous than the single-level dynamic and full static, which is exactly the goal of our proposed method.
As shown in Figure 5, we also compare the impact of different numbers of virtual roles on the collaboration of agents in the heterogeneous agents’ map MMM2. From the curve in the figure, we can find that the winning rate in the MMM2 map increases first and then decreases with the number of virtual roles, which indicates that blindly increasing the number of virtual roles can adversely affect the collaboration of agents.
To quantify the difference between our single-level and bi-level dynamic parameter sharing, we consider analyzing from the perspective of information theory. First, we introduce the evaluation indicator: Entropy difference.
Entropy describes uncertainty. We describe the capabilities of agents in different layers by comparing the entropy of action value between the two layers.
∆H = 1
N N∑ i=1 (∑ ai piV · log 1 piV − ∑ ai piU · log 1 piU ) , (10)
where, piV = softmaxQ i V ( τ i, ait−1, u i U ) and piU = softmaxQ i U ( τ i, ait−1 ) .
As shown in Figure 6, we can see that in 5m vs 6m, the information entropy at the team level is always smaller than that at the individual level, and the difference between the two is increasing. The change of entropy shows that the agents at the team level are more orderly than strategies
formed by agents only at the individual level, which also confirms that our team-level design can find commonalities among agents with different virtual roles to establish correct collaborations between agents. For MMM2 in Figure 6, our conclusion is still valid. No matter whether k = 3 or k = 5, the entropy difference between the team level and the individual level is constantly expanding with the training, and the entropy of the team level is constantly developing in a direction smaller than that of the individual level. And this phenomenon is consistent with our winning rate in Figure 4 and Figure 5.
5 RELATED WORK
5.1 PARAMETER SHARING
Parameter sharing plays an important role in MARL. Tan (1993) first studied the positive role of “sharing” in promoting collaborative agents in classical reinforcement learning algorithms. Gupta et al. (2017) proposed a parameter sharing variant of the single agent DRL algorithms, which introduced the parameter sharing mechanism into homogeneous MARL. Obviously, parameter sharing has been widely used as the implementation details of homogeneous multi-agent algorithms, such as QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), Qatten (Yang et al., 2020), etc. Recently, Terry et al. (2020) applied parameter sharing extensions to heterogeneous agents algorithm by padding based method, demonstrating again the important use of parameter sharing for multiagent algorithms. Of course, recent works have pointed out that full parameter sharing mechanism tend to make the behavior strategies of agents more same, Christianos et al. (2021) proposed a selective parameter sharing mechanism to eliminate the limitations of full parameter sharing.
5.2 INDIVIDUALS AND TEAMS
Focusing on the expected development of individuals and teams will not only help agents to maintain a variety of policies but also help them form a more stable collaboration. At the individual level, Wang et al. (2020; 2021b) focus on discovering the character traits behind the agents. Jiang & Lu (2021) studies the identifiability of agent trajectories and fixed identities. Du et al. (2019) proposes the use of internal rewards to stimulate diverse behaviors among agents. At the team level, Iqbal et al. (2021) randomly groups agents into related and unrelated groups, allowing agents to explore only specific entities in their environment. Wang et al. (2022) implements a dynamic grouping method by extracting the potential intentions of agents as tags. Li et al. (2021) divides agents’ value functions into shared and unshared parts to focus on the agents’ personality and commonality. In this paper, we focus on the impact of both individual and team levels on the complementary collaboration of agents. Of course, our approach is not to overlap the two layers of individuals and teams but to fully extend and combine the individual identities acquired by the agents.
6 CONCLUSION AND FUTURE WORK
In this paper, we proposed BDPS, a novel bi-level dynamic parameter sharing mechanism in MARL. By maintaining a role selector and a group selector, BDPS provides a new solution for agents to select the partners for parameter sharing in a timely and dynamic manner at both individual and team levels. And we integrate the roles and groups of agents into a whole to achieve a dynamic balance between their personalities and commonalities. Our experiments on SMAC and GRF show that BDPS can significantly facilitate the formation of complementary and reliable collaboration between agents.
Additionally, although we defined that agents have the opportunity to be in any role or grouping, they cannot explicitly utilize past knowledge when they were in other roles or groups. Therefore, how to make explicit use of the role knowledge they have learned and how to make use of other similar knowledge are issues that we need to explore further. We will present this paradigm of knowledge flow across roles, groups, and agents in future work.
A MULTI-AGENT ENVIRONMENTS
In this paper, we use three multi-agent environments, Starcraft II Multi-agent Challenge (SMAC)(Samvelyan et al., 2019), Google Research Football (GRF) (Kurach et al., 2020) and LevelBased Foraging (LBF) Papoudakis et al. (2021); Albrecht & Ramamoorthy (2015); Albrecht & Stone (2019), to conduct verification experiments.
A.1 SMAC
The starcraft multi-agent challenge is a fully collaborative, partially observable set of multiagent tasks. This environment implements various micro-management tasks based on the popular realtime strategy game StarCraft II. Each mission is a specific battle scenario in which a group of agents, each of whom controls a single unit, fight against an army controlled by the built-in AI in StarCraft II.
Compared with the LBF, SMAC provides more abundant battle scenes for agents. We conducted comparative experiments with baseline methods on hard and super hard maps. The characteristics of relevant maps are shown in Table 1.
In Google Research Football (GRF) tasks, agents need to cooperate according to the rules of football matches to win games or related training tasks. The reward settings of GRF can be divided into two types. One is that only goals can be rewarded (scoring), and the other is that when an agent reaches a specific coordinate point (checkpoints), it will get a certain reward. Compared with the two types of reward, the setting of the first type is more sparse. In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis. We provide an observation space setting for agents in the Dec-POMDP version. The observation space of agents can dynamically change with their motion vectors, as shown in Figure 7. Considering the size of the field in the environment, we set the visual distance of the agents to 0.84, and the forward visual angle to 200◦. The agents can observe the position and direction of the ball, and within the range of viewing them, we also allow the agents to observe the position of their teammates and the position of their opponents.
In our experiment, we control all the players on the left side and set their movement state to be lazy (when not touching the ball, there is no built-in AI intervention), and the built-in AI in the system completely controls all the players on the right side.
A.3 LBF
The LBF is a multi-agent collection task built on a grid world where each agent and each item is assigned a level. An item is successfully collected only if the sum of the levels of the participating agents is equal to or greater than the level of the item; upon success, the agents receive a reward proportional to the item's level. LBF provides a sparser reward environment and stricter collaboration constraints than SMAC, which places higher demands on the effectiveness of multi-agent algorithms.
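The collection rule can be stated compactly in code. The sketch below is our own illustration of the constraint, not LBF's implementation:

```python
def can_collect(agent_levels, item_level):
    """Agents jointly collect an item iff the sum of their levels
    is at least the item's level."""
    return sum(agent_levels) >= item_level

# Example: two level-2 agents can collect a level-4 item, but not a level-5 one.
assert can_collect([2, 2], 4) is True
assert can_collect([2, 2], 5) is False
```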
The difficulty of the LBF can be adjusted by the number of agents, the size of the grid world, the type of items, and the hard collaboration constraints. In this paper, we select 15×15-4p-5f as a supplement to the ablation experiment to verify whether the different components of our method are still valid in a more challenging collaborative setting. The 15×15-4p-5f scene indicates that 4 agents in a 15×15 grid world are involved in collecting 5 items.
B CASE STUDY IN LBF
As discussed in Appendix A.3, the level constraints and the sparse-reward environment pose a greater challenge for agents to establish complementary and reliable collaborations. To verify the effectiveness of our proposed dynamic parameter sharing mechanism in forming complementary relationships between agents, we compare the performance of different parameter sharing mechanisms in 15×15-4p-5f: single-level dynamic parameter sharing (ours with only the individual level), bi-level dynamic parameter sharing (ours), full static parameter sharing (classic QMIX), and no parameter sharing at all (QMIX without sharing).
As shown in Figure 9, our method still yields the best return, demonstrating that our approach promotes more collaboration among agents. Although single-level dynamic parameter sharing can promote agent collaboration, it cannot always form complementary collaborative relationships. The result again confirms that forming complementary collaborations among agents requires a dynamic balance of personality and commonality, which is hidden in dynamic parameter sharing at both the individual and team levels.
C ABLATION STUDY IN GRF
As shown in Figure 8, we additionally verified the performance of the single-layer dynamic parameter sharing mechanism (only individual level) in two GRF scenarios.
Unlike the experimental results on SMAC and LBF, single-level dynamic parameter sharing outperforms full static parameter sharing (QMIX) and can even match bi-level parameter sharing in academy 3 vs 1 with keeper. This phenomenon arises because, in the GRF environment, agents move over a much wider range, so their observation spaces differ far more than in the other two environments; full parameter sharing is effective mainly when agents have similar observation spaces (Christianos et al., 2021), which explains its weaker performance here.
We need to point out that, in these two scenarios, the single-level dynamic parameter sharing mechanism is not as stable as our bi-level dynamic parameter sharing mechanism. This is because, on the one hand, training becomes unstable when agents' sharing partners change; on the other hand, agents that do not share parameters cannot establish complementary cooperative relationships. This again shows that our approach is right to consider both the individual and team levels.
D THE EFFECT OF THE NUMBER OF VIRTUAL ROLES
During the experiments, we found that the number of virtual roles affects the performance of the entire algorithm. As shown in Figure 11, in the SMAC environment we consider only the impact of the number of virtual roles on the experimental results.
As we analyzed in the text, setting too many virtual roles will not improve the performance of the algorithm.
E FULL EXPERIMENTAL RESULTS IN SMAC
F HYPERPARAMETERS SETTING
In our method, we design a role selector and a group selector to guide agents in selecting appropriate partners for dynamic parameter sharing at the individual and team levels. The core of the role selector is a gated recurrent unit variational autoencoder (GRU-VAE). Besides the GRU unit, the encoder and decoder that make up the VAE are composed of simple linear layers and activation functions. The group selector consists of a graph attention network with a 128-dimensional output state and the corresponding network layers (a linear layer and a GRU unit with a 64-dimensional hidden state). At the individual level, we obtain the agents' roles by applying the softmax function to the bottleneck of the GRU-VAE. For the grouping of agents at the team level, since grouping is an additional reinforcement learning task, we use the same ε-greedy method as for the agents' action selection. We note that, apart from the newly designed components of our method, we keep the rest of the structure, parameters, and optimization methods as consistent as possible with the baseline QMIX, so that the ablation experiments can demonstrate that the utility of the different components comes entirely from our design.
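To make the architecture description above concrete, here is a minimal PyTorch sketch of the GRU-VAE role selector, assuming the layer sizes stated in this section. All class and variable names are our own, and the decoder here reconstructs only a summary of the advantage sequence rather than the full sequence; the authors' actual code may differ.

```python
import torch
import torch.nn as nn

class GRUVAERoleSelector(nn.Module):
    """Minimal sketch: GRU encoder over advantage sequences + VAE bottleneck.

    When the role-update period T_U elapses, the GRU hidden state is mapped
    to a k-dimensional Gaussian and a role is sampled (cf. Eq. 3 of the paper).
    """

    def __init__(self, input_dim: int, hidden_dim: int = 64, k_roles: int = 3):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, k_roles)
        self.to_logvar = nn.Linear(hidden_dim, k_roles)
        self.decoder = nn.Sequential(
            nn.Linear(k_roles, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, adv_seq: torch.Tensor):
        # adv_seq: (batch, T_U, input_dim) advantage sequence for one agent.
        _, h = self.gru(adv_seq)              # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        role = z.argmax(dim=-1)               # virtual role id (Eq. 3)
        recon = self.decoder(z)               # fed to the reconstruction loss
        return role, mu, logvar, recon
```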
G EXPERIMENTAL DETAILS
For our method and the baseline algorithms, we tested 2M and 5M steps on all hard maps and all super hard maps of SMAC, respectively, using five random seeds. Because of the ε-greedy method, we set ε to be linearly annealed from 1.0 to 0.05 over 70K time steps on all hard maps and three of the super hard maps, keeping it constant for the rest of training. For the remaining super hard map, 6h vs 8z, we set the annealing time to 500K.
In GRF, we tested 10M and 8M steps on academy 3 vs 1 with keeper and academy pass and shoot with keeper, respectively, using three random seeds. We set ε to be linearly annealed from 1.0 to 0.05 over 100K time steps in both scenarios. The observation space we designed contains the agent's own location; the position, direction, and possession information of the ball; and the teammate and opponent information within the visual range.
To test our method components’ effectiveness in the 15×15-4p-5f scenario of the LBF environment, we set the experimental test step as 20M and the annealing time of ε as 200K. The setting of ε remains the same in our method and baseline algorithms.
For the virtual roles and groups in our method, to ensure that the collaborative relationships between agents are sufficiently complementary, we require the number of agents n, the number of virtual roles k, and the number of groups m to satisfy n ≥ k ≥ m ≥ 2. As the plots of virtual-role count versus win rate in our ablation experiments show, the number of virtual roles cannot be increased blindly: doing so not only brings a vast number of parameters but also makes complementary collaboration between agents impossible. The numbers of virtual roles used in SMAC and LBF are shown in Table 2. For the update periods of the agents' virtual roles and groups, we have TV = e · TU. To guarantee that the virtual roles and groups are updated at least once per episode, we require the update periods of the virtual roles and the groups to satisfy TU, TV < len(episode), and we set them based on the average episode length. In general, we use update periods of 5, 7, 8, 10, or 14 with e = 1 or 2.
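These constraints are easy to check programmatically. The following sketch is our own illustration of the validation logic, not part of BDPS:

```python
def validate_config(n_agents, k_roles, m_groups, T_U, e, episode_len):
    """Check the BDPS hyperparameter constraints described in the text."""
    T_V = e * T_U                       # group period is a multiple of role period
    assert n_agents >= k_roles >= m_groups >= 2, "need n >= k >= m >= 2"
    assert e >= 1
    assert T_U < episode_len and T_V < episode_len, \
        "roles and groups must update at least once per episode"
    return T_V

# Example: 10 agents, 5 roles, 3 groups, role period 7, e = 2, episodes of ~70 steps.
print(validate_config(10, 5, 3, T_U=7, e=2, episode_len=70))  # -> 14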
Our experiments are run on Ubuntu 18.04 with an NVIDIA GTX 3090 GPU, an Intel i9-12900K CPU, and 128 GB of memory. The SMAC version is 4.10, the 15×15-4p-5f scenario in LBF is v2, and the GRF version is 2.10. We will open-source our algorithm and experimental environment code in due course.
1. What is the focus and contribution of the paper regarding multi-agent reinforcement learning?
2. What are the strengths and weaknesses of the proposed bi-level parameter-sharing mechanism?
3. Do you have any concerns about the method's soundness or the empirical evaluations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or issues that the reviewer would like to see addressed in future work related to this topic?
Summary Of The Paper
This paper focuses on parameter-sharing issues in multi-agent reinforcement learning. The core idea is that previous parameter sharing methods only consider sharing between different agents, i.e., single-level sharing. This work proposes a bi-level parameter-sharing mechanism that shares parameters not only among agents but also among teams. To achieve this, the authors define roles based on long-term cumulative advantages and share parameters among agents in the same team. Empirically, they evaluate the proposed method on StarCraft II and Google Research Football, and the results show some advantages over the baselines.
Strengths And Weaknesses
Strengths
The problem this paper considers is very important. Parameter sharing could largely improve the efficiency of policy learning in MARL.
The literature review is sufficient. Previous works are discussed and compared empirically.
The motivation for this work is easy to understand. Previous works only considered single-level parameter sharing, and this paper novelly considers bi-level sharing.
Weaknesses
The proposed method is not sound enough. (1) In Definition 1, it seems that a role can only belong to one team, but why is that? And what are the definitions of roles and teams? (2) Individual-level parameter sharing depends on advantages, which are calculated using $Q$ values. What if these $Q$ values are not accurate at the beginning of training? Will the false role selection further hamper policy learning? (3) Advantages are computed as $A = \max(Q) - Q$, so agents with similar observations will always have the same role. What if two similar agents need to take different sub-tasks?
The quality of empirical evaluations needs to be improved. (1) Figure 2 is not clear. Do not use dotted lines for curves with similar performance. (2) In Figure 3, the performances of CDS and RODE in some tasks are very different from their original paper. (3) The ablation studies are not enough. More results are expected. What is the functionality of individual-level/team-level parameter sharing? How do the roles and teams evolve in an episode or in the process of training?
The results are not significant. This paper tested two tasks in Google Football, but the proposed method only outperforms others in one task.
This paper does not provide any discussions of limitations.
Clarity, Quality, Novelty And Reproducibility
The clarity of this paper can be improved. (1) Figure 1 can be simplified and some irrelevant components should be removed to make it more clear. (2) Figure 2 is not clear. (3) The definition of roles and teams is missing. |
ICLR
Title
Bi-Level Dynamic Parameter Sharing among Individuals and Teams for Promoting Collaborations in Multi-Agent Reinforcement Learning
Abstract
Parameter sharing has greatly contributed to the success of multi-agent reinforcement learning in recent years. However, most existing parameter sharing mechanisms are static, and parameters are indiscriminately shared among individuals, ignoring the dynamic environments and different roles of multiple agents. In addition, although a single-level selective parameter sharing mechanism can promote the diversity of strategies, it is hard to establish complementary and cooperative relationships between agents. To address these issues, we propose a bi-level dynamic parameter sharing mechanism among individuals and teams for promoting effective collaborations (BDPS). Specifically, at the individual level, we define virtual dynamic roles based on the long-term cumulative advantages of agents and share parameters among agents in the same role. At the team level, we combine agents of different virtual roles and share parameters of agents in the same group. Through the joint efforts of these two levels, we achieve a dynamic balance between the individuality and commonality of agents, enabling agents to learn more complex and complementary collaborative relationships. We evaluate BDPS on a challenging set of StarCraft II micromanagement tasks. The experimental results show that our method outperforms the current state-of-the-art baselines, and we demonstrate the reliability of our proposed structure through ablation experiments.
1 INTRODUCTION
In many areas, collaborative Multi-Agent Reinforcement Learning (MARL) has broad application prospects, such as robot cluster control (Buşoniu et al., 2010), multi-vehicle autonomous driving (Bhalla et al., 2020), and shop scheduling (Jiménez, 2012). In a multi-agent environment, an agent must track the environment's dynamics and understand the learning policies of other agents to form good collaborations. Real-world scenarios usually involve large numbers of agents with different identities or capabilities, which places higher demands on collaboration among agents. Therefore, scaling MARL to large numbers of agents and promoting stable, complementary cooperation among agents with different identities and capabilities are both particularly important.
To handle large numbers of agents, many collaborative MARL works that adopt the centralized training paradigm use a full static parameter sharing mechanism (Gupta et al., 2017), in which all agents share the parameters of their policy networks, simplifying the algorithm structure and improving training efficiency. This mechanism is effective when agents receive similar observations, as in most existing narrow and simple multi-agent environments. In our Google Research Football (GRF) (Kurach et al., 2020) experiments, however, we find that blindly applying full parameter sharing does not improve performance, because the observations of different players differ greatly as they move. Moreover, because full static parameter sharing ignores the identities and abilities of different agents, it constantly limits the diversity of agents' behavior policies (Li et al., 2021; Yang et al., 2022), making it difficult to promote complementary and reliable cooperation between agents in complex scenarios.
Recently, to eliminate the disadvantages of full parameter sharing, single-level selective parameter sharing mechanisms have been proposed (Christianos et al., 2021; Wang et al., 2022): an encoder extracts deep features from the agents' observations, and these features are clustered so that different subsets of agents share parameters.
Although the single-level selective parameter sharing mechanism can promote the diversity of agents' policies, it fragments the relationships between agents that do not share parameters at the same time, so agents cannot establish complementary cooperative relationships over a wider range. More importantly, designing an effective selector is the key to selective parameter sharing, especially for a single-level dynamic mechanism that must act within a few rounds of selection. Most methods rely only on agents' real-time observations, ignoring their histories, which hinders correctly mining the agents' implicit identity characteristics. A football team, for example, trains players according to their positions, such as shooting practice for forwards and defensive drills for defenders. However, winning a match requires not only such specialized training but also coordination between players of different roles. That is, we not only need to share parameters among agents with the same role, but also need to combine agents of different identities to ensure that they can form robust and complementary collaboration on a larger scale.
To address these issues, in this paper, we propose a bi-level dynamic parameter sharing mechanism among individuals and teams (BDPS). An agent's advantage function expresses the advantage of taking an action relative to the average in the current state, and we consider that it represents the agent's actual role under its current identity and grouping better than raw observations. To identify the roles of agents more accurately, at the individual level we compute the agents' long-term advantage information as the key to virtual role identification and use a variational autoencoder (VAE) (Kingma & Welling, 2014) to learn the distribution of the agents' role characteristics, obtaining more accurate virtual roles directly by sampling from this distribution. To alleviate the way a single-level dynamic parameter sharing mechanism splits the relationships between agents of different virtual roles, we further use a graph attention network (GAT) (Velickovic et al., 2018) to learn the topological relationships between the roles obtained at the individual level, thereby combining agents of different identities at a higher level and over a wider range. Through this design, we achieve dynamic, selective parameter sharing at two levels, individual and team, and realize stable, complementary collaboration among agents over a wider scope while preserving the diversity of their policies.
We test BDPS and algorithms using different parameter sharing mechanisms on the StarCraft II micromanagement environments (SMAC) (Samvelyan et al., 2019) and Google Research Football (GRF) (Kurach et al., 2020). The experimental results show that our method not only generally outperforms the other methods, which use single-level selective or full parameter sharing, on all super hard maps and four hard maps of SMAC, but also performs well in the GRF scenarios we use. In addition, we carry out ablation experiments to verify the influence of different parameter sharing settings on the formation of complementary cooperation between agents, which fully demonstrates the reliability of our proposed method.
2 BACKGROUND
2.1 DECENTRALIZED PARTIALLY OBSERVABLE MARKOV DECISION PROCESS
A fully cooperative MARL task can usually be modeled as a decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016), represented by a tuple $M = \langle \mathcal{N}, S, A, R, P, \Omega, O, \gamma \rangle$, where $\mathcal{N}$ is the finite set of $n$ agents, $s \in S$ is the global state, and $\gamma \in [0, 1)$. At each time step $t$, each agent $i \in \mathcal{N}$ receives a local observation $o_i \in \Omega$ according to the observation function $O(s, i)$, takes an action $a_i \in A$ to form a joint action $\mathbf{a} \in A^n$, and receives a shared global reward $r = R(s, \mathbf{a})$. Owing to partial observability, each agent conditions its policy $\pi_i(a_i \mid \tau_i)$ on its own local action-observation history $\tau_i \in T \equiv (\Omega \times A)^*$. The agents jointly aim to maximize the expected return, i.e., to find a joint policy $\boldsymbol{\pi} = \langle \pi_1, \ldots, \pi_n \rangle$ that maximizes the joint action-value function $Q^{\boldsymbol{\pi}} = \mathbb{E}_{s_{0:\infty}, \mathbf{a}_{0:\infty}}\big[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, \mathbf{a}_0 = \mathbf{a}, \boldsymbol{\pi}\big]$.
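For exposition, the tuple above can be mirrored by a simple container; the sketch below is schematic only, since real environments such as SMAC expose these quantities through their own APIs, and all names are ours.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DecPOMDP:
    """Container mirroring the tuple M = <N, S, A, R, P, Omega, O, gamma>."""
    n_agents: int                      # N: finite set of n agents
    states: Any                        # S: global states
    actions: Any                       # A: per-agent action set
    reward_fn: Callable                # R(s, a) -> shared global reward
    transition_fn: Callable            # P(s' | s, a)
    observations: Any                  # Omega: observation set
    obs_fn: Callable                   # O(s, i) -> local observation o_i
    gamma: float = 0.99                # discount factor in [0, 1)
```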
2.2 CENTRALIZED TRAINING WITH DECENTRALIZED EXECUTION
In many MARL settings, the partial-observability problem can be addressed with the centralized training with decentralized execution (CTDE) paradigm (Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2020; Wang et al., 2021a; Son et al., 2019), which is currently the mainstream of MARL methods. Training is centralized, and after training each agent makes decisions based only on its local observations and the trained policy network. This mitigates, to a certain extent, both the non-stationarity of the environment and the scalability problem of large numbers of agents.
2.3 VARIATIONAL AUTOENCODER
Variational autoencoder (VAE) (Kingma & Welling, 2014) is a generative network model based on variational Bayesian (VB) inference (Fox & Roberts, 2012). Two probability density models are established: the inference network $q_\phi$ and the generative network $p_\xi$. The inference network performs variational inference on the original input data $x$ to produce a variational distribution over the latent variables $z$; the generative network restores an approximate distribution of the original data from this latent distribution. The VAE model can be divided into two processes: the approximate inference of the posterior over the latent variables, $q_\phi(z \mid x)$, and the generative process, $p_\xi(z)\, p_\xi(\hat{x} \mid z)$. To make $q_\phi(z \mid x)$ close to the true posterior $p_\xi(z \mid x)$, the VAE measures their similarity with the KL-divergence:
$$D_{KL}\big(q_\phi(z \mid x) \,\|\, p_\xi(z \mid x)\big) = \log p_\xi(x) + \mathbb{E}_{q_\phi(z \mid x)}\big[\log q_\phi(z \mid x) - \log p_\xi(x, z)\big], \qquad (1)$$
where the term $\log p_\xi(x)$ is called the log-evidence and is constant; the other term is the negative evidence lower bound (ELBO). VAEs augmented with a GRU network are widely used in sequence anomaly detection (Su et al., 2019). In this paper, we use this combination to identify the role-feature distribution underlying the agents' advantage sequences over a period of time, in order to better guide agents in choosing appropriate parameter-sharing partners.
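As a reference point, the following sketch shows the standard negative-ELBO training objective implied by Eq. 1 for a Gaussian-prior VAE, with the KL term in closed form. It is a generic illustration, not BDPS-specific code.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar, beta: float = 1.0):
    """Negative ELBO for a Gaussian-prior VAE (cf. Eq. 1).

    Reconstruction term plus KL(q_phi(z|x) || N(0, I)); the KL between a
    diagonal Gaussian and the standard normal has this closed form.
    """
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```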
2.4 GRAPH ATTENTION NETWORK
Graph attention network (GAT) (Velickovic et al., 2018) is a network architecture based on an attention mechanism, which can learn different weights for different neighbors through the attention mechanism. For calculating the output characteristics of node i, GAT first trains a shared weight matrix W for all nodes to obtain the weight of each neighbor node of node i. Then, according to the weight, the attention coefficients between node i and its neighbor nodes are calculated. Finally, these attention coefficients are weighted and summed to obtain the output features h′i of node i:
$$h_i' = \sigma_1\Bigg(\sum_{j \in \mathcal{N}_i} \frac{\exp\big(\sigma_2\big(\mathbf{a}^T[W h_i \,\|\, W h_j]\big)\big)}{\sum_{k \in \mathcal{N}_i} \exp\big(\sigma_2\big(\mathbf{a}^T[W h_i \,\|\, W h_k]\big)\big)}\, W h_j\Bigg) \quad (\text{1-head}), \qquad (2)$$
where $\sigma_1$ and $\sigma_2$ denote nonlinear functions, $\mathcal{N}_i$ denotes the first-order neighborhood of node $i$, $\mathbf{a}^T$ is the transpose of the weight vector $\mathbf{a}$, and $\|$ is the concatenation operation. In recent years, MARL with communication has widely used GAT to determine the explicit communication targets of agents (Niu et al., 2021; Seraj et al., 2021). In this paper, we use the GAT to establish the topological relationships between different virtual roles, and further combine the agents under different virtual-role mappings to form complementary and reliable cooperative relationships.
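A minimal single-head graph attention layer corresponding to Eq. 2 can be sketched as follows. This is an illustrative implementation under our own naming; it assumes a 0/1 adjacency matrix that includes self-loops.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadGAT(nn.Module):
    """Minimal single-head graph attention layer (cf. Eq. 2); illustrative only."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops.
        Wh = self.W(h)                                    # (N, out_dim)
        N = Wh.size(0)
        pairs = torch.cat(
            [Wh.unsqueeze(1).expand(N, N, -1), Wh.unsqueeze(0).expand(N, N, -1)],
            dim=-1,
        )                                                 # pairs[i, j] = [Wh_i || Wh_j]
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)       # attention logits e_ij
        e = e.masked_fill(adj == 0, float("-inf"))        # restrict to neighbors
        alpha = torch.softmax(e, dim=-1)                  # normalized coefficients
        return F.elu(alpha @ Wh)                          # weighted aggregation
```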
3 OUR METHOD
In this section, we introduce the proposed BDPS in detail. BDPS mainly comprises individuals and teams, as shown in Figure 1. We get inspiration from people: a person may play multiple roles and can freely switch roles in different scenes, which promotes the stable development of people, and the same is true for agents. Therefore, we choose to sacrifice some network parameters for agents to maintain multiple roles and groups at the same time. And according to the role selector U and the group selector V , choose the most suitable role and group for the agents.
First, we explain the relationship between agents and individual virtual roles and groups.
Definition 1 Given $n$ ($n \ge 2$) agents, $k$ ($2 \le k \le n$) roles, and $m$ ($2 \le m \le k$) groups, we have the following mapping relationships: $f_U : \mathcal{N} \to \mathcal{K}$ and $f_V : \mathcal{K} \to \mathcal{M}$, where $\mathcal{K}$ and $\mathcal{M}$ denote the finite sets of $k$ roles and $m$ groups, respectively.
From $f_V$ we can see that the agents' groups depend on their roles, so we relate the update periods of groups and roles by $T_V = e \cdot T_U$, where $T_V$ is the update period of the groups, $T_U$ is the update period of the roles, and $e \in \mathbb{Z}^+$.
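The two mappings in Definition 1 compose into an agent-to-group assignment, as the following toy illustration shows (all numbers are hypothetical):

```python
# Toy illustration of Definition 1: f_U maps agents to roles, f_V maps roles
# to groups, so an agent's group is the composition f_V(f_U(i)).
f_U = {0: 0, 1: 0, 2: 1, 3: 2}          # agent -> role   (n=4, k=3)
f_V = {0: 0, 1: 0, 2: 1}                # role  -> group  (m=2)
agent_group = {i: f_V[f_U[i]] for i in f_U}
print(agent_group)                       # {0: 0, 1: 0, 2: 0, 3: 1}
```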
3.1 INDIVIDUAL LEVEL DESIGN
Unlike existing methods that determine the roles of agents from real-time observations, we consider the impact of the agents' long-term cumulative advantages on the definition of their roles, because advantage information represents an agent's current situation better than raw observations. Likewise, unlike existing methods that only use an encoder to learn role characteristics for clustering, we learn the long-term advantages of agents through the VAE to obtain distributional role information: in many cases, an agent's role should not change merely because it takes a single unusual action. At the individual level in Figure 1, we first maintain the role selector U to select appropriate roles $u_U$ for the agents, and then apply parameter sharing to agents with the same role.
3.1.1 CHOOSE APPROPRIATE ROLES FOR AGENTS
Firstly, the role selector U feeds the pre-computed advantage sequence $A_{t-T_U:t-1} = \{A_{t-T_U}, A_{t-T_U+1}, \ldots, A_{t-1}\}$ into a GRU network to capture the complex temporal dependence among the agents' advantages within the role-update period. Secondly, when the role-update period elapses, we take the hidden output $h_{t-1}$ as the agents' features and input it into the encoder $E_\phi$ of the VAE. Finally, we obtain the agents' virtual roles $u_U$ from the k-dimensional Gaussian distribution output by the encoder $E_\phi$.
For agent $i$, its advantage information at time $t$ is $A_t^i = \max\{Q_t^i\} - Q_t^i$, where $Q_t^i$ denotes agent $i$'s local Q-function at time $t$. Unlike the classical advantage $A = Q - V$, ours satisfies $A_t^i \ge 0$, which makes it convenient for the decoder $D_\xi$ to reconstruct the input. When the virtual role $u_U^i$ of agent $i$ is due for an update at time $t$, it is obtained from the VAE's bottleneck:
$$u_U^i = \arg\max\{z_U^i\}, \qquad (3)$$
where $z_U = \mu_U + \sigma_U \cdot \epsilon_U$, $\epsilon_U \sim \mathcal{N}(0, I)$.
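The advantage computation and the role sampling of Eq. 3 can be sketched as follows. Whether the paper keeps the whole advantage vector or evaluates it at the taken action is ambiguous in the text, so the elementwise reading below is our assumption, and both function names are ours.

```python
import torch

def step_advantage(q_values: torch.Tensor) -> torch.Tensor:
    """A_t^i = max{Q_t^i} - Q_t^i, computed elementwise over actions.

    Non-negative by construction, which (as the text notes) makes it easy
    for the decoder to reconstruct.
    """
    return q_values.max(dim=-1, keepdim=True).values - q_values

def sample_role(mu: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    """Eq. 3: z_U = mu + sigma * eps with eps ~ N(0, I); role = argmax z_U."""
    z = mu + sigma * torch.randn_like(mu)
    return z.argmax(dim=-1)
```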
3.2 TEAM LEVEL DESIGN
The purpose of introducing grouping is to eliminate the fragmentation of relationships between agents caused by a single-level parameter sharing mechanism, and to establish cooperative relationships between agents of different roles over a wider range. As shown in Figure 1, the team level mainly comprises the group selector V and the grouping cooperation networks. The group selector V is realized as an additional reinforcement learning task, for which we introduce a GAT to find the correlations between the dynamic roles and encourage the grouping results to cover the roles more comprehensively. In the parameter-sharing part, we add the hidden information $h_t^U$ from the individual level, so that the agents' individual identities contribute positively to the team-level results.
3.2.1 COMBINE DIFFERENT ROLES TO ACHIEVE GROUPING
The group selector $V$ takes the virtual roles $u_U$ as input. This information is encoded into agent feature vectors via a multi-layer perceptron (MLP). These feature vectors form a set $\{x_t^1, x_t^2, \ldots, x_t^N\}$ that is fed into the GAT. We use the self-attention mechanism of the GAT to compute the attention coefficient $e_{ij}$ between agents and use it as an essential basis for grouping:
$$e_{ij} = \text{LeakyReLU}\big(\mathbf{a}^T[W x_t^i \,\|\, W x_t^j]\big), \qquad (4)$$
where $i, j \in \mathcal{N}$. In our method, considering that agents often have similar observations and are located close to one another, and that there is no rigid separation between the roles of different agents, we allow an agent's attention to come from all surviving agents (including itself) at the current time step. To facilitate comparing the attention coefficients between agents, we also normalize them with the softmax function.
$$x_t^{i\prime} = \underset{k=1}{\overset{K}{\big\Vert}}\; \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}^{k} W^{k} x_t^{j}\Big), \qquad (5)$$
where $K$ is the number of attention heads, which we set equal to the number of virtual roles: we hope to identify the influence of the different roles as dominant factors in how agents form teams. In Equation 5, $\mathcal{N}_i$ denotes the first-order neighborhood of agent $i$, and $\alpha_{ij} = \text{softmax}_j(e_{ij})$ is the normalized attention coefficient indicating the importance of agent $j$'s features to agent $i$.
After the GAT, a new set of feature vectors $x_t' = \{x_t^{1\prime}, x_t^{2\prime}, \ldots, x_t^{N\prime}\}$ is obtained and fed into the GRU network to produce the hidden state that drives the grouping of agents. When the elapsed time reaches the group-update period $T_V$, we use this hidden state to output the group values and apply ε-greedy selection to assign groups $v_V$ to the agents.
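The final selection step of the group selector can be sketched as follows; the upstream GAT and GRU are elided, and the name `select_groups` is ours.

```python
import torch

def select_groups(group_q: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Epsilon-greedy group choice from the selector's Q-values.

    group_q: (n_agents, m_groups) values produced by the GAT + GRU stack;
    this helper only shows the final selection step, as we read the text.
    """
    greedy = group_q.argmax(dim=-1)
    random_pick = torch.randint(group_q.size(-1), greedy.shape)
    explore = torch.rand(greedy.shape) < epsilon
    return torch.where(explore, random_pick, greedy)
```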
3.3 OVERALL OBJECTIVES
3.3.1 GET THE LOCAL Q-FUNCTIONS REQUIRED BY AGENTS
In the previous sections, we used the role selector U at the individual level and the group selector V at the team level to map agents from actual individuals to virtual roles $u_U$ and groups $v_V$, respectively, and established the bi-level parameter sharing mechanism accordingly. Although the hidden state from the individual level already contributes to computing the agents' local Q-functions at the team level, we still do not entirely discard the explicit contribution of the individual level. We therefore define the local Q-function as:
$$Q(\tau, a) = Q_U(\tau, a) + Q_V(\tau, a). \qquad (6)$$
3.3.2 TRAIN THE MODULES INCLUDED IN OUR METHOD
Starting from the individual level, we need to train the role selector U to select appropriate virtual roles for the agents. Its training objective comprises a reconstruction term and a KL-divergence term, and our goal is to minimize both:
$$\mathcal{L}_U(\xi, \phi; A_U) = \mathcal{L}_{mse}\big(A_U, \hat{A}_U\big) + \lambda\, D_{KL}\big[q_\phi(z_{l_U} \mid h_{l_U}) \,\|\, p_\xi(z_{l_U})\big], \qquad (7)$$
where $\mathcal{L}_{mse}(\cdot)$ is the mean squared error term that computes the reconstruction loss of the advantage sequence, $l_U$ is the length of $A_U$, $\lambda$ is a scaling factor, and $\hat{A}_U$ is the reconstructed advantage sequence obtained from the decoder's output $\hat{h}_{l_U}$.
For the group selector V at the team level, we follow QMIX (Rashid et al., 2018) and RODE (Wang et al., 2021b), since we introduce an additional deep reinforcement learning task that generates groups for the agents:
$$\mathcal{L}_V(\theta_\nu) = \sum_b \Bigg(\sum_{\Delta t=0}^{T_V-1} r_{t+\Delta t} + \gamma \max_{v_V'} \bar{Q}_{tot}^V(\tau', \mathbf{a}', u_U', s') - Q_{tot}^V(\tau, \mathbf{a}, u_U, s)\Bigg)^2, \qquad (8)$$
where $b$ is the batch size of transitions sampled from the replay buffer.
In order for agents to use the global reward to learn their local Q-functions, we feed the local Q-values into the QMIX mixing network (Rashid et al., 2018) to estimate the global action value $Q_{tot}(\tau, \mathbf{a})$:
$$\mathcal{L}_{TD}(\theta_\mu) = \sum_b \Big[\big(r + \gamma \max_{a'} \bar{Q}_{tot}(\tau', \mathbf{a}', s') - Q_{tot}(\tau, \mathbf{a}, s)\big)^2\Big]. \qquad (9)$$
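A compact sketch of the two TD objectives in Eqs. 8 and 9 follows. We elide the QMIX mixing networks and target-network updates; the max over next actions or groups is assumed to be taken upstream when producing the target values, and all names are our own.

```python
import torch

def group_selector_td_loss(rewards, q_tot_v, target_q_tot_v, gamma: float = 0.99):
    """Eq. 8: the group choice is credited with the return accumulated over
    one group-update period T_V. rewards: (batch, T_V); the Q arguments are
    the mixed team-level values for the chosen and (max'd) next groups."""
    period_return = rewards.sum(dim=1)
    target = period_return + gamma * target_q_tot_v
    return ((target.detach() - q_tot_v) ** 2).mean()

def qmix_td_loss(reward, q_tot, target_q_tot, gamma: float = 0.99):
    """Eq. 9: standard one-step TD loss on the mixed global action value."""
    target = reward + gamma * target_q_tot
    return ((target.detach() - q_tot) ** 2).mean()
```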
4 EXPERIMENTS
In this section, we demonstrate and evaluate the advantages of our proposed BDPS on challenging tasks in the StarCraft II micromanagement environments (SMAC) and Google Research Football (GRF). We compare BDPS not only with QMIX (Rashid et al., 2018) under both full parameter sharing and no parameter sharing, but also with several baselines aimed at promoting the diversity of agents' policies, namely CDS (Li et al., 2021), EOI (Jiang & Lu, 2021), and RODE (Wang et al., 2021b). Finally, we ablate our method on SMAC to verify the true utility of its components.
4.1 PERFORMANCE ON GOOGLE RESEARCH FOOTBALL (DEC-POMDP)
In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis. We provide an observation space setting for agents in the Dec-POMDP version. The observation space of agents can dynamically change with their motion vectors. See Appendix A.2 for details.
First, we compare our method and baseline methods in the GRF of the Dec-POMDP version we provide. Compared with the experimental scenario provided by CDS (Li et al., 2021), we use the algorithm to control all agents in the left team, use the built-in scoring and checkpoints reward settings of GRF, and set that the agents in the left team cannot be moved by the built-in AI of the system when they do not touch the ball.
As shown in Figure 2, we select the two scenarios academy 3 vs 1 with keeper and academy pass and shoot with keeper for comparison, and we compare the actual reward received by the agents with the goal score in both. Overall, our method achieves better results than the other baseline algorithms in both scenarios. Among the baselines, the full parameter sharing version of QMIX is better than the non-sharing version, and the other improved baselines built on fully shared QMIX can also score goals, but the improvement is not obvious.
As we mentioned in Section 1, the full parameter sharing mechanism ignores the identities and capabilities of agents, which deprives them of diverse behavior policies. As a result, full parameter sharing cannot improve performance when agents' observations differ greatly. As for single-level dynamic selective parameter sharing, because it hard-cuts the relationships between agents that do not share parameters, it can achieve better results through selective sharing, but the agents cannot form sufficiently stable cooperative relationships, so performance fluctuates significantly. See Appendix C for detailed results.
In both scenarios, we can see that RODE differentiates the roles of the agents by limiting their action spaces, but this limitation also prevents the agents from improving further with training. The experimental results show that these baseline algorithms frequently lose the ball in the two academy scenarios, indicating that their agents do not form sufficiently complementary collaborative relationships.
4.2 PERFORMANCE ON STARCRAFT II
As can be seen from Figure 3, our method is superior to other baseline methods on these maps, especially in maps with a larger number of agents, where our method is able to maintain its advantage. Of course, EOI and RODE also show good stability in these maps, and the full parameter sharing version of QMIX is still better than the no parameter sharing version.
Specifically, our method has more advantages than the CDS, which emphasizes the balance of personalities and commonalities of agents. As we mentioned, exploring the dynamic balance between individual personalities and commonalities is essential. Still, this balance is difficult to define, so we do not give factors in Equation 6 that can adjust the different importance at the individual and team levels. This balance is dynamic and hides in the process of sharing dynamic parameters we design. We will describe it in detail in the ablation experiment in Section 4.3.
4.3 ABLATION STUDY
In this section, we conduct ablation studies to understand the actual utility of each level in our bilevel dynamic parameter sharing mechanism. In addition to showing the winning rate of maps in SMAC, we calculate the entropy difference between individuals and teams to quantify the advantages of different components.
Figure 4: Comparison of different parameter sharing mechanisms in 5m vs 6m.
Figure 5: Comparison of different virtual roles in MMM2 (Only individual level).
We first use the homogeneous agents’ map 5m vs 6m in SMAC to conduct ablation research to analyze the advantages of dynamic parameter sharing over full and no parameter sharing. As shown in Figure 4, we can see that the method of applying parameter sharing has apparent advantages over the method without parameter sharing. Of course, our method is better than QMIX’s full static parameter sharing. Specifically, the single-level dynamic parameter sharing has a weak advantage over the full static parameter sharing. Our bi-level dynamic parameter sharing is more advantageous than the single-level dynamic and full static, which is exactly the goal of our proposed method.
As shown in Figure 5, we also compare the impact of different numbers of virtual roles on the collaboration of agents in the heterogeneous agents’ map MMM2. From the curve in the figure, we can find that the winning rate in the MMM2 map increases first and then decreases with the number of virtual roles, which indicates that blindly increasing the number of virtual roles can adversely affect the collaboration of agents.
To quantify the difference between our single-level and bi-level dynamic parameter sharing, we consider analyzing from the perspective of information theory. First, we introduce the evaluation indicator: Entropy difference.
Entropy describes uncertainty. We compare the two levels by measuring the entropy of the action-value-induced policies at each level:
$$\Delta H = \frac{1}{N} \sum_{i=1}^{N} \Bigg(\sum_{a^i} p_V^i \cdot \log\frac{1}{p_V^i} - \sum_{a^i} p_U^i \cdot \log\frac{1}{p_U^i}\Bigg), \qquad (10)$$
where $p_V^i = \text{softmax}\, Q_V^i\big(\tau^i, a_{t-1}^i, u_U^i\big)$ and $p_U^i = \text{softmax}\, Q_U^i\big(\tau^i, a_{t-1}^i\big)$.
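The metric of Eq. 10 can be computed directly from the two levels' Q-values; the sketch below is our illustration, with our own naming.

```python
import torch

def entropy_difference(q_v: torch.Tensor, q_u: torch.Tensor) -> torch.Tensor:
    """Eq. 10: mean per-agent entropy of team-level policies minus that of
    individual-level policies. q_v, q_u: (n_agents, n_actions) Q-values."""
    p_v = torch.softmax(q_v, dim=-1)
    p_u = torch.softmax(q_u, dim=-1)
    h_v = -(p_v * p_v.clamp_min(1e-12).log()).sum(dim=-1)
    h_u = -(p_u * p_u.clamp_min(1e-12).log()).sum(dim=-1)
    return (h_v - h_u).mean()
```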
As shown in Figure 6, in 5m vs 6m the information entropy at the team level is always smaller than that at the individual level, and the gap keeps growing. This change in entropy shows that the policies formed at the team level are more orderly than those formed by agents at the individual level alone, confirming that our team-level design can find commonalities among agents with different virtual roles and establish correct collaborations between agents. For MMM2 in Figure 6, the conclusion still holds: whether k = 3 or k = 5, the entropy difference between the team level and the individual level keeps expanding with training, with the team-level entropy steadily becoming smaller than the individual-level entropy. This is consistent with the win rates in Figure 4 and Figure 5.
5 RELATED WORK
5.1 PARAMETER SHARING
Parameter sharing plays an important role in MARL. Tan (1993) first studied the positive role of “sharing” in promoting collaboration among agents in classical reinforcement learning algorithms. Gupta et al. (2017) proposed parameter-sharing variants of single-agent DRL algorithms, introducing the parameter sharing mechanism into homogeneous MARL. Parameter sharing has since been widely used as an implementation detail of homogeneous multi-agent algorithms such as QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and Qatten (Yang et al., 2020). Recently, Terry et al. (2020) extended parameter sharing to heterogeneous-agent algorithms via a padding-based method, again demonstrating its importance for multi-agent algorithms. At the same time, recent work has pointed out that full parameter sharing tends to make agents' behavior strategies overly similar; Christianos et al. (2021) therefore proposed a selective parameter sharing mechanism to eliminate the limitations of full parameter sharing.
5.2 INDIVIDUALS AND TEAMS
Focusing on the expected development of individuals and teams will not only help agents to maintain a variety of policies but also help them form a more stable collaboration. At the individual level, Wang et al. (2020; 2021b) focus on discovering the character traits behind the agents. Jiang & Lu (2021) studies the identifiability of agent trajectories and fixed identities. Du et al. (2019) proposes the use of internal rewards to stimulate diverse behaviors among agents. At the team level, Iqbal et al. (2021) randomly groups agents into related and unrelated groups, allowing agents to explore only specific entities in their environment. Wang et al. (2022) implements a dynamic grouping method by extracting the potential intentions of agents as tags. Li et al. (2021) divides agents’ value functions into shared and unshared parts to focus on the agents’ personality and commonality. In this paper, we focus on the impact of both individual and team levels on the complementary collaboration of agents. Of course, our approach is not to overlap the two layers of individuals and teams but to fully extend and combine the individual identities acquired by the agents.
6 CONCLUSION AND FUTURE WORK
In this paper, we proposed BDPS, a novel bi-level dynamic parameter sharing mechanism in MARL. By maintaining a role selector and a group selector, BDPS provides a new solution for agents to select the partners for parameter sharing in a timely and dynamic manner at both individual and team levels. And we integrate the roles and groups of agents into a whole to achieve a dynamic balance between their personalities and commonalities. Our experiments on SMAC and GRF show that BDPS can significantly facilitate the formation of complementary and reliable collaboration between agents.
Additionally, although we defined that agents have the opportunity to be in any role or grouping, they cannot explicitly utilize past knowledge when they were in other roles or groups. Therefore, how to make explicit use of the role knowledge they have learned and how to make use of other similar knowledge are issues that we need to explore further. We will present this paradigm of knowledge flow across roles, groups, and agents in future work.
A MULTI-AGENT ENVIRONMENTS
In this paper, we use three multi-agent environments, the StarCraft II Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019), Google Research Football (GRF) (Kurach et al., 2020), and Level-Based Foraging (LBF) (Papoudakis et al., 2021; Albrecht & Ramamoorthy, 2015; Albrecht & Stone, 2019), to conduct verification experiments.
A.1 SMAC
The StarCraft Multi-Agent Challenge is a fully cooperative, partially observable set of multi-agent tasks. The environment implements various micromanagement tasks based on the popular real-time strategy game StarCraft II. Each task is a specific battle scenario in which a group of agents, each controlling a single unit, fights against an army controlled by the game's built-in AI.
Compared with the LBF, SMAC provides more abundant battle scenes for agents. We conducted comparative experiments with baseline methods on hard and super hard maps. The characteristics of relevant maps are shown in Table 1.
A.2 GRF
In Google Research Football (GRF) tasks, agents need to cooperate according to the rules of football matches to win games or complete related training tasks. GRF offers two kinds of reward settings: one rewards only goals (scoring), while the other also grants a reward whenever an agent reaches a specific coordinate point (checkpoints). Of the two, the scoring reward is sparser. In the official multi-agent example, agents are allowed to observe all information on the field, which contradicts our research hypothesis; we therefore provide a Dec-POMDP version of the observation space in which each agent's observations change dynamically with its motion vector, as shown in Figure 7. Considering the size of the field, we set the agents' visual distance to 0.84 and their forward visual angle to 200◦. Agents can observe the position and direction of the ball, and, within their field of view, we also allow them to observe the positions of their teammates and opponents.
In our experiment, we control all the players on the left side and set their movement state to be lazy (when not touching the ball, there is no built-in AI intervention), and the built-in AI in the system completely controls all the players on the right side.
A.3 LBF
The LBF is a multi-agent collection task built on a grid world where each agent and each item is assigned a level. An item is successfully collected only if the sum of the levels of the participating agents is equal to or greater than the level of the item; upon success, the agents receive a reward proportional to the item's level. LBF provides a sparser reward environment and stricter collaboration constraints than SMAC, which places higher demands on the effectiveness of multi-agent algorithms.
The difficulty of the LBF can be adjusted by the number of agents, the size of the grid world, the type of items, and the hard collaboration constraints. In this paper, we select 15×15-4p-5f as a supplement to the ablation experiment to verify whether the different components of our method are still valid in a more challenging collaborative setting. The 15×15-4p-5f scene indicates that 4 agents in a 15×15 grid world are involved in collecting 5 items.
B CASE STUDY IN LBF
As discussed in Appendix A.3, the level constraints and the sparse-reward environment pose a greater challenge for agents to establish complementary and reliable collaborations. To verify the effectiveness of our proposed dynamic parameter sharing mechanism in forming complementary relationships between agents, we compare the performance of different parameter sharing mechanisms in 15×15-4p-5f: single-level dynamic parameter sharing (ours with only the individual level), bi-level dynamic parameter sharing (ours), full static parameter sharing (classic QMIX), and no parameter sharing at all (QMIX without sharing).
As shown in Figure 9, our method still yields the best return, demonstrating that our approach promotes more collaboration among agents. Although single-level dynamic parameter sharing can promote agent collaboration, it cannot always form complementary collaborative relationships. The result again confirms that forming complementary collaborations among agents requires a dynamic balance of personality and commonality, which is hidden in dynamic parameter sharing at both the individual and team levels.
C ABLATION STUDY IN GRF
As shown in Figure 8, we additionally verified the performance of the single-layer dynamic parameter sharing mechanism (only individual level) in two GRF scenarios.
Unlike the experimental results on SMAC and LBF, single-level dynamic parameter sharing outperforms full static parameter sharing (QMIX) and can even match bi-level parameter sharing in academy 3 vs 1 with keeper. This phenomenon arises because, in the GRF environment, agents move over a much wider range, so their observation spaces differ far more than in the other two environments; full parameter sharing is effective mainly when agents have similar observation spaces (Christianos et al., 2021), which explains its weaker performance here.
We need to point out that, in these two scenarios, the single-level dynamic parameter sharing mechanism is not as stable as our bi-level dynamic parameter sharing mechanism. This is because, on the one hand, training becomes unstable when agents' sharing partners change; on the other hand, agents that do not share parameters cannot establish complementary cooperative relationships. This again shows that our approach is right to consider both the individual and team levels.
D THE EFFECT OF THE NUMBER OF VIRTUAL ROLES
During the experiments, we found that the number of virtual roles affects the performance of the entire algorithm. As shown in Figure 11, in the SMAC environment we consider only the impact of the number of virtual roles on the experimental results.
As we analyzed in the text, setting too many virtual roles will not improve the performance of the algorithm.
E FULL EXPERIMENTAL RESULTS IN SMAC
F HYPERPARAMETERS SETTING
In our method, we design a role selector and a group selector to guide agents in selecting appropriate partners for dynamic parameter sharing at the individual and team levels. The core of the role selector is a gated recurrent unit variational autoencoder (GRU-VAE). Besides the GRU unit, the encoder and decoder that make up the VAE are composed of simple linear layers and activation functions. The group selector consists of a graph attention network with a 128-dimensional output state and the corresponding network layers (a linear layer and a GRU unit with a 64-dimensional hidden state). At the individual level, we obtain the agents' roles by applying the softmax function to the bottleneck of the GRU-VAE. For the grouping of agents at the team level, since grouping is an additional reinforcement learning task, we use the same ε-greedy method as for the agents' action selection. We note that, apart from the newly designed components of our method, we keep the rest of the structure, parameters, and optimization methods as consistent as possible with the baseline QMIX, so that the ablation experiments can demonstrate that the utility of the different components comes entirely from our design.
G EXPERIMENTAL DETAILS
For our method and the baseline algorithms, we tested 2M and 5M steps on all hard maps and all super hard maps of SMAC, respectively, using five random seeds. Because of the ε-greedy method, we set ε to be linearly annealed from 1.0 to 0.05 over 70K time steps on all hard maps and three of the super hard maps, keeping it constant for the rest of training. For the remaining super hard map, 6h vs 8z, we set the annealing time to 500K.
In GRF, we tested 10M and 8M steps on academy 3 vs 1 with keeper and academy pass and shoot with keeper, respectively, using three random seeds. We set ε to be linearly annealed from 1.0 to 0.05 over 100K time steps in both scenarios. The observation space we designed contains the agent's own location; the position, direction, and possession information of the ball; and the teammate and opponent information within the visual range.
To test our method components’ effectiveness in the 15×15-4p-5f scenario of the LBF environment, we set the experimental test step as 20M and the annealing time of ε as 200K. The setting of ε remains the same in our method and baseline algorithms.
For the virtual roles and groups in our method, to ensure that the collaborative relationships between agents are sufficiently complementary, we require the number of agents n, the number of virtual roles k, and the number of groups m to satisfy n ≥ k ≥ m ≥ 2. As the plots of virtual-role count versus win rate in our ablation experiments show, the number of virtual roles cannot be increased blindly: doing so not only brings a vast number of parameters but also makes complementary collaboration between agents impossible. The numbers of virtual roles used in SMAC and LBF are shown in Table 2. For the update periods of the agents' virtual roles and groups, we have TV = e · TU. To guarantee that the virtual roles and groups are updated at least once per episode, we require the update periods of the virtual roles and the groups to satisfy TU, TV < len(episode), and we set them based on the average episode length. In general, we use update periods of 5, 7, 8, 10, or 14 with e = 1 or 2.
Our experiments are run on Ubuntu 18.04 with an NVIDIA GTX 3090 GPU, an Intel i9-12900K CPU, and 128 GB of memory. The SMAC version is 4.10, the 15×15-4p-5f scenario in LBF is v2, and the GRF version is 2.10. We will open-source our algorithm and experimental environment code in due course.
1. What is the focus and contribution of the paper on cooperative multi-agent tasks?
2. What are the strengths and weaknesses of the proposed bi-level dynamic parameter sharing mechanism?
3. Do you have any concerns regarding the motivation and key insights behind the proposed method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions regarding the proposed method that the reviewer would like to know more about?
Summary Of The Paper
This paper proposes a bi-level dynamic parameter sharing mechanism (BDPS) to achieve better coordination among agents in cooperative multi-agent tasks. The key idea is rather than blindly sharing parameters among all agents, they share parameters among agents based on their roles and groups. At the agent level, they determine the roles of different agents based on the advantage sequence information and agents with the same role share parameters. At the team level, they determine the groups of different roles using a graph attention network and agents within the same group share parameters.
Strengths And Weaknesses
Strengths:
The proposed method seems to achieve some good experimental results in different cooperative tasks.
Weaknesses:
The proposed method is not very well motivated and the key insights/intuitions behind it are not very clear.
Many details of the proposed method are not clear to me, making it hard for me to judge the technical contributions.
Clarity, Quality, Novelty And Reproducibility
The paper is not very well-written and hard for me to follow. Many descriptions about the proposed method are ambiguous or confusing to me (see my detailed comments below).
The proposed method does not look novel enough to me. Variational autoencoder and graph attention network have been widely used in existing works and the proposed method seems to be using them in a rather naive way. The idea of using the advantage sequence information to select individual roles for the agents seems to be new. But I do not quite understand the intuition behind it. |
ICLR | Title
Bi-Level Dynamic Parameter Sharing among Individuals and Teams for Promoting Collaborations in Multi-Agent Reinforcement Learning
Abstract
Parameter sharing has greatly contributed to the success of multi-agent reinforcement learning in recent years. However, most existing parameter sharing mechanisms are static, and parameters are indiscriminately shared among individuals, ignoring the dynamic environments and different roles of multiple agents. In addition, although a single-level selective parameter sharing mechanism can promote the diversity of strategies, it is hard to establish complementary and cooperative relationships between agents. To address these issues, we propose a bi-level dynamic parameter sharing mechanism among individuals and teams for promoting effective collaborations (BDPS). Specifically, at the individual level, we define virtual dynamic roles based on the long-term cumulative advantages of agents and share parameters among agents in the same role. At the team level, we combine agents of different virtual roles and share parameters of agents in the same group. Through the joint efforts of these two levels, we achieve a dynamic balance between the individuality and commonality of agents, enabling agents to learn more complex and complementary collaborative relationships. We evaluate BDPS on a challenging set of StarCraft II micromanagement tasks. The experimental results show that our method outperforms the current state-of-the-art baselines, and we demonstrate the reliability of our proposed structure through ablation experiments.
1 INTRODUCTION
In many areas, collaborative Multi-Agent Reinforcement Learning (MARL) has broad application prospects, such as robot cluster control (Buşoniu et al., 2010), multi-vehicle autonomous driving (Bhalla et al., 2020), and shop scheduling (Jiménez, 2012). In a multi-agent environment, an agent should observe the environment's dynamics and understand the learning policies of other agents in order to form good collaborations. Real-world scenarios usually involve a large number of agents with different identities or capabilities, which places higher demands on collaboration among agents. It is therefore particularly important to solve the large-scale MARL problem and to promote stable, complementary cooperation among agents with different identities and capabilities.
To address the large-scale agent issue, many collaborative MARL works adopting the centralized training paradigm use a full static parameter sharing mechanism (Gupta et al., 2017), which lets agents share the parameters of their policy networks, thus simplifying the algorithm structure and improving efficiency. This mechanism is effective because agents generally receive similar observations in the existing narrow and simple multi-agent environments. In our Google Research Football (GRF) (Kurach et al., 2020) experiments, we find that blindly applying full parameter sharing does not improve performance, because the observations of different players differ greatly as they move. At the same time, because full static parameter sharing ignores the identities and abilities of different agents, it limits the diversity of agents' behavior policies (Li et al., 2021; Yang et al., 2022), making it difficult to form complementary and reliable cooperation between agents in complex scenarios.
Recently, to eliminate the disadvantages of full parameter sharing, single-level selective parameter sharing mechanisms have been proposed (Christianos et al., 2021; Wang et al., 2022): an encoder extracts deep features from agents' observations and clusters them, so that parameters are shared only among agents assigned to the same cluster.
Although the single-level selective parameter sharing mechanism can promote the diversity of agents' policies, it fragments the relationships between agents that do not share parameters at the same time, so agents cannot establish complementary cooperative relationships on a broader scale. More importantly, designing an effective selector is the key to selective parameter sharing, especially for a single-level dynamic mechanism, which must complete the selection within a few rounds. Most methods use only agents' real-time observations, ignoring their histories, which hinders correctly mining the agents' implicit identity characteristics. Consider a football team: players receive special training by position, such as shooting training for forwards and defensive training for defenders. However, winning a game requires both this special training and coordination between players of different roles. That is, we not only need to share parameters among agents of the same role, but also need to combine agents of different identities to ensure that they form robust and complementary collaboration on a larger scale.
To address these issues, in this paper we propose a bi-level dynamic parameter sharing mechanism among individuals and teams (BDPS). An agent's advantage function expresses the benefit of taking an action relative to the average in the current state, and we consider it a better indicator of the agent's actual role than raw observations. To identify roles more accurately, at the individual level we compute the long-term advantage information of the agents as the key to virtual role identification, use a variational autoencoder (VAE) (Kingma & Welling, 2014) to learn the distribution of the agents' role characteristics, and obtain more accurate virtual roles directly by sampling from this distribution. To alleviate the fragmentation of relationships between agents of different virtual roles caused by a single-level dynamic parameter sharing mechanism, we further use a graph attention network (GAT) (Velickovic et al., 2018) to learn the topological relationships between the roles obtained at the individual level, so as to combine agents of different identities at a higher level and over a broader range. Through this design, we achieve dynamic, selective parameter sharing at two levels, individual and team, fostering stable, complementary collaboration among agents over a wider scope while preserving the diversity of agents' policies.
We test BDPS and algorithms using different parameter sharing mechanisms on the StarCraft II micromanagement environments (SMAC) (Samvelyan et al., 2019) and Google Research Football (GRF) (Kurach et al., 2020). The experimental results show that our method not only generally outperforms other methods with single-level selective or full parameter sharing on all super hard maps and four hard maps of SMAC, but also performs well in the GRF scenarios we use. In addition, we carried out ablation experiments to verify the influence of different parameter sharing settings on the formation of complementary cooperation between agents, which demonstrates the reliability of our proposed method.
2 BACKGROUND
2.1 DECENTRALIZED PARTIALLY OBSERVABLE MARKOV DECISION PROCESS
A fully cooperative MARL task can usually be modeled as a decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016), represented by a tuple $M = \langle \mathcal{N}, \mathcal{S}, \mathcal{A}, R, P, \Omega, O, \gamma \rangle$, where $\mathcal{N}$ is the finite set of $n$ agents, $s \in \mathcal{S}$ is a finite set of global states, and $\gamma \in [0, 1)$. At each time step $t$, each agent $i \in \mathcal{N}$ receives a local observation $o_i \in \Omega$ according to the observation function $O(s, i)$, takes an action $a_i \in \mathcal{A}$ to form a joint action $\mathbf{a} \in \mathcal{A}^n$, and receives a shared global reward $r = R(s, \mathbf{a})$. Due to partial observability, each agent conditions its policy $\pi_i(a_i|\tau_i)$ on its own local action-observation history $\tau_i \in \mathcal{T} \equiv (\Omega \times \mathcal{A})^*$. Together, the agents aim to maximize the expected return, i.e., to find a joint policy $\boldsymbol{\pi} = \langle \pi_1, \ldots, \pi_n \rangle$ that maximizes the joint action-value function $Q^{\boldsymbol{\pi}} = \mathbb{E}_{s_{0:\infty}, \mathbf{a}_{0:\infty}}\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, \mathbf{a}_0 = \mathbf{a}, \boldsymbol{\pi}\right]$.
2.2 CENTRALIZED TRAINING WITH DECENTRALIZED EXECUTION
In many MARL settings, partial observability problems can be handled by the centralized training with decentralized execution (CTDE) paradigm (Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2020; Wang et al., 2021a; Son et al., 2019), which is currently the mainstream in MARL. Training is centralized; after training, agents make decisions based only on their local observations and the trained policy network. Thus, the problems of non-stationary environments and large numbers of agents can both be alleviated to a certain extent.
2.3 VARIATIONAL AUTOENCODER
Variational autoencoder (VAE) (Kingma & Welling, 2014) is a generative network model based on Variational Bayesian (VB) (Fox & Roberts, 2012) inference. Two probability density models are established: the inference network $q_\phi$ and the generative network $p_\xi$. The inference network performs variational inference on the original input data $x$ to produce a variational distribution over the hidden variables $z$; the generative network reconstructs an approximate distribution of the original data from these latent variables. The VAE can thus be divided into two processes: the approximate inference of the posterior over hidden variables, $q_\phi(z|x)$, and the generation process, $p_\xi(z)\, p_\xi(\hat{x}|z)$. To make $q_\phi(z|x)$ approximately equal to the true posterior $p_\xi(z|x)$, the VAE uses the KL-divergence to measure the similarity between them:
$$D_{\mathrm{KL}}\big(q_\phi(z|x)\,\|\,p_\xi(z|x)\big) = \log p_\xi(x) + \mathbb{E}_{q_\phi(z|x)}\big[\log q_\phi(z|x) - \log p_\xi(x, z)\big], \tag{1}$$
where the term $\log p_\xi(x)$ is called the log-evidence and is constant. The remaining term is the negative evidence lower bound (ELBO). The VAE with an additional GRU network is widely used in sequence anomaly detection (Su et al., 2019). In this paper, we use it to identify the identity feature distribution behind an agent's advantage information sequence over a period of time, to better guide agents in choosing appropriate parameter-sharing partners.
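To make this concrete, below is a minimal PyTorch sketch of a GRU-VAE trained with the negative ELBO of Eq. (1). This is not the authors' implementation; all module names, dimensions, and the MSE reconstruction choice are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUVAE(nn.Module):
    """Minimal GRU-VAE: a GRU summarizes a sequence, the encoder yields a
    diagonal Gaussian over z, and a linear decoder reconstructs the input."""
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        self.decoder = nn.Linear(z_dim, in_dim)

    def forward(self, x):                       # x: (batch, seq_len, in_dim)
        _, h = self.gru(x)                      # h: (1, batch, hid_dim)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

def negative_elbo(x_hat, target, mu, logvar):
    """Reconstruction loss plus KL[q(z|x) || N(0, I)] -- the negative ELBO."""
    recon = F.mse_loss(x_hat, target)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return recon + kl
```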
2.4 GRAPH ATTENTION NETWORK
Graph attention network (GAT) (Velickovic et al., 2018) is a network architecture based on an attention mechanism, which can learn different weights for different neighbors through the attention mechanism. For calculating the output characteristics of node i, GAT first trains a shared weight matrix W for all nodes to obtain the weight of each neighbor node of node i. Then, according to the weight, the attention coefficients between node i and its neighbor nodes are calculated. Finally, these attention coefficients are weighted and summed to obtain the output features h′i of node i:
$$h'_i = \sigma_1\left(\sum_{j\in\mathcal{N}_i} \frac{\exp\left(\sigma_2\left(a^T[Wh_i\,\|\,Wh_j]\right)\right)}{\sum_{k\in\mathcal{N}_i}\exp\left(\sigma_2\left(a^T[Wh_i\,\|\,Wh_k]\right)\right)}\, Wh_j\right) \quad (\text{1-Head}), \tag{2}$$
where σ1 and σ2 represent nonlinear functions, Ni represents the first-order neighborhood of node i, aT represents the transpose of the weight vector a and || is the concatenation operation. In recent years, MARL with communication has widely used GAT to determine the explicit communication goals of agents (Niu et al., 2021; Seraj et al., 2021). In this paper, we use the GAT to establish the topological relationship between different virtual roles, and further combine the agents under different virtual role mappings to form complementary and reliable cooperative relationships.
3 OUR METHOD
In this section, we introduce the proposed BDPS in detail. BDPS mainly comprises an individual level and a team level, as shown in Figure 1. We take inspiration from people: a person may play multiple roles and can switch roles freely across scenes, which supports their stable development; the same holds for agents. Therefore, we choose to spend some extra network parameters so that agents can maintain multiple roles and groups at the same time, and we use the role selector U and the group selector V to choose the most suitable role and group for each agent.
First, we explain the relationship between agents and individual virtual roles and groups.
Definition 1 Given $n$ ($n \geq 2$) agents, $k$ ($2 \leq k \leq n$) roles and $m$ ($2 \leq m \leq k$) groups, we have the following mapping relationships: $f_U : \mathcal{N} \mapsto \mathcal{K}$ and $f_V : \mathcal{K} \mapsto \mathcal{M}$, where $\mathcal{K}$ and $\mathcal{M}$ represent finite sets of $k$ roles and $m$ groups, respectively.
From $f_V$, we can see that agents' groups depend on their roles, so we relate the update periods of groups and roles by $T_V = e \cdot T_U$, where $T_V$ is the update period of the groups, $T_U$ is the update period of the roles, and $e \in \mathbb{Z}^+$.
3.1 INDIVIDUAL LEVEL DESIGN
Different from existing methods that determine agents' roles from real-time observations, we consider the impact of the agents' long-term cumulative advantages on role definition, because advantage information represents an agent's current state better than observation information does. Likewise, unlike existing methods that only use an encoder to learn role characteristics for clustering, we learn the long-term advantages of agents through a VAE to obtain the distribution of roles, since in many cases an agent's role should not change just because it takes a single unusual action. At the individual level in Figure 1, we first maintain the role selector U to select appropriate roles $u_U$ for the agents, and then share parameters among agents with the same role.
3.1.1 CHOOSE APPROPRIATE ROLES FOR AGENTS
Firstly, the role selector U feeds the pre-computed advantage sequence $A_{t-T_U:t-1} = \{A_{t-T_U}, A_{t-T_U+1}, \ldots, A_{t-1}\}$ into a GRU network to capture the complex temporal dependence between the agents' advantage information over the virtual roles' update period. Secondly, when the time meets the periodic condition for updating virtual roles, we take the hidden output $h_{t-1}$ as the agents' characteristics and input it into the encoder $E_\phi$ of the VAE. Finally, we obtain the virtual roles $u_U$ of the agents from the $k$-dimensional Gaussian distribution output by the encoder $E_\phi$.
For agent $i$, its advantage information at time $t$ is $A_t^i = \max\{Q_t^i\} - Q_t^i$, where $Q_t^i$ denotes agent $i$'s local Q-function at time $t$. Unlike the classical advantage function $A = Q - V$, ours satisfies $A_t^i \geq 0$, which makes it convenient for the decoder $D_\xi$ to reconstruct the input. When the virtual role $u_U^i$ of agent $i$ needs to be updated at time $t$, it is obtained from the VAE's bottleneck:
$$u_U^i = \arg\max\{z_U^i\}, \tag{3}$$
where $z_U = \mu_U + \sigma_U \cdot \epsilon_U$, $\epsilon_U \sim \mathcal{N}(0, I)$.
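A minimal sketch of this role assignment, reusing the hypothetical GRUVAE module sketched earlier; the helper name, tensor shapes, and the assumption that the latent dimension equals the number of roles $k$ are ours:

```python
import torch

@torch.no_grad()
def select_roles(vae, adv_seq):
    """Assign each agent a virtual role via Eq. (3).

    vae:     the (hypothetical) GRUVAE sketched above, with z_dim = k roles
    adv_seq: (n_agents, T_U, adv_dim) advantage sequences A_{t-T_U:t-1}
    returns: integer role ids u_U in {0, ..., k-1}, one per agent
    """
    _, h = vae.gru(adv_seq)                      # temporal summary of the window
    h = h.squeeze(0)
    mu, logvar = vae.to_mu(h), vae.to_logvar(h)  # k-dimensional Gaussian bottleneck
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # z_U = mu + sigma * eps
    return z.argmax(dim=-1)                      # u_U^i = argmax z_U^i
```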
3.2 TEAM LEVEL DESIGN
The purpose of introducing grouping is to eliminate the fragmentation of relationships between agents caused by a single-level parameter sharing mechanism, and to establish cooperative relationships between agents of different roles over a wider range. As shown in Figure 1, the team level mainly consists of the group selector V and grouping cooperative networks. The group selector V is realized through an additional reinforcement learning task, for which we introduce a GAT to find the correlations between different dynamic roles and to make the role composition in the grouping results more comprehensive and complete. For parameter sharing, we additionally feed the hidden information $h_t^U$ from the individual role level into the team level, so that individual identities can fully exert their positive impact on the team results.
3.2.1 COMBINE DIFFERENT ROLES TO ACHIEVE GROUPING
The group selector V takes the virtual roles $u_U$ as inputs. This information is encoded into per-agent feature vectors via a multi-layer perceptron (MLP). These feature vectors form a set $\{x_t^1, x_t^2, \ldots, x_t^N\}$ that is input into the GAT. We use the self-attention mechanism in the GAT to calculate the attention coefficient $e_{ij}$ between agents, which serves as an essential basis for grouping:
$$e_{ij} = \mathrm{LeakyReLU}\left(a^T\left[W x_t^i \,\|\, W x_t^j\right]\right), \tag{4}$$
where $i, j \in \mathcal{N}$. In our method, since agents have similar observations and are often located close together, and since we impose no rigid separation between the roles of different agents, we allow each agent to attend to all surviving agents (including itself) at the current time step. To make the attention coefficients comparable across agents, we normalize them with the softmax function.
$$x_t^{i\prime} = \overset{K}{\underset{k=1}{\Big\Vert}}\, \sigma\left(\sum_{j\in\mathcal{N}_i} \alpha_{ij}^k W^k x_t^j\right), \tag{5}$$
where $K$ denotes the number of attention heads, which we set equal to the number of virtual roles; we hope to identify how different roles, as dominant factors, influence the formation of teams. For the other terms in Equation 5, $\mathcal{N}_i$ denotes the first-order neighborhood of agent $i$, and $\alpha_{ij} = \mathrm{softmax}_j(e_{ij})$ is the normalized attention coefficient indicating the importance of agent $j$'s features to agent $i$.
A new set of feature vectors $x_t' = \{x_t^{1\prime}, x_t^{2\prime}, \ldots, x_t^{N\prime}\}$ is obtained after the GAT and is input into the GRU network to obtain the hidden state that affects the grouping of agents. When the time increment since the last update equals the group update period $T_V$, we use the hidden state to output the group values and apply $\varepsilon$-greedy selection to choose groups $v_V$ for the agents.
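The following sketch illustrates one plausible implementation of the group selector (Eqs. 4-5 plus the GRU and the $\varepsilon$-greedy choice). It is a reconstruction from the text, not the released code; all names and dimensions are assumptions, and we use an ELU as the nonlinearity $\sigma$ in Eq. (5).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupSelector(nn.Module):
    """Team-level selector: K-head graph attention (Eqs. 4-5), a GRU over the
    aggregated features, and epsilon-greedy group selection."""
    def __init__(self, role_dim, feat_dim, n_heads, n_groups):
        super().__init__()
        self.enc = nn.Linear(role_dim, feat_dim)
        self.W = nn.ModuleList([nn.Linear(feat_dim, feat_dim, bias=False)
                                for _ in range(n_heads)])
        self.a = nn.ParameterList([nn.Parameter(torch.randn(2 * feat_dim))
                                   for _ in range(n_heads)])
        self.gru = nn.GRUCell(n_heads * feat_dim, feat_dim)
        self.q_head = nn.Linear(feat_dim, n_groups)

    def forward(self, roles_onehot, h_prev, eps=0.05):
        x = self.enc(roles_onehot)                         # (n_agents, feat_dim)
        heads = []
        for W, a in zip(self.W, self.a):
            Wx = W(x)                                      # (n, d)
            n = Wx.size(0)
            pairs = torch.cat([Wx.unsqueeze(1).expand(n, n, -1),
                               Wx.unsqueeze(0).expand(n, n, -1)], dim=-1)
            e = F.leaky_relu(pairs @ a)                    # Eq. (4): e_ij, (n, n)
            alpha = F.softmax(e, dim=-1)                   # normalize over neighbors j
            heads.append(F.elu(alpha @ Wx))                # Eq. (5), one head
        x_prime = torch.cat(heads, dim=-1)                 # concatenate K heads
        h = self.gru(x_prime, h_prev)                      # hidden state for grouping
        q = self.q_head(h)                                 # group values per agent
        greedy = q.argmax(dim=-1)
        rand = torch.randint(q.size(-1), greedy.shape)
        explore = torch.rand(greedy.shape) < eps
        return torch.where(explore, rand, greedy), h       # epsilon-greedy groups v_V
```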
3.3 OVERALL OBJECTIVES
3.3.1 GET THE LOCAL Q-FUNCTIONS REQUIRED BY AGENTS
In previous sections, we used the role selector U at the individual level and the group selector V at the team level to map agents from actual individuals to virtual roles $u_U$ and groups $v_V$, respectively, and established the bi-level parameter sharing mechanism from the resulting roles and groups. Although hidden states from the individual level contribute to the local Q-functions computed at the team level, we should not completely discard the explicit contribution of the individual level. We therefore define the local Q-function as:
$$Q(\tau, a) = Q_U(\tau, a) + Q_V(\tau, a). \tag{6}$$
3.3.2 TRAIN THE MODULES INCLUDED IN OUR METHOD
Starting from the individual level, we need to train the role selector U to select appropriate virtual roles for the agents. Its training objective consists of a reconstruction term and a KL-divergence term, both of which we minimize:
$$\mathcal{L}_U(\xi, \phi; A_U) = \mathcal{L}_{mse}\big(A_U, \hat{A}_U\big) + \lambda\, D_{\mathrm{KL}}\big[q_\phi(z_{l_U}|h_{l_U})\,\|\,p_\xi(z_{l_U})\big], \tag{7}$$
where $\mathcal{L}_{mse}(\cdot)$ is the mean squared error computing the reconstruction loss of the advantage sequence, $l_U$ is the length of $A_U$, $\lambda$ is a scaling factor, and $\hat{A}_U$ is the reconstructed advantage sequence obtained from the decoder's output $\hat{h}_{l_U}$.
For the group selector V at the team level, we follow the QMIX (Rashid et al., 2018) and RODE (Wang et al., 2021b) methods, since generating groups for agents is an additional deep reinforcement learning task:
$$\mathcal{L}_V(\theta_\nu) = \sum_b \left(\sum_{\Delta t=0}^{T_V - 1} r_{t+\Delta t} + \gamma \max_{v'_V} \bar{Q}^V_{tot}(\tau', \mathbf{a}', u'_U, s') - Q^V_{tot}(\tau, \mathbf{a}, u_U, s)\right)^2, \tag{8}$$
where $b$ is the batch size of transitions sampled from the replay buffer.
In order for agents to use global rewards to learn local Q-functions, we input the local Q-value into the mixing network of QMIX (Rashid et al., 2018) again to estimate the global action value Qtot (τ ,a):
$$\mathcal{L}_{TD}(\theta_\mu) = \sum_b \left[\left(r + \gamma \max_{\mathbf{a}'} \bar{Q}_{tot}(\tau', \mathbf{a}', s') - Q_{tot}(\tau, \mathbf{a}, s)\right)^2\right]. \tag{9}$$
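Both Eq. (8) and Eq. (9) are standard TD objectives over mixed values; a minimal sketch is given below. The mixing networks follow QMIX and are omitted here; the argument names and the single shared function are our simplifications, not the paper's code.

```python
import torch

def td_loss(q_tot, q_tot_target, rewards, gamma=0.99):
    """Generic TD objective covering Eq. (8) and Eq. (9).

    q_tot:        Q_tot(tau, a, s) for the chosen actions, shape (batch,)
    q_tot_target: max over next actions of the target mixing network, (batch,)
    rewards:      r for Eq. (9), or the T_V-step return sum of r_{t+dt} for Eq. (8)
    """
    target = rewards + gamma * q_tot_target.detach()  # bootstrapped target
    return ((target - q_tot) ** 2).mean()
```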
4 EXPERIMENTS
In this section, we demonstrate and evaluate the advantages of our proposed BDPS on challenging tasks in the StarCraft II micromanagement environments (SMAC) and Google Research Football (GRF). We compare our method with QMIX (Rashid et al., 2018) under both full parameter sharing and no parameter sharing, and further with several baselines aimed at promoting the diversity of agents' policies, such as CDS (Li et al., 2021), EOI (Jiang & Lu, 2021) and RODE (Wang et al., 2021b). Finally, we ablate our method on SMAC to verify the true utility of its components.
4.1 PERFORMANCE ON GOOGLE RESEARCH FOOTBALL (DEC-POMDP)
In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis. We provide an observation space setting for agents in the Dec-POMDP version. The observation space of agents can dynamically change with their motion vectors. See Appendix A.2 for details.
First, we compare our method with the baselines on the Dec-POMDP version of GRF that we provide. Compared with the experimental scenarios provided by CDS (Li et al., 2021), we use the algorithm to control all agents in the left team, use GRF's built-in scoring and checkpoints reward settings, and configure the left-team agents so that they are not moved by the built-in AI when they do not touch the ball.
As shown in Figure 2, we select two scenarios, academy 3 vs 1 with keeper and academy pass and shoot with keeper, for comparison, and report both the actual reward received by the agents and the goal score in each. Overall, our method achieves better results than the other baseline algorithms in both scenarios. Among the baselines, the full parameter sharing version of QMIX is better than the non-sharing version, and the other improved baselines built on full-parameter-sharing QMIX can also score goals, though the improvement is not obvious.
As we mentioned in Section 1, the full parameter sharing mechanism ignores the identities and capabilities of agents, depriving them of diverse behavior policies. As a result, full parameter sharing cannot improve the algorithm's effectiveness when agents' observations differ greatly. For single-level dynamic selective parameter sharing, the relationships between agents that do not share parameters are cut off; although the selective mechanism can achieve better results, the agents cannot form a sufficiently stable cooperative relationship, causing performance to fluctuate significantly. See Appendix C for the specific results.
In both scenarios, we can see that RODE differentiates the roles of the agents by limiting their action spaces, but this limitation also prevents the agents from improving further during training. From the experimental results, we find that these baseline algorithms lose the ball in these two academy scenarios, indicating that the agents do not form a sufficiently complementary collaborative relationship.
4.2 PERFORMANCE ON STARCRAFT II
As can be seen from Figure 3, our method is superior to other baseline methods on these maps, especially in maps with a larger number of agents, where our method is able to maintain its advantage. Of course, EOI and RODE also show good stability in these maps, and the full parameter sharing version of QMIX is still better than the no parameter sharing version.
Specifically, our method has more advantages than CDS, which emphasizes balancing the personalities and commonalities of agents. As we mentioned, exploring the dynamic balance between individual personalities and commonalities is essential, but this balance is difficult to define, so we do not introduce factors in Equation 6 to weight the relative importance of the individual and team levels. The balance is dynamic and emerges from the dynamic parameter sharing process we design; we describe it in detail in the ablation experiments in Section 4.3.
4.3 ABLATION STUDY
In this section, we conduct ablation studies to understand the actual utility of each level in our bi-level dynamic parameter sharing mechanism. In addition to reporting the win rates on SMAC maps, we calculate the entropy difference between the individual and team levels to quantify the advantages of the different components.
Figure 4: Comparison of different parameter sharing mechanisms in 5m vs 6m.
Figure 5: Comparison of different virtual roles in MMM2 (Only individual level).
We first use the homogeneous agents’ map 5m vs 6m in SMAC to conduct ablation research to analyze the advantages of dynamic parameter sharing over full and no parameter sharing. As shown in Figure 4, we can see that the method of applying parameter sharing has apparent advantages over the method without parameter sharing. Of course, our method is better than QMIX’s full static parameter sharing. Specifically, the single-level dynamic parameter sharing has a weak advantage over the full static parameter sharing. Our bi-level dynamic parameter sharing is more advantageous than the single-level dynamic and full static, which is exactly the goal of our proposed method.
As shown in Figure 5, we also compare the impact of different numbers of virtual roles on the collaboration of agents in the heterogeneous agents’ map MMM2. From the curve in the figure, we can find that the winning rate in the MMM2 map increases first and then decreases with the number of virtual roles, which indicates that blindly increasing the number of virtual roles can adversely affect the collaboration of agents.
To quantify the difference between our single-level and bi-level dynamic parameter sharing, we analyze them from an information-theoretic perspective. We first introduce the evaluation metric: the entropy difference.
Entropy describes uncertainty. We characterize the capabilities of agents at the two levels by comparing the entropy of their action-value distributions:
$$\Delta H = \frac{1}{N}\sum_{i=1}^{N}\left(\sum_{a_i} p_V^i \cdot \log\frac{1}{p_V^i} - \sum_{a_i} p_U^i \cdot \log\frac{1}{p_U^i}\right), \tag{10}$$
where $p_V^i = \mathrm{softmax}\, Q_V^i(\tau^i, a_{t-1}^i, u_U^i)$ and $p_U^i = \mathrm{softmax}\, Q_U^i(\tau^i, a_{t-1}^i)$.
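A short sketch of how the entropy difference of Eq. (10) can be computed from the two levels' action values; the function name and tensor shapes are our assumptions:

```python
import torch
import torch.nn.functional as F

def entropy_difference(q_v, q_u):
    """Eq. (10): mean over agents of H(softmax Q_V) - H(softmax Q_U).

    q_v, q_u: (n_agents, n_actions) action values at the team and individual levels.
    """
    def entropy(q):
        p = F.softmax(q, dim=-1)
        return -(p * torch.log(p + 1e-8)).sum(dim=-1)  # H = sum_a p * log(1/p)
    return (entropy(q_v) - entropy(q_u)).mean()
```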
As shown in Figure 6, in 5m vs 6m the information entropy at the team level is always smaller than at the individual level, and the gap between the two keeps growing. This change of entropy shows that strategies formed at the team level are more orderly than those formed by agents at the individual level alone, confirming that our team-level design can find commonalities among agents with different virtual roles and establish correct collaborations between them. For MMM2 in Figure 6, the conclusion still holds: whether $k = 3$ or $k = 5$, the entropy difference between the team and individual levels keeps expanding during training, with the team-level entropy steadily becoming smaller than the individual-level entropy. This phenomenon is consistent with the win rates in Figure 4 and Figure 5.
5 RELATED WORK
5.1 PARAMETER SHARING
Parameter sharing plays an important role in MARL. Tan (1993) first studied the positive role of "sharing" in promoting collaborative agents in classical reinforcement learning algorithms. Gupta et al. (2017) proposed parameter sharing variants of single-agent DRL algorithms, introducing the parameter sharing mechanism into homogeneous MARL. Parameter sharing has since been widely used as an implementation detail of homogeneous multi-agent algorithms, such as QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and Qatten (Yang et al., 2020). Recently, Terry et al. (2020) extended parameter sharing to heterogeneous-agent algorithms via a padding-based method, again demonstrating its importance for multi-agent algorithms. Recent works have also pointed out that full parameter sharing tends to make agents' behavior policies identical; Christianos et al. (2021) proposed a selective parameter sharing mechanism to eliminate this limitation.
5.2 INDIVIDUALS AND TEAMS
Focusing on the development of both individuals and teams not only helps agents maintain a variety of policies but also helps them form more stable collaborations. At the individual level, Wang et al. (2020; 2021b) focus on discovering the character traits behind the agents; Jiang & Lu (2021) study the identifiability of agent trajectories and fixed identities; and Du et al. (2019) propose using internal rewards to stimulate diverse behaviors among agents. At the team level, Iqbal et al. (2021) randomly partition agents into related and unrelated groups, letting agents explore only specific entities in their environment; Wang et al. (2022) implement a dynamic grouping method by extracting the potential intentions of agents as tags; and Li et al. (2021) divide agents' value functions into shared and unshared parts to capture both personality and commonality. In this paper, we focus on the impact of both the individual and team levels on complementary collaboration among agents. Our approach does not simply overlap the two levels, but fully extends and combines the individual identities acquired by the agents.
6 CONCLUSION AND FUTURE WORK
In this paper, we proposed BDPS, a novel bi-level dynamic parameter sharing mechanism in MARL. By maintaining a role selector and a group selector, BDPS provides a new solution for agents to select the partners for parameter sharing in a timely and dynamic manner at both individual and team levels. And we integrate the roles and groups of agents into a whole to achieve a dynamic balance between their personalities and commonalities. Our experiments on SMAC and GRF show that BDPS can significantly facilitate the formation of complementary and reliable collaboration between agents.
Additionally, although agents may occupy any role or group over time, they cannot explicitly reuse the knowledge acquired while in other roles or groups. How to explicitly exploit previously learned role knowledge, and other similar knowledge, therefore remains to be explored. We will present this paradigm of knowledge flow across roles, groups, and agents in future work.
A MULTI-AGENT ENVIRONMENTS
In this paper, we use three multi-agent environments, the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019), Google Research Football (GRF) (Kurach et al., 2020) and Level-Based Foraging (LBF) (Papoudakis et al., 2021; Albrecht & Ramamoorthy, 2015; Albrecht & Stone, 2019), to conduct verification experiments.
A.1 SMAC
The StarCraft Multi-Agent Challenge is a fully collaborative, partially observable set of multi-agent tasks. This environment implements various micromanagement tasks based on the popular real-time strategy game StarCraft II. Each mission is a specific battle scenario in which a group of agents, each controlling a single unit, fights against an army controlled by the built-in AI of StarCraft II.
Compared with the LBF, SMAC provides more abundant battle scenes for agents. We conducted comparative experiments with baseline methods on hard and super hard maps. The characteristics of relevant maps are shown in Table 1.
A.2 GRF
In Google Research Football (GRF) tasks, agents need to cooperate according to the rules of football matches to win games or complete related training tasks. The reward settings of GRF fall into two types: one rewards only goals (scoring), and the other also gives a reward whenever an agent reaches a specific coordinate (checkpoints). Of the two, the first is sparser. In the official multi-agent example, agents are allowed to observe all information on the field, which is contrary to our research hypothesis, so we provide an observation space setting for agents in the Dec-POMDP version. The agents' observation spaces change dynamically with their motion vectors, as shown in Figure 7. Considering the size of the field, we set the agents' visual distance to 0.84 and the forward visual angle to 200°. The agents can observe the position and direction of the ball, and within their field of view we also allow them to observe the positions of their teammates and opponents.
In our experiment, we control all the players on the left side and set their movement state to be lazy (when not touching the ball, there is no built-in AI intervention), and the built-in AI in the system completely controls all the players on the right side.
A.3 LBF
The LBF is a multi-agent collection task built on a grid world, where each agent and each item is assigned a level. An item is successfully collected when the sum of the levels of the participating agents is equal to or greater than the item's level; the agents then receive a reward according to the collected item's level. LBF provides a sparser reward environment and stricter collaboration constraints than SMAC, which places higher demands on the effectiveness of multi-agent algorithms.
The difficulty of the LBF can be adjusted by the number of agents, the size of the grid world, the type of items, and the hard collaboration constraints. In this paper, we select 15×15-4p-5f as a supplement to the ablation experiment to verify whether the different components of our method are still valid in a more challenging collaborative setting. The 15×15-4p-5f scene indicates that 4 agents in a 15×15 grid world are involved in collecting 5 items.
B CASE STUDY IN LBF
As discussed in Appendix A.3, the level constraints and the sparse rewards pose a greater challenge for agents to establish complementary and reliable collaborations. To verify the effectiveness of our proposed dynamic parameter sharing mechanism in forming complementary relationships between agents, we evaluate different parameter sharing mechanisms in 15×15-4p-5f, including single-level dynamic parameter sharing (ours with only the individual level), bi-level dynamic parameter sharing (ours), full static parameter sharing (classic QMIX), and no parameter sharing at all (QMIX without sharing).
As shown in Figure 9, our method still yields the best return, demonstrating that it promotes more collaboration among agents. Although single-level dynamic parameter sharing can promote agent collaboration, it cannot always form complementary collaborative relationships. The result again confirms that forming complementary collaborations among agents requires a dynamic balance of personality and commonality, which is hidden in dynamic parameter sharing at both the individual and team levels.
C ABLATION STUDY IN GRF
As shown in Figure 8, we additionally verified the performance of the single-layer dynamic parameter sharing mechanism (only individual level) in two GRF scenarios.
Unlike the experimental results on SMAC and LBF, the single-level dynamic parameter sharing algorithm outperforms the full static parameter sharing of QMIX, and in academy 3 vs 1 with keeper it can even match the result of bi-level parameter sharing. This occurs because agents in GRF move over a much wider range, so their observation spaces differ more than in the other two environments; it supports the claim that full parameter sharing is effective mainly when agents have similar observation spaces (Christianos et al., 2021).
We need to point out that in both scenarios the single-level dynamic parameter sharing mechanism is not as stable as our bi-level mechanism. On the one hand, changes of sharing partners destabilize the agents; on the other hand, agents that do not share parameters cannot establish complementary cooperative relationships. This again confirms that our approach of considering both the individual and team levels is correct.
D THE EFFECT OF THE NUMBER OF VIRTUAL ROLES
During the experiments, we found that setting different numbers of virtual roles affects the performance of the entire algorithm. As shown in Figure 11, in the SMAC environment we consider only the impact of different numbers of virtual roles on the experimental results.
As we analyzed in the text, setting too many virtual roles will not improve the performance of the algorithm.
E FULL EXPERIMENTAL RESULTS IN SMAC
F HYPERPARAMETERS SETTING
In our method, we design a role selector and a group selector to guide agents in choosing appropriate partners for dynamic parameter sharing at the individual and team levels. The core of the role selector is a gated recurrent unit variational autoencoder (GRU-VAE); besides the GRU unit, the encoder and decoder that make up the VAE consist of simple linear layers and activation functions. The group selector consists of a graph attention network with a 128-dimensional output state and corresponding network layers (a linear layer and a GRU unit with a 64-dimensional hidden state). To elaborate, we obtain the agents' individual-level roles from the bottleneck of the GRU-VAE via the softmax function. For the team-level grouping, since grouping is an additionally designed reinforcement learning task, we use the same $\varepsilon$-greedy method as for the agents' action selection. We note that, apart from the newly designed components, we keep the structure, parameters, and optimization methods of the remaining parts as consistent as possible with the baseline QMIX, so that the ablation experiments can fully demonstrate that the real utility of the different components comes entirely from our design.
G EXPERIMENTAL DETAILS
For our method and the baseline algorithms, we train for 2M steps on all hard maps and 5M steps on all super hard maps of SMAC, using five random seeds. Since we use the $\varepsilon$-greedy method, we linearly anneal $\varepsilon$ from 1.0 to 0.05 over 70K time steps on all hard maps and three of the super hard maps, keeping it constant for the rest of training. For the remaining super hard map, 6h vs 8z, we set the annealing time to 500K.
In GRF, we train for 10M steps on academy 3 vs 1 with keeper and 8M steps on academy pass and shoot with keeper, using three random seeds, and linearly anneal $\varepsilon$ from 1.0 to 0.05 over 100K time steps in both scenarios. The partially observable space we design contains the agent's own location; the position, direction, and possession information of the ball; and the teammate and opponent information within the field of view.
To test our method components’ effectiveness in the 15×15-4p-5f scenario of the LBF environment, we set the experimental test step as 20M and the annealing time of ε as 200K. The setting of ε remains the same in our method and baseline algorithms.
For the setup of virtual roles and groups in our method, to ensure that the collaborative relationships between agents are sufficiently complementary, we require the number of agents $n$, the number of virtual roles $k$, and the number of groups $m$ to satisfy $n \geq k \geq m \geq 2$. Of course, as the plots of virtual role number versus win rate in our ablation experiments show, the number of virtual roles cannot be increased blindly: doing so not only brings a vast number of parameters but also makes complementary collaboration between agents impossible. The numbers of virtual roles we set for SMAC and LBF are shown in Table 2. For the update periods of virtual roles and groups, we have $T_V = e \cdot T_U$. To guarantee that the virtual roles and groups are updated normally within each episode, we require the update periods of virtual roles and groups to satisfy $T_U, T_V < \mathrm{len(episode)}$, and we set them based on the average episode length. In general, we use update periods of 5, 7, 8, 10, or 14 with $e = 1$ or $2$; these constraints are summarized in the check below.
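A small, illustrative sanity check of the constraints just stated; the helper name and the example values are hypothetical, not the paper's exact settings.

```python
def validate_sharing_config(n_agents, k_roles, m_groups, t_u, e, episode_len):
    """Check the constraints above: n >= k >= m >= 2 and T_U, T_V < len(episode)."""
    t_v = e * t_u                                    # T_V = e * T_U
    assert n_agents >= k_roles >= m_groups >= 2
    assert t_u < episode_len and t_v < episode_len
    return t_v

# Illustrative values only:
t_v = validate_sharing_config(n_agents=10, k_roles=3, m_groups=2, t_u=5, e=2, episode_len=80)
```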
Our experiments are run on Ubuntu 18.04 with an NVIDIA RTX 3090 GPU, an Intel i9-12900K CPU, and 128 GB of memory. The SMAC version is 4.10, the 15×15-4p-5f scenario in LBF is v2, and the GRF version is 2.10. We will open-source our algorithm and experimental environment code in due course.
1. What is the focus and contribution of the paper on dynamic parameter sharing?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its representation and selection mechanisms?
3. Do you have any questions or concerns regarding the method's ability to fall within the decentralized execution paradigm?
4. How do the results of the paper compare to previous works, specifically RODE?
5. Are there any issues with the ablation studies and their ability to justify the design choices?
6. How clear and well-motivated are the decisions made throughout the paper?
7. How novel is the work compared to other hierarchical MARL methods?
8. What is the reviewer's assessment of the paper's clarity, quality, novelty, and reproducibility?
Summary Of The Paper
This paper presents a method for dynamic parameter sharing over individual and group levels. Results show an improvement over standard baselines on two challenging benchmarks (Google Research Football and StarCraft).
Strengths And Weaknesses
Strengths
The method addresses a challenging and important problem.
The results seem to indicate that it is effective in improving upon standard baselines (e.g. QMIX).
Weaknesses
It's unclear what the VAE over "advantages" is representing, especially since the advantages are computed with agent utility values which are not Q-values in the traditional sense since they do not respect the Bellman equation. These agent-wise utilities are fed into a mixing network to produce a $Q_{tot}$ which is a true Q-value.
The choice to select the argmax of the VAE latent as the role (Eqn 3) seems unmotivated.
If I understand correctly, the group selector requires information from all agents (the set of all virtual roles). As such, the method does not fall within the decentralized execution paradigm as suggested by section 2.2, and this should be more explicitly stated.
The group selector appears to be a hierarchical controller which operates over a dilated time scale, but in order to learn it we would need to have a separate mixing network for the group-selection actions. This is vaguely alluded to in the text, but is not shown in Figure 1. Furthermore, the Q-functions in Eqn 8 would have to be defined over the domain of group-selection actions, $v$, but they are not, which is especially confusing since the equation is maximizing over those.
The results for RODE are not consistent with the literature. I haven't checked the others carefully, but those stood out since it thoroughly outperforms QMIX in the original paper.
The ablations aren't explained clearly. More granular ablations are required to justify the design decisions (e.g. using attention, VAE, etc.)
Questions
What is meant by "advantage information can better represent the current state of agents than the observation information"? If anything these filter out information about the agents' observations to only that which is relevant to the global expected returns.
Clarity, Quality, Novelty And Reproducibility
While the motivation for the work is clear, the motivation behind the decisions made throughout is not. The work seems reasonably novel, though it does share similarities with RODE and other hierarchical MARL methods. Regarding reproducibility, the architecture is quite complex and code has not been submitted, so it may be difficult to reproduce.
ICLR
Title
Variational Invariant Learning for Bayesian Domain Generalization
Abstract
Domain generalization addresses the out-of-distribution problem, which is challenging due to the domain shift and the uncertainty caused by the inaccessibility to data from the target domains. In this paper, we propose variational invariant learning, a probabilistic inference framework that jointly models domain invariance and uncertainty. We introduce variational Bayesian approximation into both the feature representation and classifier layers to facilitate invariant learning for better generalization across domains. In the probabilistic modeling framework, we introduce a domain-invariant principle to explore invariance across domains in a unified way. We incorporate the principle into the variational Bayesian layers in neural networks, achieving domain-invariant representations and classifier. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies demonstrate the benefits of our proposal and on all benchmarks our variational invariant learning consistently delivers state-of-the-art performance.
1 INTRODUCTION
Domain generalization (Muandet et al., 2013), as an out-of-distribution problem, aims to train a model on several source domains and have it generalize well to unseen target domains. The major challenge stems from the large distribution shift between the source and target domains, which is further complicated by the prediction uncertainty (Malinin & Gales, 2018) introduced by the inaccessibility to data from target domains during training. Previous approaches focus on learning domain-invariant features using novel loss functions (Muandet et al., 2013; Li et al., 2018a) or specific architectures (Li et al., 2017a; D’Innocente & Caputo, 2018). Meta-learning based methods were proposed to achieve similar goals by leveraging an episodic training strategy (Li et al., 2017b; Balaji et al., 2018; Du et al., 2020). Most of these methods are based on deep neural network backbones (Krizhevsky et al., 2012; He et al., 2016). However, while deep neural networks have achieved remarkable success in various vision tasks, their performance is known to degenerate considerably when the test samples are out of the training data distribution (Nguyen et al., 2015; Ilse et al., 2019), due to their poorly calibrated behavior (Guo et al., 2017; Kristiadi et al., 2020).
As an attractive solution, Bayesian learning naturally represents prediction uncertainty (Kristiadi et al., 2020; MacKay, 1992), possesses better generalizability to out-of-distribution examples (Louizos & Welling, 2017) and provides an elegant formulation to transfer knowledge across different datasets (Nguyen et al., 2018). Further, approximate Bayesian inference has been demonstrated to be able to improve prediction uncertainty (Blundell et al., 2015; Louizos & Welling, 2017; Atanov et al., 2019), even when only applied to the last network layer (Kristiadi et al., 2020). These properties make it appealing to introduce Bayesian learning into the challenging and unexplored scenario of domain generalization.
In this paper, we propose variational invariant learning (VIL), a Bayesian inference framework that jointly models domain invariance and uncertainty for domain generalization. We apply variational Bayesian approximation to the last two network layers, for both the representations and the classifier, by placing prior distributions over their weights, which facilitates generalization. We adapt Bayesian neural networks to domain generalization, enjoying the representational power of deep neural networks while facilitating better generalization. To further improve robustness to domain shifts, we introduce the domain-invariant principle under the Bayesian inference framework, which enables
us to explore domain invariance for both feature representations and the classifier in a unified way. We evaluate our method on four widely-used benchmarks for cross-domain visual object classification. Our ablation studies demonstrate the effectiveness of the variational Bayesian domain-invariant features and classifier for domain generalization. Results further show that our method achieves the best performance on all of the four benchmarks.
2 METHODOLOGY
We explore Bayesian inference for domain generalization. In this task, the samples from the target domains are never seen during training, and are usually out of the data distribution of the source domains. This leads to uncertainty when making predictions on the target domains. Bayesian inference offers a principled way to represent the predictive uncertainty in neural networks (MacKay, 1992; Kristiadi et al., 2020). We briefly introduce approximate Bayesian inference, under which we will introduce our variational invariant learning for domain generalization.
2.1 APPROXIMATE BAYESIAN INFERENCE
Given a dataset $\{x^{(i)}, y^{(i)}\}_{i=1}^N$ of $N$ input-output pairs and a model parameterized by weights $\theta$ with a prior distribution $p(\theta)$, Bayesian neural networks aim to infer the true posterior distribution $p(\theta|x, y)$. As exact inference of the true posterior is computationally intractable, Hinton & Camp (1993) and Graves (2011) recommended learning a variational distribution $q(\theta)$ to approximate $p(\theta|x, y)$ by minimizing the Kullback-Leibler (KL) divergence between them:
$$\theta^* = \arg\min_{\theta} D_{\mathrm{KL}}\big[q(\theta)\,\|\,p(\theta|x, y)\big]. \tag{1}$$
The above optimization is equivalent to minimizing the loss function:
$$\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\theta)}[\log p(y|x, \theta)] + D_{\mathrm{KL}}[q(\theta)\,\|\,p(\theta)], \tag{2}$$
which is also known as the negative of the evidence lower bound (ELBO) (Blei et al., 2017).
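For illustration, here is a minimal PyTorch sketch of a mean-field Bayesian linear layer trained by minimizing Eq. (2). It is a sketch under our own assumptions, not the paper's implementation: it uses a standard normal prior for a closed-form KL (the paper later adopts a scale mixture prior, Eq. 15), a single Monte Carlo weight sample, and a per-batch KL weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Mean-field Gaussian posterior over weights, trained via the negative ELBO."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.rho = nn.Parameter(torch.full((out_dim, in_dim), -5.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterized weight sample
        return F.linear(x, w)

    def kl_to_standard_normal(self):
        # Closed-form KL[q(theta) || N(0, I)] for a factorized Gaussian posterior.
        sigma2 = F.softplus(self.rho) ** 2
        return 0.5 * torch.sum(sigma2 + self.mu ** 2 - 1.0 - torch.log(sigma2))

def bayes_loss(layer, x, y):
    """One-sample Monte Carlo estimate of Eq. (2): expected NLL + KL regularizer."""
    logits = layer(x)
    return F.cross_entropy(logits, y) + layer.kl_to_standard_normal() / x.size(0)
```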
2.2 VARIATIONAL DOMAIN-INVARIANT LEARNING
In domain generalization, let $\mathcal{D} = \{\mathcal{D}_i\}_{i=1}^{|\mathcal{D}|} = \mathcal{S} \cup \mathcal{T}$ be a set of domains, where $\mathcal{S}$ and $\mathcal{T}$ denote the source and target domains, respectively. $\mathcal{S}$ and $\mathcal{T}$ do not overlap but share the same label space. For each domain $\mathcal{D}_i \in \mathcal{D}$, we can define a joint distribution $p(x_i, y_i)$ over the input space $\mathcal{X}$ and the output space $\mathcal{Y}$. We aim to learn a model $f: \mathcal{X} \to \mathcal{Y}$ on the source domains $\mathcal{S}$ that generalizes well to the target domains $\mathcal{T}$. The fundamental problem in domain generalization is robustness to the domain shift between source and target domains; that is, we aim to learn a model invariant to the distributional shift between them. In this work, we mainly focus on the invariance property across domains rather than general invariance properties (Nalisnick & Smyth, 2018). We therefore introduce a formal definition of domain invariance, which is easily incorporated as a criterion into the Bayesian framework to achieve domain-invariant learning.
Provided that all domains in $\mathcal{D}$ lie in the same domain space, then for any input sample $x_s$ in domain $\mathcal{D}_s$, we assume there exists a domain-transform function $g_\zeta(\cdot)$, defined as a mapping that projects $x_s$ to other domains $\mathcal{D}_\zeta$ with respect to the parameter $\zeta$, where $\zeta \sim q(\zeta)$ and different $\zeta$ lead to different post-transformation domains $\mathcal{D}_\zeta$. Usually the exact form of $g_\zeta(\cdot)$ is not known. Under this assumption, we introduce the definition of domain invariance, which we will incorporate into the Bayesian layers of neural networks for domain-invariant learning.
Definition 2.1 (Domain Invariance) Let $x_s$ be a given sample from domain $\mathcal{D}_s \in \mathcal{D}$, and $x_\zeta = g_\zeta(x_s)$ be a transformation of $x_s$ in another domain $\mathcal{D}_\zeta$, where $\zeta \sim q(\zeta)$. Let $p_\theta(y|x)$ denote the output distribution for input $x$ under model $\theta$. The model $\theta$ is domain-invariant if
$$p_\theta(y_s|x_s) = p_\theta(y_\zeta|x_\zeta), \quad \forall \zeta \sim q(\zeta). \tag{3}$$
Here, we use y to represent the output from a neural layer with input x, which can either be the prediction vector from the last layer or the feature vector from the convolutional layers.
To make the domain-invariant principle easier to implement, we extend Eq. (3) to an expectation form:
$$p_\theta(y_s|x_s) = \mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]. \tag{4}$$
Based on this definition, we use the Kullback-Leibler divergence between the two sides of Eq. (4), $D_{\mathrm{KL}}\big[p_\theta(y_s|x_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big]$, to quantify the domain invariance of the model; it is zero when the model is domain-invariant. In most cases, there is no analytical form of the domain-transform function and only a few samples from $\mathcal{D}_\zeta$ are available, which makes $\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]$ intractable. We therefore derive the following upper bound of the divergence:
$$D_{\mathrm{KL}}\big[p_\theta(y_s|x_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big] \leq \mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}\big[p_\theta(y_s|x_s)\,\|\,p_\theta(y_\zeta|x_\zeta)\big]\big], \tag{5}$$
which can be approximated by Monte Carlo sampling.
We define the complete objective function of our variational invariant learning by combining Eq. (5) with Eq. (2). However, in Bayesian inference the likelihood is obtained by taking an expectation over the distribution of the parameters $\theta$, i.e., $p_\theta(y|x) = \mathbb{E}_{q(\theta)}[p(y|x, \theta)]$, which is also intractable in Eq. (5). As the KL divergence is a convex function (Nalisnick & Smyth, 2018), we further extend Eq. (5) to the upper bound:
$$\mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}\big[p_\theta(y_s|x_s)\,\|\,p_\theta(y_\zeta|x_\zeta)\big]\big] \leq \mathbb{E}_{q_\zeta}\Big[\mathbb{E}_{q(\theta)} D_{\mathrm{KL}}\big[p(y_s|x_s, \theta)\,\|\,p(y_\zeta|x_\zeta, \theta)\big]\Big], \tag{6}$$
which is tractable with an unbiased Monte Carlo approximation. The complete derivations of Eq. (5) and Eq. (6) are provided in Appendix A.
In addition, it is worth noting that the domain-transformation distribution $q(\zeta)$ is implicit and inexpressible in practice, and only a limited number of domains are available. This problem is exacerbated because the target domain is unseen during training, which further limits the number of available domains. Moreover, in most domain generalization datasets, for a given sample $x_s$ from domain $\mathcal{D}_s$ there is no corresponding transformation of $x_s$ in other domains. This prevents the expectation with respect to $q_\zeta$ from being directly tractable in general.
Thus, we resort to an empirically tractable implementation and adopt an episodic setting as in (Li et al., 2019). In each episode, we choose one domain from the source domains $\mathcal{S}$ as the meta-source domain $\mathcal{D}_s$ and use the rest as the meta-target domains $\{\mathcal{D}_t\}_{t=1}^T$. To achieve variational invariant learning in the Bayesian framework, we use samples from the meta-target domains in the same category as $x_s$ to approximate the samples of $g_\zeta(x_s)$. We then obtain a general loss function for domain-invariant learning:
$$\mathcal{L}_I = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\theta)}\big[D_{\mathrm{KL}}\big[p(y_s|x_s, \theta)\,\|\,p(y_t^i|x_t^i, \theta)\big]\big], \tag{7}$$
where $\{x_t^i\}_{i=1}^N$ are from $\mathcal{D}_t$, denoting the samples in the same category as $x_s$. More details and an illustration of the domain-invariant loss function can be found in Appendix B.
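A Monte Carlo estimate of Eq. (7) for categorical predictive distributions can be sketched as follows; the function name and tensor shapes are our assumptions, not the paper's code.

```python
import torch

def invariance_loss(logp_s, logp_t):
    """Monte Carlo estimate of Eq. (7) with categorical predictive distributions.

    logp_s: (L, C) log p(y_s | x_s, theta^(l)), one row per posterior sample.
    logp_t: (T, N, L, C) log p(y_t^i | x_t^i, theta^(l)) for T meta-target
            domains and N same-class samples per domain.
    """
    p_s = logp_s.exp()                          # (L, C), broadcasts over (T, N)
    kl = (p_s * (logp_s - logp_t)).sum(dim=-1)  # KL[p_s || p_t] per (t, i, l)
    return kl.mean()                            # average over T, N, and L

# Usage with random stand-ins: 2 target domains, 3 samples each, 4 posterior draws, 5 classes
logp_s = torch.log_softmax(torch.randn(4, 5), dim=-1)
logp_t = torch.log_softmax(torch.randn(2, 3, 4, 5), dim=-1)
loss = invariance_loss(logp_s, logp_t)
```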
With the aforementioned loss functions, we develop the loss function of variational invariant learning for domain generalization:
$$\mathcal{L}_{\mathrm{VIL}} = \mathcal{L}_{\mathrm{Bayes}} + \lambda \mathcal{L}_I. \tag{8}$$
Our variational invariant learning combines the Bayesian framework, which introduces uncertainty into the network and is beneficial for out-of-distribution problems (Daxberger & Hernández-Lobato, 2019), with a domain-invariant loss function $\mathcal{L}_I$ designed over predictive distributions to make the model generalize better to unseen target domains. For Bayesian learning, it has been demonstrated that being just "a bit" Bayesian in the last layer of the neural network can well represent the uncertainty in predictions (Kristiadi et al., 2020). This indicates that applying the Bayesian treatment only to the last layer already brings sufficient benefits of Bayesian inference. Although adding Bayesian inference to more layers improves the performance, it also increases the computational cost. Further, from the perspective of domain invariance, making both
the classifier and feature extractor more robust to the domain shifts also leads to better performance (Li et al., 2019). Thus, there is a trade-off between the benefits of variational Bayesian domain invariance and computational efficiency. Instead of applying the Bayesian principle to all the layers of the neural network, in this work we explore domain invariance by applying it to only the classifier layer ψ and the last feature extraction layer φ.
In this case, the $\mathcal{L}_{\mathrm{Bayes}}$ in Eq. (2) becomes the ELBO with respect to $\psi$ and $\phi$ jointly. As they are independent, $\mathcal{L}_{\mathrm{Bayes}}$ is expressed as:
$$\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(y|x, \psi, \phi)] + D_{\mathrm{KL}}[q(\psi)\,\|\,p(\psi)] + D_{\mathrm{KL}}[q(\phi)\,\|\,p(\phi)]. \tag{9}$$
The above variational inference objective allows us to explore domain-invariant representations and classifier in a unified way. The detailed derivation of Eq. (9) is provided in Appendix A.
Domain-Invariant Classifier To establish the domain-invariant classifier, we directly incorporate the proposed domain-invariant principle into the last layer of the network, which gives rise to
$$\mathcal{L}_I(\psi) = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\psi)}\big[D_{\mathrm{KL}}\big[p(y_s|z_s, \psi)\,\|\,p(y_t^i|z_t^i, \psi)\big]\big], \tag{10}$$
where $z$ denotes the feature representation of input $x$, and the subscripts $s$ and $t$ indicate the meta-source and meta-target domains as in Eq. (7). Since $p(y|z, \psi)$ is a Bernoulli distribution, we can conveniently calculate the KL divergence in Eq. (10).
Domain-Invariant Representations To also make the representations domain invariant, we have
L_I(\phi) = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\phi)}\Big[D_{KL}\big[p(z_s|x_s,\phi)\,\|\,p(z_t^i|x_t^i,\phi)\big]\Big], \quad (11)
where φ are the parameters of the feature extractor. Since the feature extractor is also a Bayesian layer, the distribution p(z|x,φ) will be a factorized Gaussian if the posterior of φ is as well. We illustrate this as follows: let x be the input feature of a Bayesian layer φ with a factorized Gaussian posterior; then the posterior of the activation z of the Bayesian layer is also a factorized Gaussian (Kingma et al., 2015):
q(\phi_{i,j}) \sim \mathcal{N}(\mu_{i,j}, \sigma_{i,j}^2)\;\; \forall \phi_{i,j} \in \phi \;\Rightarrow\; p(z_j|x,\phi) \sim \mathcal{N}(\gamma_j, \delta_j^2), \quad \gamma_j = \sum_{i=1}^{N} x_i \mu_{i,j}, \quad \delta_j^2 = \sum_{i=1}^{N} x_i^2 \sigma_{i,j}^2, \quad (12)
where zj denotes the j-th element in z, likewise for xi, and φi,j denotes the element at the position (i, j) in φ. Based on this property of the Bayesian framework, we assume that the posterior of our variational invariant feature extractor has a factorized Gaussian distribution, which leads to an easier calculation of the KL divergence in Eq. (11). Note that with the domain-invariant representations, z in Eq. (10) corresponds to samples of the feature representation distributions: zs ∼ p(zs|xs,φ) and zt ∼ p(zt|xt,φ).
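To illustrate, a minimal sketch of the moments in Eq. (12), together with the closed-form diagonal-Gaussian KL needed in Eq. (11), could look as follows; this assumes a bias-free Bayesian linear layer, and the function names are illustrative rather than taken from the original implementation.

import torch

def bayesian_layer_moments(x, mu_w, log_var_w):
    """Eq. (12): with a factorized Gaussian posterior over the weights phi,
    the activation z is Gaussian with these moments.

    x:         [B, D_in]       inputs
    mu_w:      [D_in, D_out]   posterior means of phi
    log_var_w: [D_in, D_out]   posterior log-variances of phi
    """
    gamma = x @ mu_w                          # mean of z
    delta2 = (x ** 2) @ log_var_w.exp()       # variance of z
    return gamma, delta2

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL between factorized Gaussians, as used in Eq. (11)."""
    return 0.5 * (var_p / var_q
                  + (mu_q - mu_p) ** 2 / var_q
                  + var_q.log() - var_p.log() - 1.0).sum(dim=-1)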
2.3 OBJECTIVE FUNCTION
The objective function of our variational invariant learning is defined as:

L_{VIL} = L_{Bayes} + \lambda_\psi L_I(\psi) + \lambda_\phi L_I(\phi), \quad (13)
where λψ and λφ are hyperparameters to control the domain-invariant terms. We adopt Monte Carlo sampling and obtain the empirical objective function for variational invariant learning as follows:
L_{VIL} = \frac{1}{L}\sum_{\ell=1}^{L}\frac{1}{M}\sum_{m=1}^{M}\big[-\log p(y_s|x_s,\psi^{(\ell)},\phi^{(m)})\big] + D_{KL}\big[q(\psi)\,\|\,p(\psi)\big] + D_{KL}\big[q(\phi)\,\|\,p(\phi)\big]
        + \lambda_\psi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{L}\sum_{\ell=1}^{L} D_{KL}\big[p(y_s|z_s,\psi^{(\ell)})\,\|\,p(y_t^i|z_t^i,\psi^{(\ell)})\big]
        + \lambda_\phi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{m=1}^{M} D_{KL}\big[p(z_s|x_s,\phi^{(m)})\,\|\,p(z_t^i|x_t^i,\phi^{(m)})\big], \quad (14)
where x_s and z_s denote the input and its feature from D_s, respectively, and x_t^i and z_t^i are from D_t as in Eq. (7). The posteriors are set to factorized Gaussian distributions, i.e., q(ψ) = N(μ_ψ, σ_ψ^2) and q(φ) = N(μ_φ, σ_φ^2). We adopt the reparameterization trick to draw Monte Carlo samples (Kingma & Welling, 2014) as ψ^{(ℓ)} = μ_ψ + ε^{(ℓ)} ∗ σ_ψ, where ε^{(ℓ)} ∼ N(0, I). We draw the samples for φ^{(m)} in a similar way. In the implementation of our variational invariant learning, to increase the flexibility of the prior distribution in our Bayesian layers, we choose to place a scale mixture of two Gaussian distributions as the priors p(ψ) and p(φ) (Blundell et al., 2015):
\pi \mathcal{N}(0, \sigma_1^2) + (1-\pi)\,\mathcal{N}(0, \sigma_2^2), \quad (15)
where σ1, σ2 and π are hyperparameters chosen by cross-validation.
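As an illustration, the reparameterized sampling used in Eq. (14) and the log-density of the scale-mixture prior in Eq. (15) could be implemented as below; the softplus parameterization of σ is a common convention we assume here, not necessarily the authors' exact choice, and with a mixture prior the KL terms in Eq. (14) have no closed form, so they are typically estimated by Monte Carlo as log q(w) − log p(w) (Blundell et al., 2015).

import math
import torch

def sample_weights(mu, rho):
    """Reparameterization trick of Eq. (14): w = mu + eps * sigma, with
    sigma = softplus(rho) to keep it positive (an assumed parameterization)."""
    sigma = torch.nn.functional.softplus(rho)
    eps = torch.randn_like(mu)
    return mu + eps * sigma

def log_scale_mixture_prior(w, pi=0.5, sigma1=0.1, sigma2=1.5):
    """Log-density of the scale-mixture prior in Eq. (15), using the
    hyperparameters reported in the paper (pi = 0.5, sigma1 = 0.1, sigma2 = 1.5)."""
    def log_normal(w, sigma):
        return -0.5 * (w / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    comp1 = log_normal(w, sigma1) + math.log(pi)
    comp2 = log_normal(w, sigma2) + math.log(1.0 - pi)
    return torch.logsumexp(torch.stack([comp1, comp2]), dim=0).sum()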
3 RELATED WORK
One solution for domain generalization is to generate more source domain data to increase the probability of covering the data in the target domains (Shankar et al., 2018; Volpi et al., 2018). Shankar et al. (2018) augmented the data by perturbing the input images with adversarial gradients generated by an auxiliary classifier. Qiao et al. (2020) proposed a more challenging scenario of domain generalization named single domain generalization, which only has one source domain, and they designed an adversarial domain augmentation method to create “fictitious” yet “challenging” data. Recently, Zhou et al. (2020) employed a generator to synthesize data from pseudo-novel domains to augment the source domains, maximizing the distance between source and pseudo-novel domains as measured by optimal transport (Peyré et al., 2019). Another solution for domain generalization is based on learning domain-invariant features (D’Innocente & Caputo, 2018; Li et al., 2018b; 2017a). Muandet et al. (2013) proposed domain-invariant component analysis to learn invariant transformations by minimizing the dissimilarity across domains. Louizos et al. (2015) learned invariant representations with a variational autoencoder (Kingma & Welling, 2014), which introduced Bayesian inference into invariant feature learning. Dou et al. (2019) and Seo et al. (2019) pursued a similar goal by introducing two complementary losses, a global class alignment loss and a local sample clustering loss, and by employing multiple normalization methods, respectively. Li et al. (2019) proposed an episodic training algorithm to obtain both a domain-invariant feature extractor and classifier.
Recently, meta-learning based techniques have been considered to solve domain generalization problems. Li et al. (2018a) proposed a meta-learning domain generalization method which introduced the gradient-based model-agnostic meta-learning (Finn et al., 2017) to domain generalization. Balaji et al. (2018) addressed the domain generalization problem by learning a regularization function in a meta-learning framework, making the model robust to domain shifts. Du et al. (2020) proposed the meta variational information bottleneck to learn domain-invariant representations through episodic training.
To the best of our knowledge, Bayesian neural networks have not yet been explored in domain generalization. Our method introduces variational Bayesian approximation to both the feature extractor and classifier of the neural network in conjunction with the newly introduced domain-invariant principle for domain generalization. The resultant variational invariant learning combines the representational power of deep neural networks and variational Bayesian inference.
Similar to our proposal, CCSA by Motiian et al. (2017) also aligns representations across domains in the same class. Specifically, CCSA utilizes an L2 distance between deterministic features while we exploit Bayesian neural networks to learn domain-invariant representations by minimizing the distance between domain distributions. Theoretically, minimizing the distance between distributions incorporates larger inter-class variance than minimizing distance of deterministic features. Moreover, we apply our variational invariant learning to both the feature extractor and the classifier, while CCSA only considers an alignment loss on the feature representations.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We conduct our experiments on four widely used benchmarks in domain generalization:
PACS (Li et al., 2017a) consists of 9,991 images of seven classes from four domains - photo, art-painting, cartoon and sketch. We follow the “leave-one-out” protocol in (Li et al., 2017a; 2018b; Carlucci et al., 2019), where the model is trained on any three of the four domains, which we call source domains, and tested on the last (target) domain.
Office-Home (Venkateswara et al., 2017) also has four domains: art, clipart, product and real-world. There are about 15,500 images of 65 categories for object recognition in office and home environments. We use the same experimental protocol as for PACS.
Rotated MNIST and Fashion-MNIST were introduced for evaluating domain generalization in (Piratla et al., 2020). For a fair comparison, we follow their recommended settings and randomly select a subset of 2,000 images from MNIST and 10,000 images from Fashion-MNIST, which is considered to have been rotated by 0◦. The subset of images is then rotated by 15◦ through 75◦ in intervals of 15◦, creating five source domains. The target domains are created by rotation angles of 0◦ and 90◦. We use these two datasets to demonstrate the generalizability by comparing the performance on in-distribution and out-of-distribution data.
For all four benchmarks, we use ResNet-18 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009) as the CNN backbone. During training, we use Adam optimization (Kingma & Ba, 2014) with the learning rate set to 0.0001, and train for 10,000 iterations. In each iteration we choose one source domain as the meta-source domain. The batch size is 128. To limit the memory footprint, we choose a maximum number of samples per category per target domain to implement the domain-invariant learning, i.e., sixteen for the PACS, Rotated MNIST and Fashion-MNIST datasets, and four for the Office-Home dataset. We choose λφ and λψ based on the performance on the validation set; their influence is summarized in Fig. 3 in Appendix C. The optimal values of λφ and λψ are 0.1 and 100, respectively. Parameters σ1 and σ2 in Eq. (15) are set to 0.1 and 1.5. The model with the highest validation set accuracy is employed for evaluation on the target domain. All code will be made publicly available.
4.2 ABLATION STUDY
We conduct an ablation study to investigate the effectiveness of our variational invariant learning for domain generalization. The experiments are performed on the PACS dataset. Since the major contributions of this work are the Bayesian treatment and the domain-invariant principle, we evaluate their effect by individually incorporating them into the classifier - the last layer - ψ and the feature extractor - the penultimate layer - φ. The results are shown in Table 1. The “✓” and “×” in the “Bayesian” column denote whether the classifier ψ and feature extractor φ are Bayesian layers or deterministic layers. In the “Invariant” column they denote whether the domain-invariant loss is introduced into the classifier and the feature extractor. Note that the predictive distribution is a Bernoulli distribution, which also admits the domain-invariant loss; we therefore include this case for a comprehensive comparison.
In Table 1, the first four rows demonstrate the benefit of the Bayesian treatment. The first row (a) serves as a baseline model, which is a vanilla deep convolutional network without any Bayesian treatment and domain-invariant loss. The backbone is also a ResNet-18 pretrained on ImageNet. It is clear the Bayesian treatment, either for the classifier (b) or the feature extractor (c), improves the performance, especially in the “Art-painting” and “Sketch” domains, and this is further demonstrated in (d) where we employ the Bayesian classifier and feature extractor simultaneously.
The benefit of the domain-invariant principle for classifiers is demonstrated by comparing (e) to (a) and (f) to (b). The settings with domain invariance consistently perform better than those without it. A similar trend is also observed when applying the domain-invariant principle to the feature extractor, as shown by comparing (g) to (c). Overall, our variational invariant learning (h) achieves the best performance compared to other variants, demonstrating its effectiveness for domain generalization. Note that the feature distributions p(z|x) are unknown without Bayesian formalism, leading to an intractable LI(φ). Therefore, we do not conduct the experiment with only the domain-invariant loss on both the classifier and the feature extractor.
To further demonstrate the domain-invariant property of our method, we visualize the features learned by different settings of our method in Table 1. We use t-SNE (Maaten & Hinton, 2008)
to reduce the feature dimension into a two-dimensional subspace, following Du et al. (2020). The visualization is shown in Fig. 1. More visualization results are provided in Appendix E.
Figs. 1 (a), (b), (c) and (d) show the baseline, Bayesian classifier, Bayesian representations, and both Bayesian classifier and representations, demonstrating the benefits of Bayesian inference for learning domain-invariant features. The Bayesian treatment on both the classifier and the feature extractor enlarges the inter-class distance in all domains, which benefits the classification performance on the target domain, as shown in Fig. 1 (d). Figs. 1 (e), (f), (g), (h) are visualizations of feature representations after introducing the domain-invariant principle, compared to the upper row. Comparing the two figures in each column indicates that our domain-invariant principle imposed on either the representation or the classifier further enlarges the inter-class distances. At the same time, it reduces the distance between samples of the same class from different domains. This is even more apparent in the intra-class distance between samples from source and target domains. As a result, the inter-class distances in the target domain become larger, therefore improving performance. It is worth noting that the domain-invariant principle on the classifier in Fig. 1 (f) and on the feature extractor in Fig. 1 (g) both improve the domain-invariant features. Our variational invariant learning in Fig. 1 (h) therefore performs better by combining their benefits.
We also experiment with more layers in the feature extractor, see Table 2. “Bayesian φ′” and “Invariant φ′” denote whether the additional feature extraction layer φ′ has the Bayesian property and the domain-invariant property. The classifiers have both properties in all cases in Table 2. The first row is the setting with only one variational invariant layer in the feature extractor. When introducing another Bayesian learning layer φ′ without the domain-invariant property into the model, as shown in the second row of Table 2, the average performance improves slightly. If we introduce both Bayesian learning and domain-invariant learning into φ′, as shown in the third row, the overall performance declines a bit. One reason might be the information loss in feature representations caused by the excessive use of domain-invariant learning. In addition, due to the Bayesian inference and Monte Carlo sampling, more variational invariant layers lead to higher memory usage and more computation, which is also one reason for us to apply the variational invariant learning only to the last feature extraction layer and the classifier.
4.3 STATE-OF-THE-ART COMPARISON
In this section, we compare our method with several state-of-the-art methods on four datasets. The results are reported in Tables 3-5. The baselines on PACS (Table 3), Office-Home (Table 4), and Rotated MNIST and Fashion-MNIST (Table 5) are all based on the same vanilla deep convolutional ResNet-18 network, without any Bayesian treatment, the same as row (a) in Table 1.
On PACS, as shown in Table 3, our variational invariant learning method achieves the best overall performance. On each domain, our performance is competitive with the state-of-the-art and we exceed all other methods on the “Cartoon” domain. On Office-Home, as shown in Table 4, we again achieve the best recognition accuracy. It is worth mentioning that on the most challenging “Art” and “Clipart” domains, our variational invariant learning also delivers the highest performance, with a good improvement over previous methods.
L2A-OT and DSON outperform the proposed model on some domains of PACS and Office-Home. L2A-OT learns a generator to synthesize data from pseudo-novel domains to augment the source domains. The pseudo-novel domains often have characteristics similar to the source data. Thus, when the target data also have characteristics similar to the source domains, this pays off, as the
pseudo domains are more likely to cover the target domain, such as “Product” and “Real World” in Office-Home and “Photo” in PACS. When the test domain is different from all of the training domains, the performance suffers, e.g., “Clipart” in Office-Home and “Sketch” in PACS. Our method generates domain-invariant representations and classifiers, yielding competitive results across all domains and overall. DSON mixes batch and instance normalization for domain generalization. This tactic is effective on PACS, but less competitive on Office-Home. We attribute this to the larger number of categories on Office-Home, where instance normalization is known to make features less discriminative with respect to object categories (Seo et al., 2019). Our domain-invariant network makes feature distributions and predictive distributions similar across domains, resulting in good performance on both PACS and Office-Home.
On the Rotated MNIST and Fashion-MNIST datasets, following the experimental settings in (Piratla et al., 2020), we evaluate our method on the in-distribution and out-of-distribution sets. As shown in Table 5, our VIL achieves the best performance on both sets of the two datasets, surpassing other methods. Moreover, our method especially improves the classification performance on the out-of-distribution sets, demonstrating its strong generalizability to unseen domains, which is also consistent with the findings in Fig. 1.
5 CONCLUSION
In this work, we propose variational invariant learning (VIL), a variational Bayesian learning framework for domain generalization. We introduce Bayesian neural networks into the model, which is able to better represent uncertainty and enhance the generalization to out-of-distribution data. To handle the domain shift between source and target domains, we propose a domain-invariant principle under the variational inference framework, which is incorporated by establishing a domain-invariant feature extractor and classifier. Our variational invariant learning combines the representational power of deep neural networks and uncertainty modeling ability of Bayesian learning, showing great effectiveness for domain generalization. Extensive ablation studies demonstrate the benefits of the Bayesian inference and domain-invariant principle for domain generalization. Our variational invariant learning sets a new state-of-the-art on four domain generalization benchmarks.
A DERIVATION
A.1 DERIVATION OF THE UPPER BOUNDS OF DOMAIN-INVARIANT LEARNING
As E_{q_ζ}[p_θ(y_ζ|x_ζ)] is intractable in most cases, we derive the upper bound in Eq. (5) via Jensen’s inequality:
D_{KL}\big[p_\theta(y_s|x_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big] = \mathbb{E}_{p_\theta(y_s|x_s)}[\log p_\theta(y_s|x_s)] - \mathbb{E}_{p_\theta(y_s|x_s)}\big[\log \mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big]
\le \mathbb{E}_{p_\theta(y_s|x_s)}[\log p_\theta(y_s|x_s)] - \mathbb{E}_{p_\theta(y_s|x_s)}\mathbb{E}_{q_\zeta}[\log p_\theta(y_\zeta|x_\zeta)]
= \mathbb{E}_{q_\zeta}\big[D_{KL}\big[p_\theta(y_s|x_s)\,\|\,p_\theta(y_\zeta|x_\zeta)\big]\big]. \quad (16)

In Bayesian inference, computing the likelihood p_θ(y|x) = \int p(y|x,\theta)\, q(\theta)\, d\theta = \mathbb{E}_{q(\theta)}[p(y|x,\theta)] is notoriously difficult. Thus, since the KL divergence is a convex function, we obtain the upper bound in Eq. (6) via Jensen’s inequality, similar to Eq. (16):

\mathbb{E}_{q_\zeta}\big[D_{KL}\big[p_\theta(y_s|x_s)\,\|\,p_\theta(y_\zeta|x_\zeta)\big]\big] = \mathbb{E}_{q_\zeta}\big[D_{KL}\big[\mathbb{E}_{q(\theta)}[p(y_s|x_s,\theta)]\,\|\,\mathbb{E}_{q(\theta)}[p(y_\zeta|x_\zeta,\theta)]\big]\big]
\le \mathbb{E}_{q_\zeta}\big[\mathbb{E}_{q(\theta)}\, D_{KL}\big[p(y_s|x_s,\theta)\,\|\,p(y_\zeta|x_\zeta,\theta)\big]\big]. \quad (17)
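As a quick numerical sanity check (purely illustrative, not part of the proof), Jensen's inequality in Eq. (16) can be verified on toy Bernoulli distributions, where the mixture E_{q_ζ}[p_θ(y_ζ|x_ζ)] is again a Bernoulli with the averaged probability; Eq. (17) follows from the same convexity argument with θ in place of ζ.

import torch
from torch.distributions import Bernoulli, kl_divergence

torch.manual_seed(0)
p_s = Bernoulli(probs=torch.tensor(0.8))            # p_theta(y_s | x_s)
probs_z = 0.05 + 0.9 * torch.rand(1000)             # sampled p_theta(y_zeta | x_zeta)
lhs = kl_divergence(p_s, Bernoulli(probs=probs_z.mean()))   # KL to the mixture
rhs = kl_divergence(p_s, Bernoulli(probs=probs_z)).mean()   # expected KL
assert lhs <= rhs + 1e-6                            # Jensen's inequality, Eq. (16)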
A.2 DERIVATION OF VARIATIONAL BAYESIAN APPROXIMATION FOR REPRESENTATION (φ) AND CLASSIFIER (ψ) LAYERS.
We consider the model with two Bayesian layers φ and ψ as the last layer of feature extractor and the classifier respectively. The prior distribution of the model is p(φ,ψ), and the true posterior distribution is p(φ,ψ|x,y). Following the settings in Section 2.1, we need to learn a variational distribution q(φ,ψ) to approximate the true posterior by minimizing the KL divergence from q(φ,ψ) to p(φ,ψ|x,y):
\phi^*, \psi^* = \arg\min_{\phi,\psi} D_{KL}\big[q(\phi,\psi)\,\|\,p(\phi,\psi|x,y)\big]. \quad (18)
By applying Bayes' rule, p(φ,ψ|x,y) ∝ p(y|x,φ,ψ) p(φ,ψ), the optimization is equivalent to minimizing:
L_{Bayes} = \int q(\phi,\psi)\, \log \frac{q(\phi,\psi)}{p(\phi,\psi)\, p(y|x,\phi,\psi)}\, d\phi\, d\psi = D_{KL}\big[q(\phi,\psi)\,\|\,p(\phi,\psi)\big] - \mathbb{E}_{q(\phi,\psi)}\big[\log p(y|x,\phi,\psi)\big]. \quad (19)
Since φ and ψ are independent,
L_{Bayes} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(y|x,\psi,\phi)] + D_{KL}[q(\psi)\,\|\,p(\psi)] + D_{KL}[q(\phi)\,\|\,p(\phi)]. \quad (20)
B DETAILS OF DOMAIN-INVARIANT LOSS IN VIL TRAINING
We split the training phase of VIL into several episodes. In each episode, as shown in Fig. 2, we randomly choose a source domain as the meta-source domain D_s, and the rest of the source domains \{D_t\}_{t=1}^{T} are treated as the meta-target domains. From D_s, we randomly select a batch of samples x_s. For each x_s, we then select N samples \{x_t^i\}_{i=1}^{N}, which are in the same category as x_s, from each of the meta-target domains D_t. All of these samples are sent to the variational invariant feature extractor φ to get the representations z_s and \{z_t^i\}_{i=1}^{N}, which are then sent to the variational invariant classifier ψ to obtain the predictions y_s and \{y_t^i\}_{i=1}^{N}. We obtain the domain-invariant loss for the feature extractor, L_I(φ), by calculating the mean of the KL divergences between z_s and each z_t^i as in Eq. (11). The domain-invariant loss for the classifier, L_I(ψ), is calculated in a similar way on y_s and \{y_t^i\}_{i=1}^{N} as in Eq. (10).
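Putting the pieces together, one training episode could look like the following sketch, written for a single meta-source sample for readability; `sample` and `sample_same_class` are hypothetical dataset helpers, `phi` is assumed to return the Gaussian feature moments of Eq. (12), and the ELBO terms of Eq. (14) are omitted.

import random
import torch
from torch.distributions import Categorical, Normal, kl_divergence

def vil_episode(domains, phi, psi, lambda_phi, lambda_psi):
    """One VIL episode (Fig. 2), sketched for a single meta-source sample."""
    s = random.randrange(len(domains))
    x_s, y_s = domains[s].sample()                    # one meta-source sample
    mu_s, var_s = phi(x_s.unsqueeze(0))               # moments of p(z_s | x_s, phi)
    z_s = mu_s + var_s.sqrt() * torch.randn_like(mu_s)
    p_ys = Categorical(logits=psi(z_s))               # p(y_s | z_s, psi)

    loss_phi, loss_psi = 0.0, 0.0
    targets = [d for i, d in enumerate(domains) if i != s]
    for d in targets:                                 # the T meta-target domains
        x_t = d.sample_same_class(y_s)                # [N, ...] same-class samples
        mu_t, var_t = phi(x_t)                        # moments of p(z_t | x_t, phi)
        z_t = mu_t + var_t.sqrt() * torch.randn_like(mu_t)
        loss_phi = loss_phi + kl_divergence(
            Normal(mu_s, var_s.sqrt()), Normal(mu_t, var_t.sqrt())
        ).sum(-1).mean()                              # Eq. (11)
        loss_psi = loss_psi + kl_divergence(
            p_ys, Categorical(logits=psi(z_t))
        ).mean()                                      # Eq. (10)
    T = len(targets)
    return (lambda_phi * loss_phi + lambda_psi * loss_psi) / T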
C ABLATION STUDY FOR HYPERPARAMETERS
We also ablate the hyperparameters λφ, λψ and π on PACS with cartoon as the target domain. Results are shown in Fig. 3 (a), (b) and (c). We obtain Fig. 3 (a) by fixing λψ to 100 and adjusting λφ, Fig. 3 (b) by fixing λφ to 1 and adjusting λψ, and Fig. 3 (c) by adjusting π while fixing the other settings as in Section 4.1. λφ and λψ balance the influence of the Bayesian learning and the domain-invariant learning, and their optimal values are 1 and 100. If the values are too small, the model tends to overfit to the source domains, as the performance on target data drops more obviously than on validation data. In contrast, values that are too large harm the overall performance of the model, as there is an obvious decrease in accuracy on both validation and target data. Moreover, π balances the two components of the scale mixture prior of our Bayesian model. According to Blundell et al. (2015), the two components cause a prior density with a heavier tail while many weights concentrate tightly around zero. Both of them are important. The performance is best when π is 0.5 according to Fig. 3 (c), which demonstrates the effectiveness of the two components in the scale mixture prior.
D DETAILED ABLATION STUDY ON PACS
In addition to the aforementioned experiments, we conduct some supplementary experiments with other settings on PACS to further demonstrate the effectiveness of the Bayesian inference and domain-invariant loss, as shown in Table 6. The evaluated components are the same as in Table 1. For better comparison, we show the contents of Table 1 again in Table 6, and add three other settings with IDs (i), (j) and (k). Note that as the distribution of features z is unknown without a Bayesian feature extractor φ, the settings with L_I(φ) and a non-Bayesian feature extractor are intractable. Comparing (i) with (d), we find that applying Bayesian inference to the last layer of the feature extractor improves the overall performance and the classification accuracy on three of the four domains. Moreover, comparing (j) and (k) to (g) shows the benefits of introducing variational Bayes and the domain-invariant loss to the classifier on most of the domains and on average.
E EXTRA VISUALIZATION RESULTS
To further observe and analyze the benefits of the individual components of VIL for domain-invariant learning, we visualize the features of all categories from the target domain only in Fig. 4, and features of only one category from all domains in Fig. 5. As in Fig. 1, the visualization is conducted on the PACS dataset and the target domain is “art-painting”. The chosen category in Fig. 5 is “horse”.
Fig. 4 provides a more intuitive view of the benefits of the Bayesian framework and domain-invariant learning in our method for enlarging the inter-class distance in the target domain. The conclusion is similar to that of Fig. 1. From the figures in the first row, it is clear that the Bayesian framework, whether in the classifier (b) or the feature extractor (c), increases the inter-class distance compared with the baseline method (a). With both of them (d), the performance becomes better. Further, comparing the two figures in each column, the inter-class distance is also enlarged by introducing the domain-invariant principle into the setting of each figure in the first row. VIL (h) achieves the best performance by combining the benefits of both the Bayesian framework and the domain-invariant principle in both the feature extractor and the classifier.
Fig. 5 provides a deeper insight into the intra-class feature distributions of the same category from different domains. By introducing Bayesian inference into the model, the features trace out the manifold of the category, as shown in the first row (b), (c) and (d). This makes recognition easier. Indeed, the visualization of features from multiple categories has similar properties, as shown in Fig. 1. As shown in each column, introducing the domain-invariant learning into the model leads to a better mixture of features from different domains. The resultant domain-invariant representation makes the model more generalizable to unseen domains.
We also visualize the features in the Rotated MNIST and Fashion-MNIST datasets, as shown in Fig. 6. Different shapes denote different categories. Red samples denote features from the in-distribution set and blue samples denote features from the out-of-distribution set. Compared with the baseline, our method reduces the intra-class distance between samples from the in-distribution set and the out-of-distribution set and clusters the out-of-distribution samples of the same categories better, especially in the rotated Fashion-MNIST dataset. | 1. What is the novel approach introduced by the paper in addressing the domain generalization problem?
2. What are the strengths of the proposed variational Bayesian learning framework and domain-invariant principle?
3. What are the weaknesses of the paper regarding experimental comparisons and unclear parts?
4. How does the reviewer assess the significance and inspiration of the proposed method in related fields?
5. Do the authors provide sufficient explanations and discussions on the choice of settings and ablation studies? | Review | Review
########################
Summary:
The paper introduces a variational invariant learning approach with Bayesian approximation for domain generalization.
####################
Reason for score:
Overall, the paper is above the borderline. I like the idea of utilizing Bayesian variational learning to address the domain generalization problem, which is very novel and promising. My major concern is about some unclear parts described in the paper and insufficient experimental comparison (see cons below). Hopefully, the authors can address my concerns during the rebuttal period.
########################
Pros:
(1) The proposed variational Bayesian learning framework in the paper to represent uncertainty and enhance the generalization is reasonable and interesting, which can inspire other researchers in related fields.
(2) The introduced domain-invariant principle to establish a domain-invariant feature extractor and classifier seems promising, which can lead to an end-to-end framework with CNN and a Bayesian network.
(3) Extensive experimental results on four domain generalization benchmarks show the proposed method obtains a new state-of-the-art performance, which is appealing and convincing.
########################
Cons:
(1) In the paper, the authors claim that they only apply the domain invariance to the last feature extraction layer. In order to examine the relevance between the number of layers and the performance, an ablation study on the number of feature extraction layers should be added. In addition, a discussion of the efficiency of different numbers of layers would better justify the setting chosen in the work.
(2) In Table 1, what happens without the Bayesian treatment but with both invariant losses, i.e., only introducing the invariant loss into both the classifier and the feature extractor? I notice that this situation is missing in the experiments.
(3) As shown in Table 2 and Table 3, DSON and L2A-OT can outperform the proposed model in terms of certain metrics; it would be better if the authors could analyze the results and give some comparisons about that.
(4) As for the uncertainty estimated by the proposed Bayesian network, is there any threshold set in the model to determine which samples can be used in training? Moreover, how to utilize such estimated uncertainty should be discussed further.
########################
Questions during the rebuttal period:
Please address and clarify the cons above. Thank you! |
ICLR | Title
Variational Invariant Learning for Bayesian Domain Generalization
Abstract
Domain generalization addresses the out-of-distribution problem, which is challenging due to the domain shift and the uncertainty caused by the inaccessibility to data from the target domains. In this paper, we propose variational invariant learning, a probabilistic inference framework that jointly models domain invariance and uncertainty. We introduce variational Bayesian approximation into both the feature representation and classifier layers to facilitate invariant learning for better generalization across domains. In the probabilistic modeling framework, we introduce a domain-invariant principle to explore invariance across domains in a unified way. We incorporate the principle into the variational Bayesian layers in neural networks, achieving domain-invariant representations and classifier. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies demonstrate the benefits of our proposal and on all benchmarks our variational invariant learning consistently delivers state-of-the-art performance.
1 INTRODUCTION
Domain generalization (Muandet et al., 2013), as an out-of-distribution problem, aims to train a model on several source domains and have it generalize well to unseen target domains. The major challenge stems from the large distribution shift between the source and target domains, which is further complicated by the prediction uncertainty (Malinin & Gales, 2018) introduced by the inaccessibility to data from target domains during training. Previous approaches focus on learning domain-invariant features using novel loss functions (Muandet et al., 2013; Li et al., 2018a) or specific architectures (Li et al., 2017a; D’Innocente & Caputo, 2018). Meta-learning based methods were proposed to achieve similar goals by leveraging an episodic training strategy (Li et al., 2017b; Balaji et al., 2018; Du et al., 2020). Most of these methods are based on deep neural network backbones (Krizhevsky et al., 2012; He et al., 2016). However, while deep neural networks have achieved remarkable success in various vision tasks, their performance is known to degenerate considerably when the test samples are out of the training data distribution (Nguyen et al., 2015; Ilse et al., 2019), due to their poorly calibrated behavior (Guo et al., 2017; Kristiadi et al., 2020).
As an attractive solution, Bayesian learning naturally represents prediction uncertainty (Kristiadi et al., 2020; MacKay, 1992), possesses better generalizability to out-of-distribution examples (Louizos & Welling, 2017) and provides an elegant formulation to transfer knowledge across different datasets (Nguyen et al., 2018). Further, approximate Bayesian inference has been demonstrated to be able to improve prediction uncertainty (Blundell et al., 2015; Louizos & Welling, 2017; Atanov et al., 2019), even when only applied to the last network layer (Kristiadi et al., 2020). These properties make it appealing to introduce Bayesian learning into the challenging and unexplored scenario of domain generalization.
In this paper, we propose variational invariant learning (VIL), a Bayesian inference framework that jointly models domain invariance and uncertainty for domain generalization. We apply variational Bayesian approximation to the last two network layers for both the representations and classifier by placing prior distributions over their weights, which facilitates generalization. We adapt Bayesian neural networks to domain generalization, enjoying the representational power of deep neural networks while facilitating better generalization. To further improve the robustness to domain shifts, we introduce the domain-invariant principle under the Bayesian inference framework, which enables
us to explore domain invariance for both feature representations and the classifier in a unified way. We evaluate our method on four widely-used benchmarks for cross-domain visual object classification. Our ablation studies demonstrate the effectiveness of the variational Bayesian domain-invariant features and classifier for domain generalization. Results further show that our method achieves the best performance on all of the four benchmarks.
2 METHODOLOGY
We explore Bayesian inference for domain generalization. In this task, the samples from the target domains are never seen during training, and are usually out of the data distribution of the source domains. This leads to uncertainty when making predictions on the target domains. Bayesian inference offers a principled way to represent the predictive uncertainty in neural networks (MacKay, 1992; Kristiadi et al., 2020). We briefly introduce approximate Bayesian inference, under which we will introduce our variational invariant learning for domain generalization.
2.1 APPROXIMATE BAYESIAN INFERENCE
Given a dataset \{(x^{(i)}, y^{(i)})\}_{i=1}^{N} of N input-output pairs and a model parameterized by weights θ with a prior distribution p(θ), Bayesian neural networks aim to infer the true posterior distribution p(θ|x,y). As exact inference of the true posterior is computationally intractable, Hinton & Camp (1993) and Graves (2011) recommended learning a variational distribution q(θ) to approximate p(θ|x,y) by minimizing the Kullback-Leibler (KL) divergence between them:
\theta^* = \arg\min_{\theta} D_{KL}\big[q(\theta)\,\|\,p(\theta|x,y)\big]. \quad (1)
The above optimization is equivalent to minimizing the loss function:
LBayes = −Eq(θ)[log p(y|x,θ)] + DKL[q(θ)||p(θ)], (2)
which is also known as the negative value of the evidence lower bound (ELBO) (Blei et al., 2017).
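For illustration, a minimal Monte Carlo estimate of this negative ELBO could look as follows; `sample_forward` and `kl` are assumed hooks on a Bayesian module, not a standard PyTorch API.

import torch
import torch.nn.functional as F

def neg_elbo(model, x, y, n_samples=4):
    """Monte Carlo estimate of Eq. (2): E_q[-log p(y|x,theta)] + KL[q||p].
    `model.sample_forward` draws theta ~ q(theta) and returns logits, and
    `model.kl` returns KL[q(theta)||p(theta)] -- both are assumed hooks."""
    nll = 0.0
    for _ in range(n_samples):
        logits = model.sample_forward(x)          # one draw of theta
        nll = nll + F.cross_entropy(logits, y)
    return nll / n_samples + model.kl()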
2.2 VARIATIONAL DOMAIN-INVARIANT LEARNING
In domain generalization, let D = \{D_i\}_{i=1}^{|D|} = S ∪ T be a set of domains, where S and T denote the source and target domains, respectively. S and T do not have any overlap with each other but share the same label space. For each domain D_i ∈ D, we can define a joint distribution p(x_i, y_i) in the input space X and the output space Y. We aim to learn a model f: X → Y on the source domains S that generalizes well to the target domains T. The fundamental problem in domain generalization is to achieve robustness to the domain shift between source and target domains; that is, we aim to learn a model invariant to the distributional shift between them. In this work, we mainly focus on the invariance property across domains instead of exploring general invariance properties (Nalisnick & Smyth, 2018). Therefore, we introduce a formal definition of domain invariance, which is easily incorporated as a criterion into the Bayesian framework to achieve domain-invariant learning.
Provided that all domains in D lie in the same domain space, for any input sample x_s in domain D_s we assume that there exists a domain-transform function g_ζ(·), defined as a mapping that projects x_s to a different domain D_ζ with respect to the parameter ζ, where ζ ∼ q(ζ) and different values of ζ lead to different post-transformation domains D_ζ. The exact form of g_ζ(·) is usually not known. Under this assumption, we introduce the definition of domain invariance, which we will incorporate into the Bayesian layers of neural networks for domain-invariant learning.
Definition 2.1 (Domain Invariance) Let xs be a given sample from domain Ds ∈ D, and xζ = gζ(xs) be a transformation of xs in another domain Dζ , where ζ ∼ q(ζ). pθ(y|x) denotes the output distribution of input x with model θ. The model θ is domain-invariant if,
pθ(ys|xs) = pθ(yζ |xζ), ∀ζ ∼ q(ζ). (3)
Here, we use y to represent the output from a neural layer with input x, which can either be the prediction vector from the last layer or the feature vector from the convolutional layers.
To make the domain-invariant principle easier to implement, we extend Eq. (3) to an expectation form:
pθ(ys|xs) = Eqζ [pθ(yζ |xζ)]. (4)
Based on this definition, we use the Kullback-Leibler divergence between the two terms in Eq. (4), D_{KL}[p_θ(y_s|x_s) || E_{q_ζ}[p_θ(y_ζ|x_ζ)]], to quantify the domain invariance of the model, which will be zero when the model is domain-invariant. As in most cases there is no analytical form of the domain-transform function and only a few samples from D_ζ are available, E_{q_ζ}[p_θ(y_ζ|x_ζ)] is intractable. Thus, we derive the following upper bound of the divergence:
D_{KL}\big[p_\theta(y_s|x_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big] \le \mathbb{E}_{q_\zeta}\big[D_{KL}\big[p_\theta(y_s|x_s)\,\|\,p_\theta(y_\zeta|x_\zeta)\big]\big], \quad (5)
which can be approximated by Monte Carlo sampling.
We define the complete objective function of our variational invariant learning by combining Eq. (5) with Eq. (2). However, in Bayesian inference the likelihood is obtained by taking the expectation over the distribution of the parameters θ, i.e., p_θ(y|x) = E_{q(θ)}[p(y|x,θ)], which is also intractable in Eq. (5). As the KL divergence is a convex function (Nalisnick & Smyth, 2018), we further extend Eq. (5) to an upper bound:

\mathbb{E}_{q_\zeta}\big[D_{KL}\big[p_\theta(y_s|x_s)\,\|\,p_\theta(y_\zeta|x_\zeta)\big]\big] \le \mathbb{E}_{q_\zeta}\big[\mathbb{E}_{q(\theta)}\, D_{KL}\big[p(y_s|x_s,\theta)\,\|\,p(y_\zeta|x_\zeta,\theta)\big]\big], \quad (6)
which is tractable with the unbiased Monte Carlo approximation. The complete derivations of Eq. (5) and Eq. (6) are provided in Appendix A.
In addition, it is worth noting that the domain-transformation distribution q(ζ) is implicit and inexpressible in reality and there are only a limited number of domains available in practice. This problem is exacerbated because the target domain is unseen during training, which further limits the number of available domains. Moreover, in most of the domain generalization databases, for a certain sample xs from domain Ds, there is no transformation corresponding to xs in other domains. This prevents the expectation with respect to qζ from being directly tractable in general.
Thus, we resort to an empirically tractable implementation and adopt an episodic setting as in (Li et al., 2019). In each episode, we choose one domain from the source domains S as the meta-source domain D_s and the rest are used as the meta-target domains \{D_t\}_{t=1}^{T}. To achieve variational invariant learning in the Bayesian framework, we use samples from the meta-target domains in the same category as x_s to approximate samples of g_ζ(x_s). Then we obtain a general loss function for domain-invariant learning:
L_I = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\theta)}\Big[D_{KL}\big[p(y_s|x_s,\theta)\,\|\,p(y_t^i|x_t^i,\theta)\big]\Big], \quad (7)

where \{x_t^i\}_{i=1}^{N} are from D_t, denoting the samples in the same category as x_s. More details and an illustration of the domain-invariant loss function can be found in Appendix B.
With the aforementioned loss functions, we develop the loss function of variational invariant learning for domain generalization:

L_{VIL} = L_{Bayes} + \lambda L_I. \quad (8)

Our variational invariant learning combines the Bayesian framework, which is able to introduce uncertainty into the network and is beneficial for out-of-distribution problems (Daxberger & Hernández-Lobato, 2019), and a domain-invariant loss function L_I, which is designed based on predictive distributions to make the model generalize better to the unseen target domains. For Bayesian learning, it has been demonstrated that being just “a bit” Bayesian in the last layer of the neural network can well represent the uncertainty in predictions (Kristiadi et al., 2020). This indicates that applying the Bayesian treatment only to the last layer already brings sufficient benefits of Bayesian inference. Although adding Bayesian inference to more layers improves the performance, it also increases the computational cost. Further, from the perspective of domain invariance, making both
the classifier and feature extractor more robust to the domain shifts also leads to better performance (Li et al., 2019). Thus, there is a trade-off between the benefits of variational Bayesian domain invariance and computational efficiency. Instead of applying the Bayesian principle to all the layers of the neural network, in this work we explore domain invariance by applying it to only the classifier layer ψ and the last feature extraction layer φ.
In this case, L_Bayes in Eq. (2) becomes the ELBO with respect to ψ and φ jointly. As they are independent, L_Bayes is expressed as:

L_{Bayes} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(y|x,\psi,\phi)] + D_{KL}[q(\psi)\,\|\,p(\psi)] + D_{KL}[q(\phi)\,\|\,p(\phi)]. \quad (9)

The above variational inference objective allows us to explore domain-invariant representations and a domain-invariant classifier in a unified way. The detailed derivation of Eq. (9) is provided in Appendix A.
Domain-Invariant Classifier To establish the domain-invariant classifier, we directly incorporate the proposed domain-invariant principle into the last layer of the network, which gives rise to
L_I(\psi) = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\psi)}\Big[D_{KL}\big[p(y_s|z_s,\psi)\,\|\,p(y_t^i|z_t^i,\psi)\big]\Big], \quad (10)
where z denotes the feature representation of input x, and the subscripts s and t indicate the meta-source domain and the meta-target domains as in Eq. (7). Since p(y|z,ψ) is a Bernoulli distribution, we can conveniently calculate the KL divergence in Eq. (10).
Domain-Invariant Representations To also make the representations domain invariant, we have
L_I(\phi) = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\phi)}\Big[D_{KL}\big[p(z_s|x_s,\phi)\,\|\,p(z_t^i|x_t^i,\phi)\big]\Big], \quad (11)
where φ are the parameters of the feature extractor. Since the feature extractor is also a Bayesian layer, the distribution p(z|x,φ) will be a factorized Gaussian if the posterior of φ is as well. We illustrate this as follows: let x be the input feature of a Bayesian layer φ with a factorized Gaussian posterior; then the posterior of the activation z of the Bayesian layer is also a factorized Gaussian (Kingma et al., 2015):
q(\phi_{i,j}) \sim \mathcal{N}(\mu_{i,j}, \sigma_{i,j}^2)\;\; \forall \phi_{i,j} \in \phi \;\Rightarrow\; p(z_j|x,\phi) \sim \mathcal{N}(\gamma_j, \delta_j^2), \quad \gamma_j = \sum_{i=1}^{N} x_i \mu_{i,j}, \quad \delta_j^2 = \sum_{i=1}^{N} x_i^2 \sigma_{i,j}^2, \quad (12)
where zj denotes the j-th element in z, likewise for xi, and φi,j denotes the element at the position (i, j) in φ. Based on this property of the Bayesian framework, we assume that the posterior of our variational invariant feature extractor has a factorized Gaussian distribution, which leads to an easier calculation of the KL divergence in Eq. (11). Note that with the domain-invariant representations, z in Eq. (10) corresponds to samples of the feature representation distributions: zs ∼ p(zs|xs,φ) and zt ∼ p(zt|xt,φ).
2.3 OBJECTIVE FUNCTION
The objective function of our variational invariant learning is defined as:

L_{VIL} = L_{Bayes} + \lambda_\psi L_I(\psi) + \lambda_\phi L_I(\phi), \quad (13)
where λψ and λφ are hyperparameters to control the domain-invariant terms. We adopt Monte Carlo sampling and obtain the empirical objective function for variational invariant learning as follows:
L_{VIL} = \frac{1}{L}\sum_{\ell=1}^{L}\frac{1}{M}\sum_{m=1}^{M}\big[-\log p(y_s|x_s,\psi^{(\ell)},\phi^{(m)})\big] + D_{KL}\big[q(\psi)\,\|\,p(\psi)\big] + D_{KL}\big[q(\phi)\,\|\,p(\phi)\big]
        + \lambda_\psi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{L}\sum_{\ell=1}^{L} D_{KL}\big[p(y_s|z_s,\psi^{(\ell)})\,\|\,p(y_t^i|z_t^i,\psi^{(\ell)})\big]
        + \lambda_\phi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{m=1}^{M} D_{KL}\big[p(z_s|x_s,\phi^{(m)})\,\|\,p(z_t^i|x_t^i,\phi^{(m)})\big], \quad (14)
where x_s and z_s denote the input and its feature from D_s, respectively, and x_t^i and z_t^i are from D_t as in Eq. (7). The posteriors are set to factorized Gaussian distributions, i.e., q(ψ) = N(μ_ψ, σ_ψ^2) and q(φ) = N(μ_φ, σ_φ^2). We adopt the reparameterization trick to draw Monte Carlo samples (Kingma & Welling, 2014) as ψ^{(ℓ)} = μ_ψ + ε^{(ℓ)} ∗ σ_ψ, where ε^{(ℓ)} ∼ N(0, I). We draw the samples for φ^{(m)} in a similar way. In the implementation of our variational invariant learning, to increase the flexibility of the prior distribution in our Bayesian layers, we choose to place a scale mixture of two Gaussian distributions as the priors p(ψ) and p(φ) (Blundell et al., 2015):
\pi \mathcal{N}(0, \sigma_1^2) + (1-\pi)\,\mathcal{N}(0, \sigma_2^2), \quad (15)
where σ1, σ2 and π are hyperparameters chosen by cross-validation.
3 RELATED WORK
One solution for domain generalization is to generate more source domain data to increase the probability of covering the data in the target domains (Shankar et al., 2018; Volpi et al., 2018). Shankar et al. (2018) augmented the data by perturbing the input images with adversarial gradients generated by an auxiliary classifier. Qiao et al. (2020) proposed a more challenging scenario of domain generalization named single domain generalization, which only has one source domain, and they designed an adversarial domain augmentation method to create “fictitious” yet “challenging” data. Recently, Zhou et al. (2020) employed a generator to synthesize data from pseudo-novel domains to augment the source domains, maximizing the distance between source and pseudo-novel domains as measured by optimal transport (Peyré et al., 2019). Another solution for domain generalization is based on learning domain-invariant features (D’Innocente & Caputo, 2018; Li et al., 2018b; 2017a). Muandet et al. (2013) proposed domain-invariant component analysis to learn invariant transformations by minimizing the dissimilarity across domains. Louizos et al. (2015) learned invariant representations with a variational autoencoder (Kingma & Welling, 2014), which introduced Bayesian inference into invariant feature learning. Dou et al. (2019) and Seo et al. (2019) pursued a similar goal by introducing two complementary losses, a global class alignment loss and a local sample clustering loss, and by employing multiple normalization methods, respectively. Li et al. (2019) proposed an episodic training algorithm to obtain both a domain-invariant feature extractor and classifier.
Recently, meta-learning based techniques have been considered to solve domain generalization problems. Li et al. (2018a) proposed a meta-learning domain generalization method which introduced the gradient-based model-agnostic meta-learning (Finn et al., 2017) to domain generalization. Balaji et al. (2018) addressed the domain generalization problem by learning a regularization function in a meta-learning framework, making the model robust to domain shifts. Du et al. (2020) proposed the meta variational information bottleneck to learn domain-invariant representations through episodic training.
To the best of our knowledge, Bayesian neural networks have not yet been explored in domain generalization. Our method introduces variational Bayesian approximation to both the feature extractor and classifier of the neural network in conjunction with the newly introduced domain-invariant principle for domain generalization. The resultant variational invariant learning combines the representational power of deep neural networks and variational Bayesian inference.
Similar to our proposal, CCSA by Motiian et al. (2017) also aligns representations across domains in the same class. Specifically, CCSA utilizes an L2 distance between deterministic features while we exploit Bayesian neural networks to learn domain-invariant representations by minimizing the distance between domain distributions. Theoretically, minimizing the distance between distributions incorporates larger inter-class variance than minimizing distance of deterministic features. Moreover, we apply our variational invariant learning to both the feature extractor and the classifier, while CCSA only considers an alignment loss on the feature representations.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We conduct our experiments on four widely used benchmarks in domain generalization:
PACS (Li et al., 2017a) consists of 9,991 images of seven classes from four domains - photo, art-painting, cartoon and sketch. We follow the “leave-one-out” protocol in (Li et al., 2017a; 2018b; Carlucci et al., 2019), where the model is trained on any three of the four domains, which we call source domains, and tested on the last (target) domain.
Office-Home (Venkateswara et al., 2017) also has four domains: art, clipart, product and real-world. There are about 15,500 images of 65 categories for object recognition in office and home environments. We use the same experimental protocol as for PACS.
Rotated MNIST and Fashion-MNIST were introduced for evaluating domain generalization in (Piratla et al., 2020). For a fair comparison, we follow their recommended settings and randomly select a subset of 2,000 images from MNIST and 10,000 images from Fashion-MNIST, which is considered to have been rotated by 0◦. The subset of images is then rotated by 15◦ through 75◦ in intervals of 15◦, creating five source domains. The target domains are created by rotation angles of 0◦ and 90◦. We use these two datasets to demonstrate the generalizability by comparing the performance on in-distribution and out-of-distribution data.
For all four benchmarks, we use ResNet-18 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009) as the CNN backbone. During training, we use Adam optimization (Kingma & Ba, 2014) with the learning rate set to 0.0001, and train for 10,000 iterations. In each iteration we choose one source domain as the meta-source domain. The batch size is 128. To limit the memory footprint, we choose a maximum number of samples per category per target domain to implement the domain-invariant learning, i.e., sixteen for the PACS, Rotated MNIST and Fashion-MNIST datasets, and four for the Office-Home dataset. We choose λφ and λψ based on the performance on the validation set; their influence is summarized in Fig. 3 in Appendix C. The optimal values of λφ and λψ are 0.1 and 100, respectively. Parameters σ1 and σ2 in Eq. (15) are set to 0.1 and 1.5. The model with the highest validation set accuracy is employed for evaluation on the target domain. All code will be made publicly available.
4.2 ABLATION STUDY
We conduct an ablation study to investigate the effectiveness of our variational invariant learning for domain generalization. The experiments are performed on the PACS dataset. Since the major contributions of this work are the Bayesian treatment and the domain-invariant principle, we evaluate their effect by individually incorporating them into the classifier - the last layer - ψ and the feature extractor - the penultimate layer - φ. The results are shown in Table 1. The “✓” and “×” in the “Bayesian” column denote whether the classifier ψ and feature extractor φ are Bayesian layers or deterministic layers. In the “Invariant” column they denote whether the domain-invariant loss is introduced into the classifier and the feature extractor. Note that the predictive distribution is a Bernoulli distribution, which also admits the domain-invariant loss; we therefore include this case for a comprehensive comparison.
In Table 1, the first four rows demonstrate the benefit of the Bayesian treatment. The first row (a) serves as a baseline model, which is a vanilla deep convolutional network without any Bayesian treatment and domain-invariant loss. The backbone is also a ResNet-18 pretrained on ImageNet. It is clear the Bayesian treatment, either for the classifier (b) or the feature extractor (c), improves the performance, especially in the “Art-painting” and “Sketch” domains, and this is further demonstrated in (d) where we employ the Bayesian classifier and feature extractor simultaneously.
The benefit of the domain-invariant principle for classifiers is demonstrated by comparing (e) to (a) and (f) to (b). The settings with domain invariance consistently perform better than those without it. A similar trend is also observed when applying the domain-invariant principle to the feature extractor, as shown by comparing (g) to (c). Overall, our variational invariant learning (h) achieves the best performance compared to other variants, demonstrating its effectiveness for domain generalization. Note that the feature distributions p(z|x) are unknown without Bayesian formalism, leading to an intractable LI(φ). Therefore, we do not conduct the experiment with only the domain-invariant loss on both the classifier and the feature extractor.
To further demonstrate the domain-invariant property of our method, we visualize the features learned by different settings of our method in Table 1. We use t-SNE (Maaten & Hinton, 2008)
to reduce the feature dimension into a two-dimensional subspace, following Du et al. (2020). The visualization is shown in Fig. 1. More visualization results are provided in Appendix E.
Figs. 1 (a), (b), (c) and (d) show the baseline, Bayesian classifier, Bayesian representations, and both Bayesian classifier and representations, demonstrating the benefits of Bayesian inference for learning domain-invariant features. The Bayesian treatment on both the classifier and the feature extractor enlarges the inter-class distance in all domains, which benefits the classification performance on the target domain, as shown in Fig. 1 (d). Figs. 1 (e), (f), (g), (h) are visualizations of feature representations after introducing the domain-invariant principle, compared to the upper row. Comparing the two figures in each column indicates that our domain-invariant principle imposed on either the representation or the classifier further enlarges the inter-class distances. At the same time, it reduces the distance between samples of the same class from different domains. This is even more apparent in the intra-class distance between samples from source and target domains. As a result, the inter-class distances in the target domain become larger, therefore improving performance. It is worth noting that the domain-invariant principle on the classifier in Fig. 1 (f) and on the feature extractor in Fig. 1 (g) both improve the domain-invariant features. Our variational invariant learning in Fig. 1 (h) therefore performs better by combining their benefits.
We also experiment with more layers in the feature extractor, see Table 2. “Bayesian φ′” and “Invariant φ′” denote whether the additional feature extraction layer φ′ has the Bayesian property and the domain-invariant property. The classifiers have both properties in all cases in Table 2. The first row is the setting with only one variational invariant layer in the feature extractor. When introducing another Bayesian learning layer φ′ without the domain-invariant property into the model, as shown in the second row of Table 2, the average performance improves slightly. If we introduce both Bayesian learning and domain-invariant learning into φ′, as shown in the third row, the overall performance declines a bit. One reason might be the information loss in feature representations caused by the excessive use of domain-invariant learning. In addition, due to the Bayesian inference and Monte Carlo sampling, more variational invariant layers lead to higher memory usage and more computation, which is also one reason for us to apply the variational invariant learning only to the last feature extraction layer and the classifier.
4.3 STATE-OF-THE-ART COMPARISON
In this section, we compare our method with several state-of-the-art methods on four datasets. The results are reported in Tables 3-5. The baselines on PACS (Table 3), Office-Home (Table 4), and Rotated MNIST and Fashion-MNIST (Table 5) are all based on the same vanilla deep convolutional ResNet-18 network, without any Bayesian treatment, the same as row (a) in Table 1.
On PACS, as shown in Table 3, our variational invariant learning method achieves the best overall performance. On each domain, our performance is competitive with the state-of-the-art and we exceed all other methods on the “Cartoon” domain. On Office-Home, as shown in Table 4, we again achieve the best recognition accuracy. It is worth mentioning that on the most challenging “Art” and “Clipart” domains, our variational invariant learning also delivers the highest performance, with a good improvement over previous methods.
L2A-OT and DSON outperform the proposed model on some domains of PACS and Office-Home. L2A-OT learns a generator to synthesize data from pseudo-novel domains to augment the source domains. The pseudo-novel domains often have characteristics similar to the source data. Thus, when the target data also have characteristics similar to the source domains, this pays off, as the
pseudo domains are more likely to cover the target domain, such as “Product” and “Real World” in Office-Home and “Photo” in PACS. When the test domain is different from all of the training domains, the performance suffers, e.g., “Clipart” in Office-Home and “Sketch” in PACS. Our method generates domain-invariant representations and classifiers, yielding competitive results across all domains and overall. DSON mixes batch and instance normalization for domain generalization. This tactic is effective on PACS, but less competitive on Office-Home. We attribute this to the larger number of categories on Office-Home, where instance normalization is known to make features less discriminative with respect to object categories (Seo et al., 2019). Our domain-invariant network makes feature distributions and predictive distributions similar across domains, resulting in good performance on both PACS and Office-Home.
On the Rotated MNIST and Fashion-MNIST datasets, following the experimental settings in (Piratla et al., 2020), we evaluate our method on the in-distribution and out-of-distribution sets. As shown in Table 5, our VIL achieves the best performance on both sets of the two datasets, surpassing other methods. Moreover, our method especially improves the classification performance on the out-of-distribution sets, demonstrating its strong generalizability to unseen domains, which is also consistent with the findings in Fig. 1.
5 CONCLUSION
In this work, we propose variational invariant learning (VIL), a variational Bayesian learning framework for domain generalization. We introduce Bayesian neural networks into the model, which better represent uncertainty and enhance generalization to out-of-distribution data. To handle the domain shift between source and target domains, we propose a domain-invariant principle under the variational inference framework, which is incorporated by establishing a domain-invariant feature extractor and classifier. Our variational invariant learning combines the representational power of deep neural networks and the uncertainty modeling ability of Bayesian learning, showing great effectiveness for domain generalization. Extensive ablation studies demonstrate the benefits of the Bayesian inference and the domain-invariant principle for domain generalization. Our variational invariant learning sets a new state-of-the-art on four domain generalization benchmarks.
A DERIVATION
A.1 DERIVATION OF THE UPPER BOUNDS OF DOMAIN-INVARIANT LEARNING
As $\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]$ is intractable in most cases, we derive the upper bound in Eq. (5) via Jensen's inequality:

$$
\begin{aligned}
D_{\mathrm{KL}}\big[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big]
&= \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}[\log p_\theta(\mathbf{y}_s|\mathbf{x}_s)] - \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}\big[\log \mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big] \\
&\leq \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}[\log p_\theta(\mathbf{y}_s|\mathbf{x}_s)] - \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}\mathbb{E}_{q_\zeta}[\log p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)] \\
&= \mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big].
\end{aligned}
\tag{16}
$$

In Bayesian inference, computing the likelihood $p_\theta(\mathbf{y}|\mathbf{x}) = \int p(\mathbf{y}|\mathbf{x},\theta)\,d\theta = \mathbb{E}_{q(\theta)}[p(\mathbf{y}|\mathbf{x},\theta)]$ is notoriously difficult. Since the KL divergence is a convex function, we obtain the upper bound in Eq. (6) via Jensen's inequality, analogously to Eq. (16):

$$
\mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big]
= \mathbb{E}_{q_\zeta}\Big[D_{\mathrm{KL}}\big[\mathbb{E}_{q(\theta)}[p(\mathbf{y}_s|\mathbf{x}_s,\theta)]\,\|\,\mathbb{E}_{q(\theta)}[p(\mathbf{y}_\zeta|\mathbf{x}_\zeta,\theta)]\big]\Big]
\leq \mathbb{E}_{q_\zeta}\Big[\mathbb{E}_{q(\theta)}\big[D_{\mathrm{KL}}[p(\mathbf{y}_s|\mathbf{x}_s,\theta)\,\|\,p(\mathbf{y}_\zeta|\mathbf{x}_\zeta,\theta)]\big]\Big].
\tag{17}
$$
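As a quick numerical sanity check on these bounds (an illustration added here, not part of the original derivation), the following self-contained NumPy sketch verifies Jensen's inequality as used in Eq. (16) on random categorical distributions; the class count and number of ζ draws are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """KL divergence between two categorical distributions."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

K, S = 5, 10                            # classes, Monte Carlo draws of zeta
p = rng.dirichlet(np.ones(K))           # p_theta(y_s | x_s)
qs = rng.dirichlet(np.ones(K), size=S)  # p_theta(y_zeta | x_zeta), one row per zeta

lhs = kl(p, qs.mean(axis=0))           # D_KL[p || E_{q_zeta}[p_theta]]
rhs = np.mean([kl(p, q) for q in qs])  # E_{q_zeta}[D_KL[p || p_theta]]
assert lhs <= rhs + 1e-12              # Jensen's inequality, Eq. (16)
print(f"KL(p || mean q) = {lhs:.4f} <= mean KL = {rhs:.4f}")
```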
A.2 DERIVATION OF VARIATIONAL BAYESIAN APPROXIMATION FOR REPRESENTATION (φ) AND CLASSIFIER (ψ) LAYERS.
We consider the model with two Bayesian layers φ and ψ as the last layer of the feature extractor and the classifier, respectively. The prior distribution of the model is p(φ,ψ), and the true posterior distribution is p(φ,ψ|x,y). Following the settings in Section 2.1, we need to learn a variational distribution q(φ,ψ) to approximate the true posterior by minimizing the KL divergence from q(φ,ψ) to p(φ,ψ|x,y):
$$
\phi^*, \psi^* = \arg\min_{\phi,\psi} D_{\mathrm{KL}}\big[q(\phi,\psi)\,\|\,p(\phi,\psi|\mathbf{x},\mathbf{y})\big]. \tag{18}
$$

By applying Bayes' rule $p(\phi,\psi|\mathbf{x},\mathbf{y}) \propto p(\mathbf{y}|\mathbf{x},\phi,\psi)\,p(\phi,\psi)$, the optimization is equivalent to minimizing:

$$
\mathcal{L}_{\mathrm{Bayes}} = \int q(\phi,\psi)\log\frac{q(\phi,\psi)}{p(\phi,\psi)\,p(\mathbf{y}|\mathbf{x},\phi,\psi)}\,d\phi\,d\psi
= D_{\mathrm{KL}}\big[q(\phi,\psi)\,\|\,p(\phi,\psi)\big] - \mathbb{E}_{q(\phi,\psi)}\big[\log p(\mathbf{y}|\mathbf{x},\phi,\psi)\big]. \tag{19}
$$

Since $\phi$ and $\psi$ are independent,

$$
\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(\mathbf{y}|\mathbf{x},\psi,\phi)] + D_{\mathrm{KL}}[q(\psi)\,\|\,p(\psi)] + D_{\mathrm{KL}}[q(\phi)\,\|\,p(\phi)]. \tag{20}
$$
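For completeness, the step from Eq. (19) to Eq. (20) uses the mean-field factorizations $q(\phi,\psi) = q(\phi)q(\psi)$ and $p(\phi,\psi) = p(\phi)p(\psi)$; a short derivation of the KL term, added here for clarity, is:

```latex
\begin{align*}
D_{\mathrm{KL}}\big[q(\phi)q(\psi)\,\|\,p(\phi)p(\psi)\big]
&= \mathbb{E}_{q(\phi)q(\psi)}\!\left[\log\frac{q(\phi)}{p(\phi)} + \log\frac{q(\psi)}{p(\psi)}\right] \\
&= D_{\mathrm{KL}}\big[q(\phi)\,\|\,p(\phi)\big] + D_{\mathrm{KL}}\big[q(\psi)\,\|\,p(\psi)\big].
\end{align*}
```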
B DETAILS OF DOMAIN-INVARIANT LOSS IN VIL TRAINING
We split the training phase of VIL into several episodes. In each episode, as shown in Fig. 2, we randomly choose a source domain as the meta-source domain $\mathcal{D}_s$, and the rest of the source domains $\{\mathcal{D}_t\}_{t=1}^T$ are treated as the meta-target domains. From $\mathcal{D}_s$, we randomly select a batch of samples $\mathbf{x}_s$. For each $\mathbf{x}_s$, we then select $N$ samples $\{\mathbf{x}_t^i\}_{i=1}^N$, which are in the same category as $\mathbf{x}_s$, from each of the meta-target domains $\mathcal{D}_t$. All of these samples are sent to the variational invariant feature extractor $\phi$ to obtain the representations $\mathbf{z}_s$ and $\{\mathbf{z}_t^i\}_{i=1}^N$, which are then sent to the variational invariant classifier $\psi$ to obtain the predictions $\mathbf{y}_s$ and $\{\mathbf{y}_t^i\}_{i=1}^N$. We obtain the domain-invariant loss for the feature extractor $\mathcal{L}_I(\phi)$ by computing the mean KL divergence between $\mathbf{z}_s$ and each $\mathbf{z}_t^i$ as in Eq. (11). The domain-invariant loss for the classifier $\mathcal{L}_I(\psi)$ is computed in a similar way on $\mathbf{y}_s$ and $\{\mathbf{y}_t^i\}_{i=1}^N$ as in Eq. (10).
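To make the episodic computation concrete, the following PyTorch-style sketch implements both losses. It is an illustration rather than the authors' code, and it assumes the tensors have been pre-aligned so that each meta-target entry pairs one same-class sample with each meta-source sample:

```python
import torch
import torch.nn.functional as F

def categorical_kl(logits_s, logits_t):
    """D_KL[p(y_s|z_s,psi) || p(y_t|z_t,psi)] between softmax predictions."""
    log_p = F.log_softmax(logits_s, dim=-1)
    log_q = F.log_softmax(logits_t, dim=-1)
    return (log_p.exp() * (log_p - log_q)).sum(-1).mean()

def gaussian_kl(mu_s, var_s, mu_t, var_t):
    """D_KL between factorized Gaussian feature distributions, cf. Eq. (11)."""
    kl = 0.5 * (torch.log(var_t / var_s) + (var_s + (mu_s - mu_t) ** 2) / var_t - 1.0)
    return kl.sum(-1).mean()

def domain_invariant_losses(logits_s, feat_s, per_domain):
    """per_domain: list over meta-target domains; each entry is a dict with
    'logits', 'mu', 'var' tensors whose row i pairs a same-class meta-target
    sample with source sample i (shapes match the meta-source batch)."""
    mu_s, var_s = feat_s
    li_psi = torch.stack(
        [categorical_kl(logits_s, d["logits"]) for d in per_domain]).mean()
    li_phi = torch.stack(
        [gaussian_kl(mu_s, var_s, d["mu"], d["var"]) for d in per_domain]).mean()
    return li_psi, li_phi  # L_I(psi) and L_I(phi) of Eqs. (10) and (11)
```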
C ABLATION STUDY FOR HYPERPARAMETERS
We also ablate the hyperparameters λφ, λψ and π on PACS with cartoon as the target domain. Results are shown in Fig. 3 (a), (b) and (c). We obtain Fig. 3 (a) by fixing λψ to 100 and adjusting λφ, Fig. 3 (b) by fixing λφ to 1 and adjusting λψ, and Fig. 3 (c) by adjusting π while fixing the other settings as in Section 4.1. λφ and λψ balance the influence of the Bayesian learning and domain-invariant learning, and their optimal values are 1 and 100. If the values are too small, the model tends to overfit to the source domains, as the performance on target data drops more markedly than on validation data. In contrast, values that are too large harm the overall performance of the model, as there is a clear decrease in accuracy on both validation and target data. Moreover, π balances the two components of the scale mixture prior of our Bayesian model. According to Blundell et al. (2015), the two components yield a prior density with a heavier tail while many weights concentrate tightly around zero. Both components are important. The performance is best when π is 0.5 according to Fig. 3 (c), which demonstrates the effectiveness of the two components in the scale mixture prior.
D DETAILED ABLATION STUDY ON PACS
In addition to the aforementioned experiments, we conduct supplementary experiments with other settings on PACS to further demonstrate the effectiveness of the Bayesian inference and the domain-invariant loss, as shown in Table 6. The evaluated components are the same as in Table 1. For easier comparison, we repeat the contents of Table 1 in Table 6 and add three other settings with IDs (i), (j) and (k). Note that as the distribution of features z is unknown without a Bayesian feature extractor φ, the settings with LI(φ) and a non-Bayesian feature extractor are intractable. Comparing (i) with (d), we find that employing Bayesian inference in the last layer of the feature extractor improves the overall performance and the classification accuracy on three of the four domains. Moreover, comparing (j) and (k) to (g) shows the benefits of introducing variational Bayes and the domain-invariant loss into the classifier on most domains and on average.
E EXTRA VISUALIZATION RESULTS
To further observe and analyze the benefits of the individual components of VIL for domain-invariant learning, we visualize the features of all categories from the target domain only in Fig. 4, and the features of only one category from all domains in Fig. 5. As in Fig. 1, the visualization is conducted on the PACS dataset and the target domain is “art-painting”. The chosen category in Fig. 5 is “horse”.
Fig. 4 provides a more intuitive view of the benefits of the Bayesian framework and domain-invariant learning in our method for enlarging the inter-class distance in the target domain. The conclusion is similar to that of Fig. 1. From the figures in the first row, it is clear that the Bayesian framework, whether in the classifier (b) or the feature extractor (c), increases the inter-class distance compared with the baseline method (a). With both of them (d), the performance improves further. Moreover, comparing the two figures in each column, the inter-class distance is also enlarged by introducing the domain-invariant principle into the setting of each figure in the first row. VIL (h) achieves the best performance by combining the benefits of both the Bayesian framework and the domain-invariant principle in both the feature extractor and the classifier.
Fig. 5 provides a deeper insight into the intra-class feature distributions of the same category from different domains. By introducing Bayesian inference into the model, the features trace the manifold of the category, as shown in the first row ((b), (c) and (d)), which makes recognition easier. Indeed, the visualization of features from multiple categories has similar properties, as shown in Fig. 1. As shown in each column, introducing domain-invariant learning into the model leads to a better mixture of features from different domains. The resulting domain-invariant representation makes the model more generalizable to unseen domains.
We also visualize the features on the rotated MNIST and Fashion-MNIST datasets, as shown in Fig. 6. Different shapes denote different categories. Red samples denote features from the in-distribution set and blue samples denote features from the out-of-distribution set. Compared with the baseline, our method reduces the intra-class distance between samples from the in-distribution set and the out-of-distribution set and better clusters the out-of-distribution samples of the same categories, especially on the rotated Fashion-MNIST dataset.
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and computational practicality?
3. What are the weaknesses of the paper regarding the lack of clear Bayesian interpretation for certain parameters?
4. Do you have any concerns or questions about the experimental evaluation and the use of different baseline methods for each dataset?
5. How do the parameters \lambda_{\phi} and \lambda_{\psi} impact the performance, and can they be interpreted from the MLE perspective?
The authors propose a variational Bayesian approach to domain adaptation. The goal is to achieve flexibility in specifying domain invariance as well as modeling uncertainty. The variational approach could also help generalization.
The key idea is simple and straightforward — a distribution over domains is first specified and variational approximation is then used for parameter estimation. In particular, the authors decompose the variational distribution in terms of the domain and classifier parameters, applying the approximation to the last two network layers. The objective function is given as a weighted sum of the KL divergences corresponding to the two component distributions.
In the empirical evaluation, the authors compare the performance on in-distribution and out-of-distribution data. They conclude that the proposed method achieves generally improved accuracy on several image datasets, including PACS, Office-Home, as well as MNIST variants.
Strengths:
The presentation is clear. The proposed Bayesian formulation is natural and the resulting learning problem is computationally practical.
Weaknesses:
Some parameters such as the weights \lambda_{\phi} and \lambda_{\psi} do not have clear Bayesian interpretations.
In the experimental evaluation, different baseline methods are used for each dataset.
Questions: How do the parameters \lambda_{\phi} and \lambda_{\psi} impact the performance? Could we interpret these parameters from the MLE perspective?
ICLR | Title
Variational Invariant Learning for Bayesian Domain Generalization
Abstract
Domain generalization addresses the out-of-distribution problem, which is challenging due to the domain shift and the uncertainty caused by the inaccessibility to data from the target domains. In this paper, we propose variational invariant learning, a probabilistic inference framework that jointly models domain invariance and uncertainty. We introduce variational Bayesian approximation into both the feature representation and classifier layers to facilitate invariant learning for better generalization across domains. In the probabilistic modeling framework, we introduce a domain-invariant principle to explore invariance across domains in a unified way. We incorporate the principle into the variational Bayesian layers in neural networks, achieving domain-invariant representations and classifier. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies demonstrate the benefits of our proposal and on all benchmarks our variational invariant learning consistently delivers state-of-the-art performance.
1 INTRODUCTION
Domain generalization (Muandet et al., 2013), as an out-of-distribution problem, aims to train a model on several source domains and have it generalize well to unseen target domains. The major challenge stems from the large distribution shift between the source and target domains, which is further complicated by the prediction uncertainty (Malinin & Gales, 2018) introduced by the inaccessibility to data from target domains during training. Previous approaches focus on learning domain-invariant features using novel loss functions (Muandet et al., 2013; Li et al., 2018a) or specific architectures (Li et al., 2017a; D’Innocente & Caputo, 2018). Meta-learning based methods were proposed to achieve similar goals by leveraging an episodic training strategy (Li et al., 2017b; Balaji et al., 2018; Du et al., 2020). Most of these methods are based on deep neural network backbones (Krizhevsky et al., 2012; He et al., 2016). However, while deep neural networks have achieved remarkable success in various vision tasks, their performance is known to degenerate considerably when the test samples are out of the training data distribution (Nguyen et al., 2015; Ilse et al., 2019), due to their poorly calibrated behavior (Guo et al., 2017; Kristiadi et al., 2020).
As an attractive solution, Bayesian learning naturally represents prediction uncertainty (Kristiadi et al., 2020; MacKay, 1992), possesses better generalizability to out-of-distribution examples (Louizos & Welling, 2017) and provides an elegant formulation to transfer knowledge across different datasets (Nguyen et al., 2018). Further, approximate Bayesian inference has been demonstrated to be able to improve prediction uncertainty (Blundell et al., 2015; Louizos & Welling, 2017; Atanov et al., 2019), even when only applied to the last network layer (Kristiadi et al., 2020). These properties make it appealing to introduce Bayesian learning into the challenging and unexplored scenario of domain generalization.
In this paper, we propose variational invariant learning (VIL), a Bayesian inference framework that jointly models domain invariance and uncertainty for domain generalization. We apply variational Bayesian approximation to the last two network layers for both the representations and classifier by placing prior distributions over their weights, which facilitates generalization. We adopt Bayesian neural networks to domain generalization, which enjoys the representational power of deep neural networks while facilitating better generalization. To further improve the robustness to domain shifts, we introduce the domain-invariant principle under the Bayesian inference framework, which enables
us to explore domain invariance for both feature representations and the classifier in a unified way. We evaluate our method on four widely-used benchmarks for cross-domain visual object classification. Our ablation studies demonstrate the effectiveness of the variational Bayesian domain-invariant features and classifier for domain generalization. Results further show that our method achieves the best performance on all of the four benchmarks.
2 METHODOLOGY
We explore Bayesian inference for domain generalization. In this task, the samples from the target domains are never seen during training, and are usually out of the data distribution of the source domains. This leads to uncertainty when making predictions on the target domains. Bayesian inference offers a principled way to represent the predictive uncertainty in neural networks (MacKay, 1992; Kristiadi et al., 2020). We briefly introduce approximate Bayesian inference, under which we will introduce our variational invariant learning for domain generalization.
2.1 APPROXIMATE BAYESIAN INFERENCE
Given a dataset $\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^N$ of $N$ input-output pairs and a model parameterized by weights $\theta$ with a prior distribution $p(\theta)$, Bayesian neural networks aim to infer the true posterior distribution $p(\theta|\mathbf{x},\mathbf{y})$. As the exact inference of the true posterior is computationally intractable, Hinton & Camp (1993) and Graves (2011) recommended learning a variational distribution $q(\theta)$ to approximate $p(\theta|\mathbf{x},\mathbf{y})$ by minimizing the Kullback-Leibler (KL) divergence between them:

$$
\theta^* = \arg\min_{\theta} D_{\mathrm{KL}}\big[q(\theta)\,\|\,p(\theta|\mathbf{x},\mathbf{y})\big]. \tag{1}
$$

The above optimization is equivalent to minimizing the loss function:

$$
\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\theta)}[\log p(\mathbf{y}|\mathbf{x},\theta)] + D_{\mathrm{KL}}[q(\theta)\,\|\,p(\theta)], \tag{2}
$$
which is also known as the negative value of the evidence lower bound (ELBO) (Blei et al., 2017).
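As an illustration of how Eq. (2) is optimized in practice, here is a minimal mean-field Bayesian linear layer in PyTorch with a reparameterized weight sample and a closed-form KL term. This is a sketch, not the authors' implementation; it uses a standard normal prior for simplicity rather than the scale mixture prior adopted later in the paper, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior q(theta) over weights."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.empty(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))
        nn.init.xavier_normal_(self.mu)

    def forward(self, x):
        # Reparameterization: theta = mu + eps * sigma, eps ~ N(0, I).
        sigma = self.log_sigma.exp()
        theta = self.mu + torch.randn_like(sigma) * sigma
        return F.linear(x, theta)

    def kl_to_prior(self):
        # Closed-form KL[q(theta) || N(0, I)]; the scale mixture prior used
        # later in the paper would instead require a Monte Carlo estimate.
        sigma2 = (2.0 * self.log_sigma).exp()
        return 0.5 * (sigma2 + self.mu ** 2 - 1.0 - 2.0 * self.log_sigma).sum()

layer = MeanFieldLinear(512, 7)  # e.g., 512-d features, 7 PACS classes

def bayes_loss(feats, labels):
    logits = layer(feats)                  # one Monte Carlo sample of theta
    nll = F.cross_entropy(logits, labels)  # estimate of -E_q[log p(y|x, theta)]
    return nll + layer.kl_to_prior()       # Eq. (2)
```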
2.2 VARIATIONAL DOMAIN-INVARIANT LEARNING
In domain generalization, let $\mathcal{D} = \{\mathcal{D}_i\}_{i=1}^{|\mathcal{D}|} = \mathcal{S} \cup \mathcal{T}$ be a set of domains, where $\mathcal{S}$ and $\mathcal{T}$ denote the source domains and the target domains, respectively. $\mathcal{S}$ and $\mathcal{T}$ do not overlap but share the same label space. For each domain $\mathcal{D}_i \in \mathcal{D}$, we can define a joint distribution $p(\mathbf{x}_i,\mathbf{y}_i)$ on the input space $\mathcal{X}$ and the output space $\mathcal{Y}$. We aim to learn a model $f: \mathcal{X} \to \mathcal{Y}$ on the source domains $\mathcal{S}$ that generalizes well to the target domains $\mathcal{T}$. The fundamental problem in domain generalization is to achieve robustness to the domain shift between source and target domains; that is, we aim to learn a model invariant to the distributional shift between the source and target domains. In this work, we mainly focus on the invariance property across domains instead of exploring general invariance properties (Nalisnick & Smyth, 2018). Therefore, we introduce a formal definition of domain invariance, which is easily incorporated as a criterion into the Bayesian framework to achieve domain-invariant learning.

Provided that all domains in $\mathcal{D}$ lie in the same domain space, for any input sample $\mathbf{x}_s$ in domain $\mathcal{D}_s$ we assume that there exists a domain-transform function $g_\zeta(\cdot)$, defined as a mapping that projects $\mathbf{x}_s$ to other domains $\mathcal{D}_\zeta$ with respect to the parameter $\zeta$, where $\zeta \sim q(\zeta)$, and different $\zeta$ lead to different post-transformation domains $\mathcal{D}_\zeta$. Usually the exact form of $g_\zeta(\cdot)$ is not known. Under this assumption, we introduce the definition of domain invariance, which we will incorporate into the Bayesian layers of neural networks for domain-invariant learning.
Definition 2.1 (Domain Invariance) Let xs be a given sample from domain Ds ∈ D, and xζ = gζ(xs) be a transformation of xs in another domain Dζ , where ζ ∼ q(ζ). pθ(y|x) denotes the output distribution of input x with model θ. The model θ is domain-invariant if,
$$
p_\theta(\mathbf{y}_s|\mathbf{x}_s) = p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta), \quad \forall \zeta \sim q(\zeta). \tag{3}
$$
Here, we use y to represent the output from a neural layer with input x, which can either be the prediction vector from the last layer or the feature vector from the convolutional layers.
To make the domain-invariant principle easier to implement, we extend Eq. (3) to an expectation form:

$$
p_\theta(\mathbf{y}_s|\mathbf{x}_s) = \mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]. \tag{4}
$$
Based on this definition, we use the Kullback-Leibler divergence between the two terms in Eq. (4), $D_{\mathrm{KL}}\big[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big]$, to quantify the domain invariance of the model; it is zero when the model is domain invariant. As in most cases there is no analytical form of the domain-transform function and only a few samples from $\mathcal{D}_\zeta$ are available, $\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]$ is intractable. Thus, we derive the following upper bound of the divergence:

$$
D_{\mathrm{KL}}\big[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big] \leq \mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big], \tag{5}
$$

which can be approximated by Monte Carlo sampling.

We define the complete objective function of our variational invariant learning by combining Eq. (5) with Eq. (2). However, in Bayesian inference the likelihood is obtained by taking the expectation over the distribution of the parameters $\theta$, i.e., $p_\theta(\mathbf{y}|\mathbf{x}) = \mathbb{E}_{q(\theta)}[p(\mathbf{y}|\mathbf{x},\theta)]$, which is also intractable in Eq. (5). As the KL divergence is a convex function (Nalisnick & Smyth, 2018), we further extend Eq. (5) to an upper bound:

$$
\mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big] \leq \mathbb{E}_{q_\zeta}\Big[\mathbb{E}_{q(\theta)}\big[D_{\mathrm{KL}}[p(\mathbf{y}_s|\mathbf{x}_s,\theta)\,\|\,p(\mathbf{y}_\zeta|\mathbf{x}_\zeta,\theta)]\big]\Big], \tag{6}
$$
which is tractable with the unbiased Monte Carlo approximation. The complete derivations of Eq. (5) and Eq. (6) are provided in Appendix A.
In addition, it is worth noting that the domain-transformation distribution q(ζ) is implicit and inexpressible in reality and there are only a limited number of domains available in practice. This problem is exacerbated because the target domain is unseen during training, which further limits the number of available domains. Moreover, in most of the domain generalization databases, for a certain sample xs from domain Ds, there is no transformation corresponding to xs in other domains. This prevents the expectation with respect to qζ from being directly tractable in general.
Thus, we resort to an empirically tractable implementation and adopt an episodic setting as in (Li et al., 2019). In each episode, we choose one domain from the source domains $\mathcal{S}$ as the meta-source domain $\mathcal{D}_s$ and use the rest as the meta-target domains $\{\mathcal{D}_t\}_{t=1}^T$. To achieve variational invariant learning in the Bayesian framework, we use samples from the meta-target domains in the same category as $\mathbf{x}_s$ to approximate the samples of $g_\zeta(\mathbf{x}_s)$. We then obtain a general loss function for domain-invariant learning:

$$
\mathcal{L}_I = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\theta)}\Big[D_{\mathrm{KL}}\big[p(\mathbf{y}_s|\mathbf{x}_s,\theta)\,\|\,p(\mathbf{y}_t^i|\mathbf{x}_t^i,\theta)\big]\Big], \tag{7}
$$

where $\{\mathbf{x}_t^i\}_{i=1}^N$ are samples from $\mathcal{D}_t$ in the same category as $\mathbf{x}_s$. More details and an illustration of the domain-invariant loss function can be found in Appendix B.
With the aforementioned loss functions, we arrive at the loss function of variational invariant learning for domain generalization:

$$
\mathcal{L}_{\mathrm{VIL}} = \mathcal{L}_{\mathrm{Bayes}} + \lambda \mathcal{L}_I. \tag{8}
$$

Our variational invariant learning combines the Bayesian framework, which introduces uncertainty into the network and is beneficial for out-of-distribution problems (Daxberger & Hernández-Lobato, 2019), with a domain-invariant loss function $\mathcal{L}_I$, which is designed on predictive distributions to make the model generalize better to unseen target domains. For Bayesian learning, it has been demonstrated that being just “a bit” Bayesian in the last layer of the neural network can represent the uncertainty in predictions well (Kristiadi et al., 2020). This indicates that applying the Bayesian treatment only to the last layer already brings sufficient benefits of Bayesian inference. Although adding Bayesian inference to more layers improves the performance, it also increases the computational cost. Further, from the perspective of domain invariance, making both the classifier and the feature extractor more robust to domain shifts also leads to better performance (Li et al., 2019). Thus, there is a trade-off between the benefits of variational Bayesian domain invariance and computational efficiency. Instead of applying the Bayesian principle to all layers of the neural network, in this work we explore domain invariance by applying it only to the classifier layer $\psi$ and the last feature-extraction layer $\phi$.

In this case, the $\mathcal{L}_{\mathrm{Bayes}}$ in Eq. (2) becomes the ELBO with respect to $\psi$ and $\phi$ jointly. As they are independent, $\mathcal{L}_{\mathrm{Bayes}}$ is expressed as:

$$
\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(\mathbf{y}|\mathbf{x},\psi,\phi)] + D_{\mathrm{KL}}[q(\psi)\,\|\,p(\psi)] + D_{\mathrm{KL}}[q(\phi)\,\|\,p(\phi)]. \tag{9}
$$

The above variational inference objective allows us to explore domain-invariant representations and a domain-invariant classifier in a unified way. The detailed derivation of Eq. (9) is provided in Appendix A.
Domain-Invariant Classifier To establish the domain-invariant classifier, we directly incorporate the proposed domain-invariant principle into the last layer of the network, which gives rise to
$$
\mathcal{L}_I(\psi) = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\psi)}\Big[D_{\mathrm{KL}}\big[p(\mathbf{y}_s|\mathbf{z}_s,\psi)\,\|\,p(\mathbf{y}_t^i|\mathbf{z}_t^i,\psi)\big]\Big], \tag{10}
$$

where $\mathbf{z}$ denotes the feature representation of input $\mathbf{x}$, and the subscripts $s$ and $t$ indicate the meta-source domain and the meta-target domains as in Eq. (7). Since $p(\mathbf{y}|\mathbf{z},\psi)$ is a Bernoulli distribution, we can conveniently calculate the KL divergence in Eq. (10).
Domain-Invariant Representations To also make the representations domain invariant, we have
$$
\mathcal{L}_I(\phi) = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q(\phi)}\Big[D_{\mathrm{KL}}\big[p(\mathbf{z}_s|\mathbf{x}_s,\phi)\,\|\,p(\mathbf{z}_t^i|\mathbf{x}_t^i,\phi)\big]\Big], \tag{11}
$$

where $\phi$ are the parameters of the feature extractor. Since the feature extractor is also a Bayesian layer, the distribution $p(\mathbf{z}|\mathbf{x},\phi)$ is a factorized Gaussian if the posterior of $\phi$ is as well. We illustrate this as follows. Let $\mathbf{x}$ be the input feature of a Bayesian layer $\phi$ with a factorized Gaussian posterior; then the posterior of the activation $\mathbf{z}$ of the Bayesian layer is also a factorized Gaussian (Kingma et al., 2015):

$$
q(\phi_{i,j}) \sim \mathcal{N}(\mu_{i,j}, \sigma_{i,j}^2)\ \ \forall \phi_{i,j} \in \phi \;\Rightarrow\; p(z_j|\mathbf{x},\phi) \sim \mathcal{N}(\gamma_j, \delta_j^2), \qquad \gamma_j = \sum_{i=1}^{N} x_i \mu_{i,j}, \qquad \delta_j^2 = \sum_{i=1}^{N} x_i^2 \sigma_{i,j}^2, \tag{12}
$$

where $z_j$ denotes the $j$-th element of $\mathbf{z}$, likewise for $x_i$, and $\phi_{i,j}$ denotes the element at position $(i,j)$ of $\phi$. Based on this property of the Bayesian framework, we assume that the posterior of our variational invariant feature extractor is a factorized Gaussian distribution, which makes the KL divergence in Eq. (11) easy to calculate. Note that with the domain-invariant representations, $\mathbf{z}$ in Eq. (10) corresponds to samples from the feature representation distributions: $\mathbf{z}_s \sim p(\mathbf{z}_s|\mathbf{x}_s,\phi)$ and $\mathbf{z}_t \sim p(\mathbf{z}_t|\mathbf{x}_t,\phi)$.
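A minimal PyTorch sketch of Eq. (12) is given below: it analytically propagates the posterior mean and variance of the activations, which is the local reparameterization trick of Kingma et al. (2015). This is an illustration under assumed layer shapes, not the authors' code, and the returned (γ, δ²) pair is exactly what the Gaussian KL in Eq. (11) consumes:

```python
import torch
import torch.nn as nn

class LocalReparamLinear(nn.Module):
    """Bayesian linear layer that propagates the activation distribution of Eq. (12)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_in, d_out) * 0.05)
        self.log_sigma = nn.Parameter(torch.full((d_in, d_out), -3.0))

    def forward(self, x):
        gamma = x @ self.mu                                  # gamma_j = sum_i x_i mu_{i,j}
        delta2 = (x ** 2) @ (2.0 * self.log_sigma).exp()     # delta_j^2 = sum_i x_i^2 sigma_{i,j}^2
        z = gamma + delta2.sqrt() * torch.randn_like(gamma)  # z_j ~ N(gamma_j, delta_j^2)
        return z, gamma, delta2                              # (gamma, delta2) feed Eq. (11)
```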
2.3 OBJECTIVE FUNCTION
The objective function of our variational invariant learning is defined as:

$$
\mathcal{L}_{\mathrm{VIL}} = \mathcal{L}_{\mathrm{Bayes}} + \lambda_\psi \mathcal{L}_I(\psi) + \lambda_\phi \mathcal{L}_I(\phi), \tag{13}
$$
where λψ and λφ are hyperparameters to control the domain-invariant terms. We adopt Monte Carlo sampling and obtain the empirical objective function for variational invariant learning as follows:
$$
\begin{aligned}
\mathcal{L}_{\mathrm{VIL}} &= \frac{1}{L}\sum_{\ell=1}^{L}\frac{1}{M}\sum_{m=1}^{M}\big[-\log p(\mathbf{y}_s|\mathbf{x}_s,\psi^{(\ell)},\phi^{(m)})\big] + D_{\mathrm{KL}}\big[q(\psi)\,\|\,p(\psi)\big] + D_{\mathrm{KL}}\big[q(\phi)\,\|\,p(\phi)\big] \\
&\quad + \lambda_\psi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{L}\sum_{\ell=1}^{L} D_{\mathrm{KL}}\big[p(\mathbf{y}_s|\mathbf{z}_s,\psi^{(\ell)})\,\|\,p(\mathbf{y}_t^i|\mathbf{z}_t^i,\psi^{(\ell)})\big] \\
&\quad + \lambda_\phi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{m=1}^{M} D_{\mathrm{KL}}\big[p(\mathbf{z}_s|\mathbf{x}_s,\phi^{(m)})\,\|\,p(\mathbf{z}_t^i|\mathbf{x}_t^i,\phi^{(m)})\big],
\end{aligned}
\tag{14}
$$

where $\mathbf{x}_s$ and $\mathbf{z}_s$ denote the input and its feature from $\mathcal{D}_s$, respectively, and $\mathbf{x}_t^i$ and $\mathbf{z}_t^i$ are from $\mathcal{D}_t$ as in Eq. (7). The posteriors are set to factorized Gaussian distributions, i.e., $q(\psi) = \mathcal{N}(\mu_\psi, \sigma_\psi^2)$ and $q(\phi) = \mathcal{N}(\mu_\phi, \sigma_\phi^2)$. We adopt the reparameterization trick to draw Monte Carlo samples (Kingma & Welling, 2014) as $\psi^{(\ell)} = \mu_\psi + \epsilon^{(\ell)} \ast \sigma_\psi$, where $\epsilon^{(\ell)} \sim \mathcal{N}(0, \mathbf{I})$. We draw the samples for $\phi^{(m)}$ in a similar way. In the implementation of our variational invariant learning, to increase the flexibility of the prior distribution in our Bayesian layers, we place a scale mixture of two Gaussian distributions as the priors $p(\psi)$ and $p(\phi)$ (Blundell et al., 2015):

$$
\pi \mathcal{N}(0, \sigma_1^2) + (1-\pi)\mathcal{N}(0, \sigma_2^2), \tag{15}
$$

where $\sigma_1$, $\sigma_2$ and $\pi$ are hyperparameters chosen by cross-validation.
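For illustration, the following PyTorch snippet (a sketch, not the released implementation) evaluates the log-density of the scale mixture prior in Eq. (15) and a single-sample Monte Carlo estimate of the KL term, which has no closed form under this prior; the default values π = 0.5, σ₁ = 0.1, σ₂ = 1.5 are those reported in the experiments:

```python
import math
import torch

def log_scale_mixture_prior(w, pi=0.5, sigma1=0.1, sigma2=1.5):
    """log p(w) under pi*N(0, sigma1^2) + (1-pi)*N(0, sigma2^2), Eq. (15)."""
    def log_normal(w, sigma):
        return -0.5 * (w / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return torch.logaddexp(
        math.log(pi) + log_normal(w, sigma1),
        math.log(1.0 - pi) + log_normal(w, sigma2),
    ).sum()

def mc_kl_to_prior(mu, log_sigma):
    """Single-sample Monte Carlo estimate of KL[q(w) || p(w)]."""
    sigma = log_sigma.exp()
    w = mu + sigma * torch.randn_like(sigma)  # reparameterized draw from q
    log_q = (-0.5 * ((w - mu) / sigma) ** 2 - log_sigma
             - 0.5 * math.log(2 * math.pi)).sum()  # log q(w)
    return log_q - log_scale_mixture_prior(w)
```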
3 RELATED WORK
One solution for domain generalization is to generate more source domain data to increase the probability of covering the data in the target domains (Shankar et al., 2018; Volpi et al., 2018). Shankar et al. (2018) augmented the data by perturbing the input images with adversarial gradients generated by an auxiliary classifier. Qiao et al. (2020) proposed a more challenging scenario of domain generalization named single domain generalization, which has only one source domain, and designed an adversarial domain augmentation method to create “fictitious” yet “challenging” data. Recently, Zhou et al. (2020) employed a generator to synthesize data from pseudo-novel domains to augment the source domains, maximizing the distance between source and pseudo-novel domains as measured by optimal transport (Peyré et al., 2019). Another solution for domain generalization is based on learning domain-invariant features (D’Innocente & Caputo, 2018; Li et al., 2018b; 2017a). Muandet et al. (2013) proposed domain-invariant component analysis to learn invariant transformations by minimizing the dissimilarity across domains. Louizos et al. (2015) learned invariant representations with a variational autoencoder (Kingma & Welling, 2014), which introduced Bayesian inference into invariant feature learning. Dou et al. (2019) and Seo et al. (2019) pursued a similar goal by introducing two complementary losses, a global class alignment loss and a local sample clustering loss, and by employing multiple normalization methods. Li et al. (2019) proposed an episodic training algorithm to obtain both a domain-invariant feature extractor and a domain-invariant classifier.
Recently, meta-learning based techniques have been considered for domain generalization. Li et al. (2018a) proposed a meta-learning domain generalization method that introduced a gradient-based method, i.e., model-agnostic meta-learning (Finn et al., 2017), for domain generalization. Balaji et al. (2018) addressed the domain generalization problem by learning a regularization function in a meta-learning framework, making the model robust to domain shifts. Du et al. (2020) proposed the meta variational information bottleneck to learn domain-invariant representations through episodic training.
To the best of our knowledge, Bayesian neural networks have not yet been explored in domain generalization. Our method introduces variational Bayesian approximation to both the feature extractor and classifier of the neural network in conjunction with the newly introduced domain-invariant principle for domain generalization. The resultant variational invariant learning combines the representational power of deep neural networks and variational Bayesian inference.
Similar to our proposal, CCSA by Motiian et al. (2017) also aligns representations of the same class across domains. Specifically, CCSA utilizes an L2 distance between deterministic features, while we exploit Bayesian neural networks to learn domain-invariant representations by minimizing the distance between domain distributions. Theoretically, minimizing the distance between distributions incorporates larger inter-class variance than minimizing the distance between deterministic features. Moreover, we apply our variational invariant learning to both the feature extractor and the classifier, while CCSA only considers an alignment loss on the feature representations.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We conduct our experiments on four widely used benchmarks in domain generalization:
PACS (Li et al., 2017a) consists of 9,991 images of seven classes from four domains - photo, art-painting, cartoon and sketch. We follow the “leave-one-out” protocol in (Li et al., 2017a; 2018b; Carlucci et al., 2019), where the model is trained on any three of the four domains, which we call the source domains, and tested on the remaining (target) domain.
Office-Home (Venkateswara et al., 2017) also has four domains: art, clipart, product and real-world. There are about 15,500 images of 65 categories for object recognition in office and home environments. We use the same experimental protocol as for PACS.
Rotated MNIST and Fashion-MNIST were introduced for evaluating domain generalization in (Piratla et al., 2020). For a fair comparison, we follow their recommended settings and randomly select a subset of 2,000 images from MNIST and 10,000 images from Fashion-MNIST, which is considered to have been rotated by 0◦. The subset of images is then rotated by 15◦ through 75◦ in intervals of 15◦, creating five source domains. The target domains are created by rotation angles of 0◦ and 90◦. We use these two datasets to demonstrate generalizability by comparing performance on in-distribution and out-of-distribution data.
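A minimal sketch of this domain construction with torchvision (an illustration of the protocol, not the benchmark's exact preprocessing; subset selection is simplified to the first 2,000 images rather than a random draw, and a recent torchvision is assumed for tensor rotation):

```python
import torch
from torchvision import datasets, transforms
from torchvision.transforms import functional as TF

mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
subset = [mnist[i] for i in range(2000)]  # 2,000-image subset

def rotated_domain(data, angle):
    """Create one domain by rotating every image by `angle` degrees."""
    return [(TF.rotate(img, angle), label) for img, label in data]

source_domains = {a: rotated_domain(subset, a) for a in (15, 30, 45, 60, 75)}
target_domains = {a: rotated_domain(subset, a) for a in (0, 90)}
```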
For all four benchmarks, we use a ResNet-18 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009) as the CNN backbone. During training, we use Adam optimization (Kingma & Ba, 2014) with the learning rate set to 0.0001, and train for 10,000 iterations. In each iteration we choose one source domain as the meta-source domain. The batch size is 128. To fit the memory footprint, we cap the number of samples per category per target domain used for domain-invariant learning, i.e., sixteen for the PACS, Rotated MNIST and Fashion-MNIST datasets, and four for the Office-Home dataset. We choose λφ and λψ based on the performance on the validation set; their influence is summarized in Fig. 3 in Appendix C. The optimal values of λφ and λψ are 0.1 and 100, respectively. Parameters σ1 and σ2 in Eq. (15) are set to 0.1 and 1.5. The model with the highest validation-set accuracy is employed for evaluation on the target domain. All code will be made publicly available.
4.2 ABLATION STUDY
We conduct an ablation study to investigate the effectiveness of our variational invariant learning for domain generalization. The experiments are performed on the PACS dataset. Since the major contributions of this work are the Bayesian treatment and the domain-invariant principle, we evaluate their effect by individually incorporating them into the classifier - the last layer - ψ and the feature extractor - the penultimate layer - φ. The results are shown in Table 1. The “✓” and “×” in the “Bayesian” column denote whether the classifier ψ and feature extractor φ are Bayesian layers or deterministic layers. In the “Invariant” column they denote whether the domain-invariant loss is introduced into the classifier and the feature extractor. Note that since the predictive distribution is a Bernoulli distribution, which also admits the domain-invariant loss, we include this case for a comprehensive comparison.
In Table 1, the first four rows demonstrate the benefit of the Bayesian treatment. The first row (a) serves as the baseline model, a vanilla deep convolutional network without any Bayesian treatment or domain-invariant loss. The backbone is also a ResNet-18 pretrained on ImageNet. It is clear that the Bayesian treatment, either for the classifier (b) or the feature extractor (c), improves the performance, especially in the “Art-painting” and “Sketch” domains; this is further confirmed in (d), where we employ the Bayesian classifier and feature extractor simultaneously.
The benefit of the domain-invariant principle for classifiers is demonstrated by comparing (e) to (a) and (f) to (b). The settings with domain invariance consistently perform better than those without it. A similar trend is also observed when applying the domain-invariant principle to the feature extractor, as shown by comparing (g) to (c). Overall, our variational invariant learning (h) achieves the best performance compared to other variants, demonstrating its effectiveness for domain generalization. Note that the feature distributions p(z|x) are unknown without Bayesian formalism, leading to an intractable LI(φ). Therefore, we do not conduct the experiment with only the domain-invariant loss on both the classifier and the feature extractor.
To further demonstrate the domain-invariant property of our method, we visualize the features learned by different settings of our method in Table 1. We use t-SNE (Maaten & Hinton, 2008)
to reduce the feature dimension to a two-dimensional subspace, following Du et al. (2020). The visualization is shown in Fig. 1. More visualization results are provided in Appendix E.
Figs. 1 (a), (b), (c) and (d) show the baseline, the Bayesian classifier, the Bayesian representations, and both the Bayesian classifier and representations, demonstrating the benefits of Bayesian inference for learning domain-invariant features. The Bayesian treatment of both the classifier and the feature extractor enlarges the inter-class distance in all domains, which benefits the classification performance on the target domain, as shown in Fig. 1 (d). Figs. 1 (e), (f), (g) and (h) visualize the feature representations after introducing the domain-invariant principle, compared to the upper row. Comparing the two figures in each column indicates that our domain-invariant principle, imposed on either the representation or the classifier, further enlarges the inter-class distances. At the same time, it reduces the distance between samples of the same class from different domains. This is even more apparent in the intra-class distance between samples from source and target domains. As a result, the inter-class distances in the target domain become larger, improving performance. It is worth noting that the domain-invariant principle on the classifier in Fig. 1 (f) and on the feature extractor in Fig. 1 (g) both improve the domain-invariant features. Our variational invariant learning in Fig. 1 (h) therefore performs better by combining their benefits.
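The visualizations can be reproduced with a short scikit-learn sketch along these lines (an illustration, not the authors' plotting code; the feature matrix and labels are placeholders):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, title):
    """Embed (n_samples, d) features in 2-D and color points by class label."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=8)
    plt.title(title)
    plt.show()
```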
We also experiment with more layers in the feature extractor, see Table 2. “Bayesian φ′” and “Invariant φ′” denote whether the additional feature-extraction layer φ′ has the Bayesian property and the domain-invariant property. The classifiers have both properties in all cases in Table 2. The first row is the setting with only one variational invariant layer in the feature extractor. When introducing another Bayesian learning layer φ′ without the domain-invariant property into the model, as shown in the second row of Table 2, the average performance improves slightly. If we introduce both Bayesian learning and domain-invariant learning into φ′, as shown in the third row, the overall performance declines slightly. One reason might be the information loss in feature representations caused by the excessive use of domain-invariant learning. In addition, due to the Bayesian inference and Monte Carlo sampling, more variational invariant layers lead to higher memory usage and more computation, which is another reason we apply variational invariant learning only to the last feature-extraction layer and the classifier.
4.3 STATE-OF-THE-ART COMPARISON
In this section, we compare our method with several state-of-the-art methods on four datasets. The results are reported in Tables 3-5. The baselines on PACS (Table 3), Office-Home (Table 4), and rotated MNIST and Fashion-MNIST (Table 5) are all based on the same vanilla deep convolutional ResNet-18 network, without any Bayesian treatment, the same as row (a) in Table 1.
On PACS, as shown in Table 3, our variational invariant learning method achieves the best overall performance. On each domain, our performance is competitive with the state-of-the-art and we exceed all other methods on the “Cartoon” domain. On Office-Home, as shown in Table 4, we again achieve the best recognition accuracy. It is worth mentioning that on the most challenging “Art” and “Clipart” domains, our variational invariant learning also delivers the highest performance, with a good improvement over previous methods.
L2A-OT and DSON outperform the proposed model on some domains of PACS and Office-Home. L2A-OT learns a generator to synthesize data from pseudo-novel domains to augment the source domains. The pseudo-novel domains often have characteristics similar to the source data. Thus, when the target data also have characteristics similar to the source domains, this pays off as the pseudo domains are more likely to cover the target domain, such as “Product” and “Real World” in Office-Home and “Photo” in PACS. When the test domain differs from all of the training domains, the performance suffers, e.g., “Clipart” in Office-Home and “Sketch” in PACS. Our method generates domain-invariant representations and classifiers, resulting in competitive results across all domains and overall. DSON mixes batch and instance normalization for domain generalization. This tactic is effective on PACS, but less competitive on Office-Home. We attribute this to the larger number of categories on Office-Home, where instance normalization is known to make features less discriminative with respect to object categories (Seo et al., 2019). Our domain-invariant network makes feature distributions and predictive distributions similar across domains, resulting in good performance on both PACS and Office-Home.
On the Rotated MNIST and Fashion-MNIST datasets, following the experimental settings in (Piratla et al., 2020), we evaluate our method on the in-distribution and out-of-distribution sets. As shown in Table 5, our VIL achieves the best performance on both sets of the two datasets, surpassing other methods. Moreover, our method especially improves the classification performance on the out-of-distribution sets, demonstrating its strong generalizability to unseen domains, which is also consistent with the findings in Fig. 1.
5 CONCLUSION
In this work, we propose variational invariant learning (VIL), a variational Bayesian learning framework for domain generalization. We introduce Bayesian neural networks into the model, which better represent uncertainty and enhance generalization to out-of-distribution data. To handle the domain shift between source and target domains, we propose a domain-invariant principle under the variational inference framework, which is incorporated by establishing a domain-invariant feature extractor and classifier. Our variational invariant learning combines the representational power of deep neural networks and the uncertainty modeling ability of Bayesian learning, showing great effectiveness for domain generalization. Extensive ablation studies demonstrate the benefits of the Bayesian inference and the domain-invariant principle for domain generalization. Our variational invariant learning sets a new state-of-the-art on four domain generalization benchmarks.
A DERIVATION
A.1 DERIVATION OF THE UPPER BOUNDS OF DOMAIN-INVARIANT LEARNING
As $\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]$ is intractable in most cases, we derive the upper bound in Eq. (5) via Jensen's inequality:

$$
\begin{aligned}
D_{\mathrm{KL}}\big[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,\mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big]
&= \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}[\log p_\theta(\mathbf{y}_s|\mathbf{x}_s)] - \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}\big[\log \mathbb{E}_{q_\zeta}[p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big] \\
&\leq \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}[\log p_\theta(\mathbf{y}_s|\mathbf{x}_s)] - \mathbb{E}_{p_\theta(\mathbf{y}_s|\mathbf{x}_s)}\mathbb{E}_{q_\zeta}[\log p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)] \\
&= \mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big].
\end{aligned}
\tag{16}
$$

In Bayesian inference, computing the likelihood $p_\theta(\mathbf{y}|\mathbf{x}) = \int p(\mathbf{y}|\mathbf{x},\theta)\,d\theta = \mathbb{E}_{q(\theta)}[p(\mathbf{y}|\mathbf{x},\theta)]$ is notoriously difficult. Since the KL divergence is a convex function, we obtain the upper bound in Eq. (6) via Jensen's inequality, analogously to Eq. (16):

$$
\mathbb{E}_{q_\zeta}\big[D_{\mathrm{KL}}[p_\theta(\mathbf{y}_s|\mathbf{x}_s)\,\|\,p_\theta(\mathbf{y}_\zeta|\mathbf{x}_\zeta)]\big]
= \mathbb{E}_{q_\zeta}\Big[D_{\mathrm{KL}}\big[\mathbb{E}_{q(\theta)}[p(\mathbf{y}_s|\mathbf{x}_s,\theta)]\,\|\,\mathbb{E}_{q(\theta)}[p(\mathbf{y}_\zeta|\mathbf{x}_\zeta,\theta)]\big]\Big]
\leq \mathbb{E}_{q_\zeta}\Big[\mathbb{E}_{q(\theta)}\big[D_{\mathrm{KL}}[p(\mathbf{y}_s|\mathbf{x}_s,\theta)\,\|\,p(\mathbf{y}_\zeta|\mathbf{x}_\zeta,\theta)]\big]\Big].
\tag{17}
$$
A.2 DERIVATION OF VARIATIONAL BAYESIAN APPROXIMATION FOR REPRESENTATION (φ) AND CLASSIFIER (ψ) LAYERS.
We consider the model with two Bayesian layers φ and ψ as the last layer of the feature extractor and the classifier, respectively. The prior distribution of the model is p(φ,ψ), and the true posterior distribution is p(φ,ψ|x,y). Following the settings in Section 2.1, we need to learn a variational distribution q(φ,ψ) to approximate the true posterior by minimizing the KL divergence from q(φ,ψ) to p(φ,ψ|x,y):
$$
\phi^*, \psi^* = \arg\min_{\phi,\psi} D_{\mathrm{KL}}\big[q(\phi,\psi)\,\|\,p(\phi,\psi|\mathbf{x},\mathbf{y})\big]. \tag{18}
$$

By applying Bayes' rule $p(\phi,\psi|\mathbf{x},\mathbf{y}) \propto p(\mathbf{y}|\mathbf{x},\phi,\psi)\,p(\phi,\psi)$, the optimization is equivalent to minimizing:

$$
\mathcal{L}_{\mathrm{Bayes}} = \int q(\phi,\psi)\log\frac{q(\phi,\psi)}{p(\phi,\psi)\,p(\mathbf{y}|\mathbf{x},\phi,\psi)}\,d\phi\,d\psi
= D_{\mathrm{KL}}\big[q(\phi,\psi)\,\|\,p(\phi,\psi)\big] - \mathbb{E}_{q(\phi,\psi)}\big[\log p(\mathbf{y}|\mathbf{x},\phi,\psi)\big]. \tag{19}
$$

Since $\phi$ and $\psi$ are independent,

$$
\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(\mathbf{y}|\mathbf{x},\psi,\phi)] + D_{\mathrm{KL}}[q(\psi)\,\|\,p(\psi)] + D_{\mathrm{KL}}[q(\phi)\,\|\,p(\phi)]. \tag{20}
$$
B DETAILS OF DOMAIN-INVARIANT LOSS IN VIL TRAINING
We split the training phase of VIL into several episodes. In each episode, as shown in Fig. 2, we randomly choose a source domain as the meta-source domain $\mathcal{D}_s$, and the rest of the source domains $\{\mathcal{D}_t\}_{t=1}^T$ are treated as the meta-target domains. From $\mathcal{D}_s$, we randomly select a batch of samples $\mathbf{x}_s$. For each $\mathbf{x}_s$, we then select $N$ samples $\{\mathbf{x}_t^i\}_{i=1}^N$, which are in the same category as $\mathbf{x}_s$, from each of the meta-target domains $\mathcal{D}_t$. All of these samples are sent to the variational invariant feature extractor $\phi$ to obtain the representations $\mathbf{z}_s$ and $\{\mathbf{z}_t^i\}_{i=1}^N$, which are then sent to the variational invariant classifier $\psi$ to obtain the predictions $\mathbf{y}_s$ and $\{\mathbf{y}_t^i\}_{i=1}^N$. We obtain the domain-invariant loss for the feature extractor $\mathcal{L}_I(\phi)$ by computing the mean KL divergence between $\mathbf{z}_s$ and each $\mathbf{z}_t^i$ as in Eq. (11). The domain-invariant loss for the classifier $\mathcal{L}_I(\psi)$ is computed in a similar way on $\mathbf{y}_s$ and $\{\mathbf{y}_t^i\}_{i=1}^N$ as in Eq. (10).
C ABLATION STUDY FOR HYPERPARAMETERS
We also ablate the hyperparameters λφ, λψ and π on PACS with cartoon as the target domain. Results are shown in Fig. 3 (a), (b) and (c). We obtain Fig. 3 (a) by fixing λψ to 100 and adjusting λφ, Fig. 3 (b) by fixing λφ to 1 and adjusting λψ, and Fig. 3 (c) by adjusting π while fixing the other settings as in Section 4.1. λφ and λψ balance the influence of the Bayesian learning and domain-invariant learning, and their optimal values are 1 and 100. If the values are too small, the model tends to overfit to the source domains, as the performance on target data drops more markedly than on validation data. In contrast, values that are too large harm the overall performance of the model, as there is a clear decrease in accuracy on both validation and target data. Moreover, π balances the two components of the scale mixture prior of our Bayesian model. According to Blundell et al. (2015), the two components yield a prior density with a heavier tail while many weights concentrate tightly around zero. Both components are important. The performance is best when π is 0.5 according to Fig. 3 (c), which demonstrates the effectiveness of the two components in the scale mixture prior.
D DETAILED ABLATION STUDY ON PACS
In addition to the aforementioned experiments, we conduct supplementary experiments with other settings on PACS to further demonstrate the effectiveness of the Bayesian inference and the domain-invariant loss, as shown in Table 6. The evaluated components are the same as in Table 1. For easier comparison, we repeat the contents of Table 1 in Table 6 and add three other settings with IDs (i), (j) and (k). Note that as the distribution of features z is unknown without a Bayesian feature extractor φ, the settings with LI(φ) and a non-Bayesian feature extractor are intractable. Comparing (i) with (d), we find that employing Bayesian inference in the last layer of the feature extractor improves the overall performance and the classification accuracy on three of the four domains. Moreover, comparing (j) and (k) to (g) shows the benefits of introducing variational Bayes and the domain-invariant loss into the classifier on most domains and on average.
E EXTRA VISUALIZATION RESULTS
To further observe and analyze the benefits of the individual components of VIL for domain-invariant learning, we visualize the features of all categories from the target domain only in Fig. 4, and the features of only one category from all domains in Fig. 5. As in Fig. 1, the visualization is conducted on the PACS dataset and the target domain is “art-painting”. The chosen category in Fig. 5 is “horse”.
Fig. 4 provides a more intuitive view of the benefits of the Bayesian framework and domain-invariant learning in our method for enlarging the inter-class distance in the target domain. The conclusion is similar to that of Fig. 1. From the figures in the first row, it is clear that the Bayesian framework, whether in the classifier (b) or the feature extractor (c), increases the inter-class distance compared with the baseline method (a). With both of them (d), the performance improves further. Moreover, comparing the two figures in each column, the inter-class distance is also enlarged by introducing the domain-invariant principle into the setting of each figure in the first row. VIL (h) achieves the best performance by combining the benefits of both the Bayesian framework and the domain-invariant principle in both the feature extractor and the classifier.
Fig. 5 provides a deeper insight into the intra-class feature distributions of the same category from different domains. By introducing Bayesian inference into the model, the features trace the manifold of the category, as shown in the first row ((b), (c) and (d)), which makes recognition easier. Indeed, the visualization of features from multiple categories has similar properties, as shown in Fig. 1. As shown in each column, introducing domain-invariant learning into the model leads to a better mixture of features from different domains. The resulting domain-invariant representation makes the model more generalizable to unseen domains.
We also visualize the features on the rotated MNIST and Fashion-MNIST datasets, as shown in Fig. 6. Different shapes denote different categories. Red samples denote features from the in-distribution set and blue samples denote features from the out-of-distribution set. Compared with the baseline, our method reduces the intra-class distance between samples from the in-distribution set and the out-of-distribution set and better clusters the out-of-distribution samples of the same categories, especially on the rotated Fashion-MNIST dataset.
2. What are the strengths of the proposed approach, particularly in treating uncertainty?
3. What are the weaknesses of the paper, especially regarding technical novelty?
4. Do you have any questions or suggestions regarding the proposed method, such as enhancing dissimilarity between different classes across domains?
5. Are there any minor comments or suggestions regarding the paper's content, such as clarifying definitions or providing more analysis of hyperparameters? | Review | Review
Summary:
The paper proposes a Bayesian inference framework for domain generalization (DG) that deals with both the uncertainty and the domain shift. The proposed method treats the uncertainty effectively by applying variational Bayesian inference to the last two layers of the model (the feature representation and classifier layers). To make the model domain-invariant, the proposed method introduces a domain-invariant principle that makes the distributions of the model's outputs (representations) in the same class similar across domains. The paper experimentally demonstrated the effectiveness of the method with four visual datasets.
Pros:
The problem tackled by this paper is very important because accurate prediction is generally difficult in DG due to the lack of target training data and therefore considering the uncertainty is especially important.
Experimental results with four widely used datasets show the effectiveness of the method.
Cons:
The technical novelty of this method seems a bit low. This is because the method is a relatively simple combination of Bayesian modeling of neural networks and some existing techniques for DG. In previous works on DG, techniques that bring the distributions of the model's outputs in the same class closer together across domains have been proposed. For example, CCSA [1] learns domain-invariant representations by matching the representations in the same class across domains.
[1] Motiian et al., Unified deep supervised domain adaptation and generalization, ICCV2017
More analysis of hyperparameters such as λφ, λψ, and π would improve the quality of this paper.
Reasons for Scores:
Although I like the research direction of this paper as described in Pros, I have some concerns about the technical novelty of this method as described in Cons.
Minor comments:
Definition 2.1 (domain invariance) is a little confusing to me, although I can see what the method wants to do from Eq. (7). For example, what is the formal definition of the domain-transform function?
Although the loss function for domain-invariant learning (7) seems to bring the distributions of the same class across domains closer together, can the method incorporate a mechanism to enhance the dissimilarity of different classes across domains? I think that it might improve performance.
There are some ambiguous statements. In Eq. (10), are zs and zt the means of the distribution q(z|x, ϕ)?
ICLR | Title
Variational Invariant Learning for Bayesian Domain Generalization
Abstract
Domain generalization addresses the out-of-distribution problem, which is challenging due to the domain shift and the uncertainty caused by the inaccessibility to data from the target domains. In this paper, we propose variational invariant learning, a probabilistic inference framework that jointly models domain invariance and uncertainty. We introduce variational Bayesian approximation into both the feature representation and classifier layers to facilitate invariant learning for better generalization across domains. In the probabilistic modeling framework, we introduce a domain-invariant principle to explore invariance across domains in a unified way. We incorporate the principle into the variational Bayesian layers in neural networks, achieving domain-invariant representations and classifier. We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies demonstrate the benefits of our proposal and on all benchmarks our variational invariant learning consistently delivers state-of-the-art performance.
1 INTRODUCTION
Domain generalization (Muandet et al., 2013), as an out-of-distribution problem, aims to train a model on several source domains and have it generalize well to unseen target domains. The major challenge stems from the large distribution shift between the source and target domains, which is further complicated by the prediction uncertainty (Malinin & Gales, 2018) introduced by the inaccessibility to data from target domains during training. Previous approaches focus on learning domain-invariant features using novel loss functions (Muandet et al., 2013; Li et al., 2018a) or specific architectures (Li et al., 2017a; D’Innocente & Caputo, 2018). Meta-learning based methods were proposed to achieve similar goals by leveraging an episodic training strategy (Li et al., 2017b; Balaji et al., 2018; Du et al., 2020). Most of these methods are based on deep neural network backbones (Krizhevsky et al., 2012; He et al., 2016). However, while deep neural networks have achieved remarkable success in various vision tasks, their performance is known to degenerate considerably when the test samples are out of the training data distribution (Nguyen et al., 2015; Ilse et al., 2019), due to their poorly calibrated behavior (Guo et al., 2017; Kristiadi et al., 2020).
As an attractive solution, Bayesian learning naturally represents prediction uncertainty (Kristiadi et al., 2020; MacKay, 1992), possesses better generalizability to out-of-distribution examples (Louizos & Welling, 2017) and provides an elegant formulation to transfer knowledge across different datasets (Nguyen et al., 2018). Further, approximate Bayesian inference has been demonstrated to be able to improve prediction uncertainty (Blundell et al., 2015; Louizos & Welling, 2017; Atanov et al., 2019), even when only applied to the last network layer (Kristiadi et al., 2020). These properties make it appealing to introduce Bayesian learning into the challenging and unexplored scenario of domain generalization.
In this paper, we propose variational invariant learning (VIL), a Bayesian inference framework that jointly models domain invariance and uncertainty for domain generalization. We apply variational Bayesian approximation to the last two network layers for both the representations and classifier by placing prior distributions over their weights, which facilitates generalization. We adopt Bayesian neural networks to domain generalization, which enjoys the representational power of deep neural networks while facilitating better generalization. To further improve the robustness to domain shifts, we introduce the domain-invariant principle under the Bayesian inference framework, which enables
us to explore domain invariance for both feature representations and the classifier in a unified way. We evaluate our method on four widely-used benchmarks for cross-domain visual object classification. Our ablation studies demonstrate the effectiveness of the variational Bayesian domain-invariant features and classifier for domain generalization. Results further show that our method achieves the best performance on all of the four benchmarks.
2 METHODOLOGY
We explore Bayesian inference for domain generalization. In this task, the samples from the target domains are never seen during training, and are usually out of the data distribution of the source domains. This leads to uncertainty when making predictions on the target domains. Bayesian inference offers a principled way to represent the predictive uncertainty in neural networks (MacKay, 1992; Kristiadi et al., 2020). We briefly introduce approximate Bayesian inference, under which we will introduce our variational invariant learning for domain generalization.
2.1 APPROXIMATE BAYESIAN INFERENCE
Given a dataset $\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^N$ of $N$ input-output pairs and a model parameterized by weights $\theta$ with a prior distribution $p(\theta)$, Bayesian neural networks aim to infer the true posterior distribution $p(\theta|\mathbf{x},\mathbf{y})$. As the exact inference of the true posterior is computationally intractable, Hinton & Camp (1993) and Graves (2011) recommended learning a variational distribution $q(\theta)$ to approximate $p(\theta|\mathbf{x},\mathbf{y})$ by minimizing the Kullback-Leibler (KL) divergence between them:

$$
\theta^* = \arg\min_{\theta} D_{\mathrm{KL}}\big[q(\theta)\,\|\,p(\theta|\mathbf{x},\mathbf{y})\big]. \tag{1}
$$

The above optimization is equivalent to minimizing the loss function:

$$
\mathcal{L}_{\mathrm{Bayes}} = -\mathbb{E}_{q(\theta)}[\log p(\mathbf{y}|\mathbf{x},\theta)] + D_{\mathrm{KL}}[q(\theta)\,\|\,p(\theta)], \tag{2}
$$
which is also known as the negative value of the evidence lower bound (ELBO) (Blei et al., 2017).
2.2 VARIATIONAL DOMAIN-INVARIANT LEARNING
In domain generalization, let $\mathcal{D} = \{D_i\}_{i=1}^{|\mathcal{D}|} = \mathcal{S} \cup \mathcal{T}$ be a set of domains, where $\mathcal{S}$ and $\mathcal{T}$ denote source and target domains, respectively. $\mathcal{S}$ and $\mathcal{T}$ do not overlap but share the same label space. For each domain $D_i \in \mathcal{D}$, we can define a joint distribution $p(x_i, y_i)$ in the input space $\mathcal{X}$ and the output space $\mathcal{Y}$. We aim to learn a model $f: \mathcal{X} \rightarrow \mathcal{Y}$ on the source domains $\mathcal{S}$ that generalizes well to the target domains $\mathcal{T}$. The fundamental problem in domain generalization is to achieve robustness to the domain shift between source and target domains; that is, we aim to learn a model invariant to the distributional shift between them. In this work, we mainly focus on the invariance property across domains instead of exploring general invariance properties (Nalisnick & Smyth, 2018). Therefore, we introduce a formal definition of domain invariance, which is easily incorporated as a criterion into the Bayesian framework to achieve domain-invariant learning.
Provided that all domains in $\mathcal{D}$ lie in the same domain space, for any input sample $x_s$ from domain $D_s$ we assume there exists a domain-transform function $g_\zeta(\cdot)$, defined as a mapping that projects $x_s$ into a different domain $D_\zeta$ with respect to the parameter $\zeta \sim q(\zeta)$; different values of $\zeta$ lead to different post-transformation domains $D_\zeta$. The exact form of $g_\zeta(\cdot)$ is usually unknown. Under this assumption, we introduce the definition of domain invariance, which we will incorporate into the Bayesian layers of neural networks for domain-invariant learning.
Definition 2.1 (Domain Invariance) Let $x_s$ be a given sample from domain $D_s \in \mathcal{D}$, and $x_\zeta = g_\zeta(x_s)$ a transformation of $x_s$ in another domain $D_\zeta$, where $\zeta \sim q(\zeta)$. $p_\theta(y|x)$ denotes the output distribution for input $x$ under model $\theta$. The model $\theta$ is domain-invariant if,
$$p_\theta(y_s|x_s) = p_\theta(y_\zeta|x_\zeta), \quad \forall \zeta \sim q(\zeta). \quad (3)$$
Here, we use y to represent the output from a neural layer with input x, which can either be the prediction vector from the last layer or the feature vector from the convolutional layers.
To make the domain-invariant principle easier to implement, we extend Eq. (3) to an expectation form:
$$p_\theta(y_s|x_s) = \mathbb{E}_{q_\zeta}\big[p_\theta(y_\zeta|x_\zeta)\big]. \quad (4)$$
Based on this definition, we use the Kullback-Leibler divergence between the two sides of Eq. (4), $D_{KL}\big[p_\theta(y_s|x_s) \,\|\, \mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big]$, to quantify the domain invariance of the model; it is zero when the model is domain-invariant. In most cases there is no analytical form of the domain-transform function and only a few samples from $D_\zeta$ are available, which makes $\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]$ intractable. Thus, we derive the following upper bound of the divergence:
$$D_{KL}\big[p_\theta(y_s|x_s) \,\|\, \mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big] \le \mathbb{E}_{q_\zeta}\Big[D_{KL}\big[p_\theta(y_s|x_s) \,\|\, p_\theta(y_\zeta|x_\zeta)\big]\Big], \quad (5)$$
which can be approximated by Monte Carlo sampling.
We define the complete objective function of our variational invariant learning by combining Eq. (5) with Eq. (2). However, in Bayesian inference the likelihood is obtained by taking an expectation over the distribution of the parameters $\theta$, i.e., $p_\theta(y|x) = \mathbb{E}_{q(\theta)}[p(y|x,\theta)]$, which is also intractable in Eq. (5). As the KL divergence is a convex function (Nalisnick & Smyth, 2018), we further extend Eq. (5) to an upper bound:
$$\mathbb{E}_{q_\zeta}\Big[D_{KL}\big[p_\theta(y_s|x_s) \,\|\, p_\theta(y_\zeta|x_\zeta)\big]\Big] \le \mathbb{E}_{q_\zeta}\Big[\mathbb{E}_{q(\theta)} D_{KL}\big[p(y_s|x_s,\theta) \,\|\, p(y_\zeta|x_\zeta,\theta)\big]\Big], \quad (6)$$
which is tractable with an unbiased Monte Carlo approximation. The complete derivations of Eq. (5) and Eq. (6) are provided in Appendix A.
In addition, it is worth noting that the domain-transformation distribution $q(\zeta)$ is implicit and inexpressible in practice, and only a limited number of domains are available. This problem is exacerbated because the target domains are unseen during training, which further limits the number of available domains. Moreover, in most domain generalization datasets, for a given sample $x_s$ from domain $D_s$ there is no corresponding transformation of $x_s$ in the other domains. This prevents the expectation with respect to $q_\zeta$ from being directly tractable in general.
Thus, we resort to an empirically tractable implementation and adopt an episodic setting as in (Li et al., 2019). In each episode, we choose one domain from the source domains $\mathcal{S}$ as the meta-source domain $D_s$, and the rest are used as the meta-target domains $\{D_t\}_{t=1}^T$. To achieve variational invariant learning in the Bayesian framework, we use samples from the meta-target domains in the same category as $x_s$ to approximate samples of $g_\zeta(x_s)$. We then obtain a general loss function for domain-invariant learning:
$$\mathcal{L}_I = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{q(\theta)}\Big[D_{KL}\big[p(y_s|x_s,\theta) \,\|\, p(y_t^i|x_t^i,\theta)\big]\Big], \quad (7)$$
where $\{x_t^i\}_{i=1}^N$ are from $D_t$, denoting the samples in the same category as $x_s$. More details and an illustration of the domain-invariant loss function can be found in Appendix B.
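As a concrete illustration, the following is a minimal PyTorch sketch of the inner KL term of Eq. (7) for categorical predictive distributions under one Monte Carlo draw of $\theta$. The function name and tensor layout are assumptions for exposition.

```python
import torch

def domain_invariant_loss(p_s, p_t_list):
    """p_s: (C,) predictive probabilities for a meta-source sample x_s.
    p_t_list: list over the T meta-target domains, each entry an (N, C)
    tensor of probabilities for N same-class samples x_t^i."""
    loss = 0.0
    for p_t in p_t_list:                                      # average over T domains
        kl = (p_s * (p_s.log() - p_t.log())).sum(dim=-1)      # KL(p_s || p_t^i), shape (N,)
        loss = loss + kl.mean()                               # average over N samples
    return loss / len(p_t_list)
```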
With the aforementioned loss functions, we obtain the overall loss of variational invariant learning for domain generalization:
$$\mathcal{L}_{\text{VIL}} = \mathcal{L}_{\text{Bayes}} + \lambda \mathcal{L}_I. \quad (8)$$
Our variational invariant learning combines the Bayesian framework, which introduces uncertainty into the network and is beneficial for out-of-distribution problems (Daxberger & Hernández-Lobato, 2019), with a domain-invariant loss $\mathcal{L}_I$, which is designed on predictive distributions to make the model generalize better to unseen target domains. For Bayesian learning, it has been demonstrated that being just "a bit" Bayesian in the last layer of the neural network can already represent the uncertainty in predictions well (Kristiadi et al., 2020). This indicates that applying the Bayesian treatment only to the last layer already brings substantial benefits of Bayesian inference. Although adding Bayesian inference to more layers improves performance, it also increases the computational cost. Further, from the perspective of domain invariance, making both the classifier and the feature extractor more robust to domain shifts also leads to better performance (Li et al., 2019). There is therefore a trade-off between the benefits of variational Bayesian domain invariance and computational efficiency. Instead of applying the Bayesian principle to all layers of the neural network, in this work we apply it only to the classifier layer $\psi$ and the last feature-extraction layer $\phi$.
In this case, $\mathcal{L}_{\text{Bayes}}$ in Eq. (2) becomes the ELBO with respect to $\psi$ and $\phi$ jointly. As they are independent, $\mathcal{L}_{\text{Bayes}}$ is expressed as:
$$\mathcal{L}_{\text{Bayes}} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(y|x,\psi,\phi)] + D_{KL}\big[q(\psi)\,\|\,p(\psi)\big] + D_{KL}\big[q(\phi)\,\|\,p(\phi)\big]. \quad (9)$$
This variational inference objective allows us to explore domain-invariant representations and a domain-invariant classifier in a unified way. The detailed derivation of Eq. (9) is provided in Appendix A.
Domain-Invariant Classifier To establish the domain-invariant classifier, we directly incorporate the proposed domain-invariant principle into the last layer of the network, which gives rise to
$$\mathcal{L}_I(\psi) = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{q(\psi)}\Big[D_{KL}\big[p(y_s|z_s,\psi) \,\|\, p(y_t^i|z_t^i,\psi)\big]\Big], \quad (10)$$
where $z$ denotes the feature representation of input $x$, and the subscripts $s$ and $t$ indicate the meta-source and meta-target domains as in Eq. (7). Since $p(y|z,\psi)$ is a Bernoulli distribution, we can conveniently calculate the KL divergence in Eq. (10).
Domain-Invariant Representations To also make the representations domain invariant, we have
$$\mathcal{L}_I(\phi) = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{q(\phi)}\Big[D_{KL}\big[p(z_s|x_s,\phi) \,\|\, p(z_t^i|x_t^i,\phi)\big]\Big], \quad (11)$$
where $\phi$ are the parameters of the feature extractor. Since the feature extractor is also a Bayesian layer, the distribution $p(z|x,\phi)$ will be a factorized Gaussian if the posterior of $\phi$ is as well. We illustrate this as follows: let $x$ be the input feature of a Bayesian layer $\phi$ with a factorized Gaussian posterior; then the posterior of the activation $z$ of the Bayesian layer is also a factorized Gaussian (Kingma et al., 2015):
$$q(\phi_{i,j}) \sim \mathcal{N}(\mu_{i,j}, \sigma_{i,j}^2)\ \ \forall \phi_{i,j} \in \phi \;\Rightarrow\; p(z_j|x,\phi) \sim \mathcal{N}(\gamma_j, \delta_j^2), \qquad \gamma_j = \sum_{i=1}^{N} x_i \mu_{i,j}, \quad \delta_j^2 = \sum_{i=1}^{N} x_i^2 \sigma_{i,j}^2, \quad (12)$$
where $z_j$ denotes the $j$-th element of $z$, likewise for $x_i$, and $\phi_{i,j}$ denotes the element at position $(i,j)$ in $\phi$. Based on this property of the Bayesian framework, we assume that the posterior of our variational invariant feature extractor has a factorized Gaussian distribution, which leads to an easy calculation of the KL divergence in Eq. (11). Note that with the domain-invariant representations, $z$ in Eq. (10) corresponds to samples of the feature representation distributions: $z_s \sim p(z_s|x_s,\phi)$ and $z_t \sim p(z_t|x_t,\phi)$.
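The following is a minimal sketch of Eq. (12): with a factorized Gaussian posterior over the weights, the pre-activation $z$ is itself factorized Gaussian, so it can be sampled directly (the local reparameterization trick of Kingma et al., 2015). Variable names are illustrative assumptions.

```python
import torch

def sample_activation(x, mu_w, sigma_w):
    """x: (B, N) inputs; mu_w, sigma_w: (N, M) posterior mean / std of weights.
    Returns one sample of z ~ N(gamma, delta^2), plus gamma and delta^2 of Eq. (12)."""
    gamma = x @ mu_w                        # gamma_j = sum_i x_i * mu_ij
    delta2 = (x ** 2) @ (sigma_w ** 2)      # delta_j^2 = sum_i x_i^2 * sigma_ij^2
    eps = torch.randn_like(gamma)
    return gamma + eps * delta2.sqrt(), gamma, delta2
```

Because both $p(z_s|x_s,\phi)$ and $p(z_t^i|x_t^i,\phi)$ are diagonal Gaussians of this form, the KL divergence in Eq. (11) is available in closed form.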
2.3 OBJECTIVE FUNCTION
The objective function of our variational invariant learning is defined as:
$$\mathcal{L}_{\text{VIL}} = \mathcal{L}_{\text{Bayes}} + \lambda_\psi \mathcal{L}_I(\psi) + \lambda_\phi \mathcal{L}_I(\phi), \quad (13)$$
where $\lambda_\psi$ and $\lambda_\phi$ are hyperparameters controlling the domain-invariant terms. We adopt Monte Carlo sampling and obtain the empirical objective function for variational invariant learning as follows:
$$\begin{aligned} \mathcal{L}_{\text{VIL}} = \; & \frac{1}{L}\sum_{\ell=1}^{L}\frac{1}{M}\sum_{m=1}^{M}\big[-\log p(y_s|x_s,\psi^{(\ell)},\phi^{(m)})\big] + D_{KL}\big[q(\psi)\,\|\,p(\psi)\big] + D_{KL}\big[q(\phi)\,\|\,p(\phi)\big] \\ & + \lambda_\psi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{L}\sum_{\ell=1}^{L} D_{KL}\big[p(y_s|z_s,\psi^{(\ell)}) \,\|\, p(y_t^i|z_t^i,\psi^{(\ell)})\big] \\ & + \lambda_\phi \frac{1}{T}\sum_{t=1}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\sum_{m=1}^{M} D_{KL}\big[p(z_s|x_s,\phi^{(m)}) \,\|\, p(z_t^i|x_t^i,\phi^{(m)})\big], \end{aligned} \quad (14)$$
where $x_s$ and $z_s$ denote the input and its feature from $D_s$, respectively, and $x_t^i$ and $z_t^i$ are from $D_t$ as in Eq. (7). The posteriors are set to factorized Gaussian distributions, i.e., $q(\psi) = \mathcal{N}(\mu_\psi, \sigma_\psi^2)$ and $q(\phi) = \mathcal{N}(\mu_\phi, \sigma_\phi^2)$. We adopt the reparameterization trick to draw Monte Carlo samples (Kingma & Welling, 2014) as $\psi^{(\ell)} = \mu_\psi + \epsilon^{(\ell)} \ast \sigma_\psi$, where $\epsilon^{(\ell)} \sim \mathcal{N}(0, I)$. We draw the samples for $\phi^{(m)}$ in a similar way. In the implementation of our variational invariant learning, to increase the flexibility of the prior distribution in our Bayesian layers, we choose to place a scale mixture of two Gaussian distributions as the priors $p(\psi)$ and $p(\phi)$ (Blundell et al., 2015):
$$\pi \mathcal{N}(0, \sigma_1^2) + (1-\pi)\,\mathcal{N}(0, \sigma_2^2), \quad (15)$$
where $\sigma_1$, $\sigma_2$ and $\pi$ are hyperparameters chosen by cross-validation.
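A minimal sketch of the scale-mixture prior in Eq. (15) follows. Since the KL to this prior has no closed form, a single-sample Monte Carlo estimate can be used, as in Blundell et al. (2015). The hyperparameter values follow Section 4.1; the function names are assumptions.

```python
import torch
from torch.distributions import Normal

def log_mixture_prior(w, pi=0.5, sigma1=0.1, sigma2=1.5):
    # log of pi * N(w; 0, sigma1^2) + (1 - pi) * N(w; 0, sigma2^2), summed over weights
    p1 = Normal(0.0, sigma1).log_prob(w).exp()
    p2 = Normal(0.0, sigma2).log_prob(w).exp()
    return torch.log(pi * p1 + (1 - pi) * p2).sum()

def mc_kl(w_sample, mu, sigma):
    # KL[q || p] ~= log q(w) - log p(w) for one posterior sample w ~ q
    log_q = Normal(mu, sigma).log_prob(w_sample).sum()
    return log_q - log_mixture_prior(w_sample)
```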
3 RELATED WORK
One solution for domain generalization is to generate more source domain data to increase the probability of covering the data in the target domains (Shankar et al., 2018; Volpi et al., 2018). Shankar et al. (2018) augmented the data by perturbing the input images with adversarial gradients generated by an auxiliary classifier. Qiao et al. (2020) proposed a more challenging scenario of domain generalization named single domain generalization, which has only one source domain, and designed an adversarial domain augmentation method to create "fictitious" yet "challenging" data. Recently, Zhou et al. (2020) employed a generator to synthesize data from pseudo-novel domains to augment the source domains, maximizing the distance between source and pseudo-novel domains as measured by optimal transport (Peyré et al., 2019). Another solution for domain generalization is based on learning domain-invariant features (D'Innocente & Caputo, 2018; Li et al., 2018b; 2017a). Muandet et al. (2013) proposed domain-invariant component analysis to learn invariant transformations by minimizing the dissimilarity across domains. Louizos et al. (2015) learned invariant representations with a variational autoencoder (Kingma & Welling, 2014), which introduced Bayesian inference into invariant feature learning. Dou et al. (2019) and Seo et al. (2019) tried to achieve a similar goal by introducing two complementary losses, a global class alignment loss and a local sample clustering loss, and by employing multiple normalization methods. Li et al. (2019) proposed an episodic training algorithm to obtain both a domain-invariant feature extractor and classifier.
Recently, meta-learning based techniques have been considered for domain generalization. Li et al. (2018a) proposed a meta-learning domain generalization method which introduced a gradient-based method, i.e., model-agnostic meta-learning (Finn et al., 2017), for domain generalization. Balaji et al. (2018) addressed the domain generalization problem by learning a regularization function in a meta-learning framework, making the model robust to domain shifts. Du et al. (2020) proposed the meta variational information bottleneck to learn domain-invariant representations through episodic training.
To the best of our knowledge, Bayesian neural networks have not yet been explored in domain generalization. Our method introduces variational Bayesian approximation to both the feature extractor and classifier of the neural network in conjunction with the newly introduced domain-invariant principle for domain generalization. The resultant variational invariant learning combines the representational power of deep neural networks and variational Bayesian inference.
Similar to our proposal, CCSA by Motiian et al. (2017) also aligns representations across domains within the same class. Specifically, CCSA utilizes an L2 distance between deterministic features, while we exploit Bayesian neural networks to learn domain-invariant representations by minimizing the distance between domain distributions. Theoretically, minimizing the distance between distributions incorporates larger inter-class variance than minimizing the distance between deterministic features. Moreover, we apply our variational invariant learning to both the feature extractor and the classifier, while CCSA only considers an alignment loss on the feature representations.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We conduct our experiments on four widely used benchmarks in domain generalization:
PACS (Li et al., 2017a) consists of 9,991 images of seven classes from four domains: photo, art-painting, cartoon and sketch. We follow the "leave-one-out" protocol in (Li et al., 2017a; 2018b; Carlucci et al., 2019), where the model is trained on any three of the four domains, which we call source domains, and tested on the remaining (target) domain.
Office-Home (Venkateswara et al., 2017) also has four domains: art, clipart, product and real-world. There are about 15,500 images of 65 categories for object recognition in office and home environments. We use the same experimental protocol as for PACS.
Rotated MNIST and Fashion-MNIST were introduced for evaluating domain generalization in (Piratla et al., 2020). For a fair comparison, we follow their recommended settings and randomly select a subset of 2,000 images from MNIST and 10,000 images from Fashion-MNIST, which is considered to have been rotated by 0°. The subset of images is then rotated by 15° through 75° in intervals of 15°, creating five source domains. The target domains are created by rotation angles of 0° and 90°. We use these two datasets to demonstrate generalizability by comparing performance on in-distribution and out-of-distribution data.
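A minimal sketch of constructing the rotated-MNIST domains described above, assuming torchvision for loading and rotation; the variable names and data structures are illustrative, not the benchmark's reference code.

```python
import numpy as np
from torchvision import datasets
from torchvision.transforms import functional as TF

mnist = datasets.MNIST(root="data", train=True, download=True)
idx = np.random.choice(len(mnist), size=2000, replace=False)   # the 0-degree subset

domains = {}
for angle in [0, 15, 30, 45, 60, 75, 90]:
    domains[angle] = [(TF.rotate(mnist[i][0], angle), mnist[i][1]) for i in idx]

source = {a: domains[a] for a in [15, 30, 45, 60, 75]}          # five source domains
target = {a: domains[a] for a in [0, 90]}                       # held-out target domains
```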
For all four benchmarks, we use ResNet-18 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009) as the CNN backbone. During training, we use Adam optimization (Kingma & Ba, 2014) with the learning rate set to 0.0001, and train for 10,000 iterations. In each iteration we choose one source domain as the meta-source domain. The batch size is 128. To fit the memory footprint, we choose a maximum number of samples per category per meta-target domain for the domain-invariant learning: sixteen for the PACS, Rotated MNIST and Fashion-MNIST datasets, and four for the Office-Home dataset. We choose $\lambda_\phi$ and $\lambda_\psi$ based on performance on the validation set; their influence is summarized in Fig. 3 in Appendix C. The optimal values of $\lambda_\phi$ and $\lambda_\psi$ are 0.1 and 100, respectively. Parameters $\sigma_1$ and $\sigma_2$ in Eq. (15) are set to 0.1 and 1.5. The model with the highest validation accuracy is used for evaluation on the target domain. All code will be made publicly available.
4.2 ABLATION STUDY
We conduct an ablation study to investigate the effectiveness of our variational invariant learning for domain generalization. The experiments are performed on the PACS dataset. Since the major contributions of this work are the Bayesian treatment and the domain-invariant principle, we evaluate their effect by individually incorporating them into the classifier $\psi$ (the last layer) and the feature extractor $\phi$ (the penultimate layer). The results are shown in Table 1. The "X" and "×" in the "Bayesian" column denote whether the classifier $\psi$ and feature extractor $\phi$ are Bayesian or deterministic layers. In the "Invariant" column they denote whether the domain-invariant loss is introduced into the classifier and the feature extractor. Note that the predictive distribution is a Bernoulli distribution, which also admits the domain-invariant loss; we therefore include this case for a comprehensive comparison.
In Table 1, the first four rows demonstrate the benefit of the Bayesian treatment. The first row (a) serves as a baseline model, which is a vanilla deep convolutional network without any Bayesian treatment or domain-invariant loss. The backbone is also a ResNet-18 pretrained on ImageNet. It is clear that the Bayesian treatment, either for the classifier (b) or the feature extractor (c), improves performance, especially in the "Art-painting" and "Sketch" domains, and this is further demonstrated in (d), where we employ the Bayesian classifier and feature extractor simultaneously.
The benefit of the domain-invariant principle for classifiers is demonstrated by comparing (e) to (a) and (f) to (b). The settings with domain invariance consistently perform better than those without it. A similar trend is also observed when applying the domain-invariant principle to the feature extractor, as shown by comparing (g) to (c). Overall, our variational invariant learning (h) achieves the best performance compared to the other variants, demonstrating its effectiveness for domain generalization. Note that the feature distributions $p(z|x)$ are unknown without the Bayesian formalism, leading to an intractable $\mathcal{L}_I(\phi)$. Therefore, we do not conduct the experiment with only the domain-invariant loss on both the classifier and the feature extractor.
To further demonstrate the domain-invariant property of our method, we visualize the features learned by the different settings of our method in Table 1. We use t-SNE (Maaten & Hinton, 2008) to reduce the features to a two-dimensional subspace, following Du et al. (2020). The visualization is shown in Fig. 1. More visualization results are provided in Appendix E.
Figs. 1 (a), (b), (c) and (d) show the baseline, Bayesian classifier, Bayesian representations, and both Bayesian classifier and representations, demonstrating the benefits of Bayesian inference for learning domain-invariant features. The Bayesian treatment on both the classifier and the feature extractor enlarges the inter-class distance in all domains, which benefits classification performance on the target domain, as shown in Fig. 1 (d). Figs. 1 (e), (f), (g) and (h) are visualizations of feature representations after introducing the domain-invariant principle, compared to the upper row. Comparing the two figures in each column indicates that our domain-invariant principle imposed on either the representation or the classifier further enlarges the inter-class distances. At the same time, it reduces the distance between samples of the same class from different domains. This is even more apparent in the intra-class distance between samples from source and target domains. As a result, the inter-class distances in the target domain become larger, therefore improving performance. It is worth noting that the domain-invariant principle on the classifier in Fig. 1 (f) and on the feature extractor in Fig. 1 (g) both improve the domain-invariant features. Our variational invariant learning in Fig. 1 (h) therefore performs better by combining their benefits.
We also experiment with more layers in the feature extractor; see Table 2. "Bayesian $\phi'$" and "Invariant $\phi'$" denote whether the additional feature-extraction layer $\phi'$ has the Bayesian property and the domain-invariant property. The classifiers have both properties in all cases in Table 2. The first row is the setting with only one variational invariant layer in the feature extractor. When introducing another Bayesian layer $\phi'$ without the domain-invariant property into the model, as shown in the second row of Table 2, the average performance improves slightly. If we introduce both Bayesian learning and domain-invariant learning into $\phi'$, as shown in the third row, the overall performance declines slightly. One reason might be information loss in the feature representations caused by excessive use of domain-invariant learning. In addition, due to Bayesian inference and Monte Carlo sampling, more variational invariant layers lead to higher memory usage and more computation, which is another reason we apply variational invariant learning only to the last feature-extraction layer and the classifier.
4.3 STATE-OF-THE-ART COMPARISON
In this section, we compare our method with several state-of-the-art methods on four datasets. The results are reported in Tables 3-5. The baselines on PACS (Table 3), Office-Home (Table 4), and Rotated MNIST and Fashion-MNIST (Table 5) are all based on the same vanilla deep convolutional ResNet-18 network, without any Bayesian treatment, the same as row (a) in Table 1.
On PACS, as shown in Table 3, our variational invariant learning method achieves the best overall performance. On each domain, our performance is competitive with the state-of-the-art and we exceed all other methods on the “Cartoon” domain. On Office-Home, as shown in Table 4, we again achieve the best recognition accuracy. It is worth mentioning that on the most challenging “Art” and “Clipart” domains, our variational invariant learning also delivers the highest performance, with a good improvement over previous methods.
L2A-OT and DSON outperform the proposed model on some domains of PACS and Office-Home. L2A-OT learns a generator to synthesize data from pseudo-novel domains to augment the source domains. The pseudo-novel domains often have characteristics similar to the source data. Thus, when the target data also share characteristics with the source domains this pays off, as the pseudo domains are more likely to cover the target domain, such as "Product" and "Real World" in Office-Home and "Photo" in PACS. When the test domain is different from all of the training domains, the performance suffers, e.g., "Clipart" in Office-Home and "Sketch" in PACS. Our method generates domain-invariant representations and classifiers, resulting in competitive results across all domains and overall. DSON mixes batch and instance normalization for domain generalization. This tactic is effective on PACS, but less competitive on Office-Home. We attribute this to the larger number of categories in Office-Home, where instance normalization is known to make features less discriminative with respect to object categories (Seo et al., 2019). Our domain-invariant network makes feature distributions and predictive distributions similar across domains, resulting in good performance on both PACS and Office-Home.
On the Rotated MNIST and Fashion-MNIST datasets, following the experimental settings in (Piratla et al., 2020), we evaluate our method on the in-distribution and out-of-distribution sets. As shown in Table 5, our VIL achieves the best performance on both sets of the two datasets, surpassing other methods. Moreover, our method especially improves the classification performance on the out-of-distribution sets, demonstrating its strong generalizability to unseen domains, which is also consistent with the findings in Fig. 1.
5 CONCLUSION
In this work, we propose variational invariant learning (VIL), a variational Bayesian learning framework for domain generalization. We introduce Bayesian neural networks into the model, which better represent uncertainty and enhance generalization to out-of-distribution data. To handle the domain shift between source and target domains, we propose a domain-invariant principle under the variational inference framework, which is incorporated by establishing a domain-invariant feature extractor and classifier. Our variational invariant learning combines the representational power of deep neural networks with the uncertainty modeling ability of Bayesian learning, showing great effectiveness for domain generalization. Extensive ablation studies demonstrate the benefits of Bayesian inference and the domain-invariant principle for domain generalization. Our variational invariant learning sets a new state-of-the-art on four domain generalization benchmarks.
A DERIVATION
A.1 DERIVATION OF THE UPPER BOUNDS OF DOMAIN-INVARIANT LEARNING
As $\mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]$ is intractable in most cases, we derive the upper bound in Eq. (5) via Jensen's inequality:
$$\begin{aligned} D_{KL}\big[p_\theta(y_s|x_s) \,\|\, \mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big] &= \mathbb{E}_{p_\theta(y_s|x_s)}[\log p_\theta(y_s|x_s)] - \mathbb{E}_{p_\theta(y_s|x_s)}\big[\log \mathbb{E}_{q_\zeta}[p_\theta(y_\zeta|x_\zeta)]\big] \\ &\le \mathbb{E}_{p_\theta(y_s|x_s)}[\log p_\theta(y_s|x_s)] - \mathbb{E}_{p_\theta(y_s|x_s)}\mathbb{E}_{q_\zeta}[\log p_\theta(y_\zeta|x_\zeta)] \\ &= \mathbb{E}_{q_\zeta}\big[D_{KL}[p_\theta(y_s|x_s) \,\|\, p_\theta(y_\zeta|x_\zeta)]\big]. \end{aligned} \quad (16)$$
In Bayesian inference, computing the likelihood $p_\theta(y|x) = \int p(y|x,\theta)\,q(\theta)\,d\theta = \mathbb{E}_{q(\theta)}[p(y|x,\theta)]$ is notoriously difficult. Thus, since the KL divergence is a convex function, we obtain the upper bound in Eq. (6) via Jensen's inequality, similar to Eq. (16):
$$\begin{aligned} \mathbb{E}_{q_\zeta}\big[D_{KL}[p_\theta(y_s|x_s) \,\|\, p_\theta(y_\zeta|x_\zeta)]\big] &= \mathbb{E}_{q_\zeta}\Big[D_{KL}\big[\mathbb{E}_{q(\theta)}[p(y_s|x_s,\theta)] \,\|\, \mathbb{E}_{q(\theta)}[p(y_\zeta|x_\zeta,\theta)]\big]\Big] \\ &\le \mathbb{E}_{q_\zeta}\Big[\mathbb{E}_{q(\theta)} D_{KL}\big[p(y_s|x_s,\theta) \,\|\, p(y_\zeta|x_\zeta,\theta)\big]\Big]. \end{aligned} \quad (17)$$
A.2 DERIVATION OF VARIATIONAL BAYESIAN APPROXIMATION FOR REPRESENTATION (φ) AND CLASSIFIER (ψ) LAYERS.
We consider the model with two Bayesian layers, $\phi$ and $\psi$, as the last layer of the feature extractor and the classifier, respectively. The prior distribution of the model is $p(\phi,\psi)$, and the true posterior distribution is $p(\phi,\psi|x,y)$. Following the setting in Section 2.1, we learn a variational distribution $q(\phi,\psi)$ to approximate the true posterior by minimizing the KL divergence from $q(\phi,\psi)$ to $p(\phi,\psi|x,y)$:
$$\phi^*, \psi^* = \arg\min_{\phi,\psi} D_{KL}\big[q(\phi,\psi) \,\|\, p(\phi,\psi|x,y)\big]. \quad (18)$$
By applying Bayes' rule, $p(\phi,\psi|x,y) \propto p(y|x,\phi,\psi)\,p(\phi,\psi)$, the optimization is equivalent to minimizing:
$$\mathcal{L}_{\text{Bayes}} = \int q(\phi,\psi) \log \frac{q(\phi,\psi)}{p(\phi,\psi)\,p(y|x,\phi,\psi)}\, d\phi\, d\psi = D_{KL}\big[q(\phi,\psi) \,\|\, p(\phi,\psi)\big] - \mathbb{E}_{q(\phi,\psi)}\big[\log p(y|x,\phi,\psi)\big]. \quad (19)$$
Since $\phi$ and $\psi$ are independent,
$$\mathcal{L}_{\text{Bayes}} = -\mathbb{E}_{q(\psi)}\mathbb{E}_{q(\phi)}[\log p(y|x,\psi,\phi)] + D_{KL}\big[q(\psi)\,\|\,p(\psi)\big] + D_{KL}\big[q(\phi)\,\|\,p(\phi)\big]. \quad (20)$$
B DETAILS OF DOMAIN-INVARIANT LOSS IN VIL TRAINING
We split the training phase of VIL into several episodes. In each episode, as shown in Fig. 2, we randomly choose a source domain as the meta-source domain $D_s$, and the remaining source domains $\{D_t\}_{t=1}^T$ are treated as the meta-target domains. From $D_s$, we randomly select a batch of samples $x_s$. For each $x_s$, we then select $N$ samples $\{x_t^i\}_{i=1}^N$, which are in the same category as $x_s$, from each of the meta-target domains $D_t$. All of these samples are sent to the variational invariant feature extractor $\phi$ to obtain the representations $z_s$ and $\{z_t^i\}_{i=1}^N$, which are then sent to the variational invariant classifier $\psi$ to obtain the predictions $y_s$ and $\{y_t^i\}_{i=1}^N$. We obtain the domain-invariant loss for the feature extractor, $\mathcal{L}_I(\phi)$, by calculating the mean of the KL divergences between $z_s$ and each $z_t^i$ as in Eq. (11). The domain-invariant loss for the classifier, $\mathcal{L}_I(\psi)$, is calculated in a similar way on $y_s$ and $\{y_t^i\}_{i=1}^N$ as in Eq. (10).
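A minimal sketch of one such training episode follows: pick a meta-source domain, draw a sample, and fetch $N$ same-class samples from each meta-target domain. The data-structure names (`domains`, `labels_index`) are illustrative assumptions; $N = 16$ follows the PACS setting in Section 4.1.

```python
import random

def sample_episode(domains, labels_index, N=16):
    """domains: list of per-domain datasets of (x, y) pairs; labels_index[d][y]
    lists the indices of class-y samples in domain d."""
    s = random.randrange(len(domains))                      # meta-source domain
    targets = [d for d in range(len(domains)) if d != s]    # meta-target domains
    x_s, y_s = random.choice(domains[s])
    matched = {t: [domains[t][i] for i in random.sample(labels_index[t][y_s], N)]
               for t in targets}                            # N same-class samples each
    return (x_s, y_s), matched
```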
C ABLATION STUDY FOR HYPERPARAMETERS
We also ablate the hyperparameters $\lambda_\phi$, $\lambda_\psi$ and $\pi$ on PACS with cartoon as the target domain. Results are shown in Fig. 3 (a), (b) and (c). We obtain Fig. 3 (a) by fixing $\lambda_\psi$ at 100 and adjusting $\lambda_\phi$, Fig. 3 (b) by fixing $\lambda_\phi$ at 1 and adjusting $\lambda_\psi$, and Fig. 3 (c) by adjusting $\pi$ while fixing the other settings as in Section 4.1. $\lambda_\phi$ and $\lambda_\psi$ balance the influence of Bayesian learning and domain-invariant learning, and their optimal values are 1 and 100. If the values are too small, the model tends to overfit to the source domains, as the performance on target data drops more noticeably than on validation data. In contrast, too-large values harm the overall performance of the model, as there is an obvious decrease in accuracy on both validation and target data. Moreover, $\pi$ balances the two components of the scale-mixture prior of our Bayesian model. According to Blundell et al. (2015), the two components yield a prior density with a heavier tail while many weights concentrate tightly around zero; both properties matter. The performance is best when $\pi$ is 0.5 according to Fig. 3 (c), which demonstrates the effectiveness of the two components in the scale-mixture prior.
D DETAILED ABLATION STUDY ON PACS
In addition to the aforementioned experiments, we conduct supplementary experiments with other settings on PACS to further demonstrate the effectiveness of Bayesian inference and the domain-invariant loss, as shown in Table 6. The evaluated components are the same as in Table 1. For easier comparison, we repeat the contents of Table 1 in Table 6 and add three other settings with IDs (i), (j) and (k). Note that as the distribution of the features $z$ is unknown without a Bayesian feature extractor $\phi$, the settings with $\mathcal{L}_I(\phi)$ and a non-Bayesian feature extractor are intractable. Comparing (i) with (d), we find that employing Bayesian inference in the last layer of the feature extractor improves the overall performance and the classification accuracy on three of the four domains. Moreover, comparing (j) and (k) to (g) shows the benefits of introducing variational Bayes and the domain-invariant loss into the classifier on most of the domains and on average.
E EXTRA VISUALIZATION RESULTS
To further observe and analyze the benefits of the individual components of VIL for domain-invariant learning, we visualize the features of all categories from only the target domain in Fig. 4, and the features of a single category from all domains in Fig. 5. As in Fig. 1, the visualization is conducted on the PACS dataset and the target domain is "art-painting". The chosen category in Fig. 5 is "horse".
Fig. 4 provides a more intuitive view of the benefits of the Bayesian framework and domain-invariant learning in our method for enlarging the inter-class distance in the target domain. The conclusion is similar to that of Fig. 1. From the figures in the first row, it is clear that the Bayesian framework, whether in the classifier ((b)) or the feature extractor ((c)), increases the inter-class distance compared with the baseline method ((a)). With both of them ((d)), the performance becomes better. Further, comparing the two figures in each column, the inter-class distance is also enlarged by introducing the domain-invariant principle into the setting of each figure in the first row. VIL ((h)) achieves the best performance by combining the benefits of both the Bayesian framework and the domain-invariant principle in both the feature extractor and the classifier.
Fig. 5 provides a deeper insight into the intra-class feature distributions of the same category from different domains. By introducing Bayesian inference into the model, the features reveal the manifold of the category, as shown in the first row ((b), (c) and (d)). This makes recognition easier. Indeed, the visualization of features from multiple categories has similar properties, as shown in Fig. 1. As shown in each column, introducing domain-invariant learning into the model leads to a better mixture of features from different domains. The resulting domain-invariant representation makes the model more generalizable to unseen domains.
We also visualize the features on the Rotated MNIST and Fashion-MNIST datasets, as shown in Fig. 6. Different shapes denote different categories. Red samples denote features from the in-distribution set and blue samples denote features from the out-of-distribution set. Compared with the baseline, our method reduces the intra-class distance between samples from the in-distribution set and the out-of-distribution set and clusters the out-of-distribution samples of the same categories better, especially on the Rotated Fashion-MNIST dataset. | 1. What is the main contribution of the paper in the field of probabilistic inference?
2. What are the strengths of the proposed approach, particularly in its ability to facilitate domain generalization and invariance?
3. Do you have any concerns regarding the paper's claims and comparisons with other works in the field?
4. How does the reviewer assess the clarity and quality of the paper's content? | Review | Review
Summary: The paper proposes variational invariant learning (VIL) as a framework for probabilistic inference that jointly models domain invariance and uncertainty. The approach exploits variational Bayesian approximation in both feature encoding and classifier layers in order to facilitate domain generalization and invariance. The paper evaluates VIL on benchmarks for cross-domain visual object classification.
Positives: The paper presents their methodology and definitions very clearly. Comparisons to state-of-the-art methods are well presented. The ablation study is very clear to provide a thorough understanding of the approach. Overall I like this paper.
Concerns: The authors state: "the Bayesian approach has not yet been explored in domain generalization" and "this is the first work to adopt variational Bayes to domain generalization". I do not strongly agree with these statements in their current form. There has been previous work on extending variational autoencoding frameworks for domain generalization from several perspectives (e.g., adversarial inference). These variational-inference-based models ideally exploit a Bayesian approach for domain generalization as well. I would expect the authors to rephrase their claims more specifically to make this distinction. Regarding this concern, one example to consider is "The Variational Fair Autoencoder" (Louizos et al., ICLR 2016), an approach for invariant representation learning with variational autoencoders for domain generalization.
ICLR | Title
Causal Estimation for Text Data with (Apparent) Overlap Violations
Abstract
Consider the problem of estimating the causal effect of some attribute of a text document; for example: what effect does writing a polite vs. rude email have on response time? To estimate a causal effect from observational data, we need to adjust for confounding aspects of the text that affect both the treatment and outcome—e.g., the topic or writing level of the text. These confounding aspects are unknown a priori, so it seems natural to adjust for the entirety of the text (e.g., using a transformer). However, causal identification and estimation procedures rely on the assumption of overlap: for all levels of the adjustment variables, there is randomness leftover so that every unit could have (not) received treatment. Since the treatment here is itself an attribute of the text, it is perfectly determined, and overlap is apparently violated. The purpose of this paper is to show how to handle causal identification and obtain robust causal estimation in the presence of apparent overlap violations. In brief, the idea is to use supervised representation learning to produce a data representation that preserves confounding information while eliminating information that is only predictive of the treatment. This representation then suffices for adjustment and satisfies overlap. Adapting results on non-parametric estimation, we find that this procedure is robust to conditional outcome misestimation, yielding a low-absolute-bias estimator with valid uncertainty quantification under weak conditions. Empirical results show strong improvements in bias and uncertainty quantification relative to the natural baseline. Code, demo data and a tutorial are available at https://github.com/gl-ybnbxb/TI-estimator.
1 INTRODUCTION
We consider the problem of estimating the causal effect of an attribute of a passage of text on some downstream outcome. For example, what is the effect of writing a polite or rude email on the amount of time it takes to get a response? In principle, we might hope to answer such questions with a randomized experiment. However, this can be difficult in practice—e.g., if poor outcomes are costly or take long to gather. Accordingly, in this paper, we will be interested in estimating such effects using observational data.
There are three steps to estimating causal effects using observational data (see Chapter 36 of Murphy (2023)). First, we need to specify a concrete causal quantity as our estimand; that is, a formal target of estimation corresponding to the high-level question of interest. The next step is causal identification: we need to show that this causal estimand can, in principle, be estimated using only observational data. The standard approach for identification relies on adjusting for confounding variables that affect both the treatment and the outcome. For identification to hold, our adjustment variables must satisfy two conditions: unconfoundedness and overlap. The former requires that the adjustment variables contain sufficient information on all common causes. The latter requires that the adjustment variables do not contain enough information about treatment assignment to let us perfectly predict it. Intuitively, to disentangle the effect of treatment from the effect of confounding, we must observe each treatment state at all levels of confounding. The final step is estimation using a finite data sample. Here, overlap also turns out to be critically important as a major determinant of the best possible accuracy (asymptotic variance) of the estimator (Chernozhukov et al., 2016).
Since the treatment is a linguistic property, it is often reasonable to assume that the text contains information about all common causes of the treatment and the outcome. Thus, we may aim to satisfy unconfoundedness in the text setting by adjusting for all of the text as the confounding part. However, doing so brings about an overlap violation: since the treatment is a linguistic property determined by the text, the probability of treatment given any text is either 0 or 1. The polite/rude tone is determined by the text itself. Therefore, overlap does not hold if we naively adjust for all of the text as the confounding part. This problem is the main subject of this paper. More precisely, our goal is to find a causal estimand, causal identification conditions, and a robust estimation procedure that together allow us to effectively estimate causal effects even in the presence of such (apparent) overlap violations.
In fact, there is an obvious first approach: simply use a standard plug-in estimation procedure that relies only on modeling the outcome from the text and treatment variables. In particular, do not make any explicit use of the propensity score, the probability that each unit is treated. Pryzant et al. (2020) use an approach of this kind and show it is reasonable in some situations. Indeed, we will see in Sections 3 and 4 that this procedure can be interpreted as a point estimator of a controlled causal effect. Even once we understand what the implied causal estimand is, this approach has a major drawback: the estimator is only accurate when the text-outcome model converges at a very fast rate. This is particularly an issue in the text setting, where we would like to use large, flexible, deep learning models for this relationship. In practice, we find that this procedure works poorly: the estimator has significant absolute bias, and the natural approach to uncertainty quantification almost never covers the true value of the estimand; see Section 5.
The contribution of this paper is a method for robustly estimating causal effects in text. The main idea is to break estimation into a two-stage procedure, where in the first stage we learn a representation of the text that preserves enough information to account for confounding, but throws away enough information to avoid overlap issues. Then, we use this representation as the adjustment variables in a standard double machine-learning estimation procedure (Chernozhukov et al., 2016; 2017a). To establish this method, the contributions of this paper are:
1. We give a formal causal estimand corresponding to the text-attribute question. We show this estimand is causally identified under weak conditions, even in the presence of apparent overlap issues.
2. We show how to efficiently estimate this quantity using the adapted double-ML technique just described. We show that this estimator admits a central limit theorem at a fast ($\sqrt{n}$) rate under weak conditions on the rate at which the ML model learns the text-outcome relationship (namely, convergence at an $n^{1/4}$ rate). This implies that absolute bias decreases rapidly, and gives an (asymptotically) valid procedure for uncertainty quantification.
3. We test the performance of this procedure empirically, finding significant improvements in bias and uncertainty quantification relative to the outcome-model-only baseline.
Related work The most related literature is on causal inference with text variables. Papers include treating text as treatment (Pryzant et al., 2020; Wood-Doughty et al., 2018; Egami et al., 2018; Fong & Grimmer, 2016; Wang & Culotta, 2019; Tan et al., 2014), as outcome (Egami et al., 2018; Sridhar & Getoor, 2019), as confounder (Veitch et al., 2019; Roberts et al., 2020; Mozer et al., 2020; Keith et al., 2020), and discovering or predicting causality from text (del Prado Martin & Brendel, 2016; Tabari et al., 2018; Balashankar et al., 2019; Mani & Cooper, 2000). There are also numerous applications using text to adjust for confounding (e.g., Olteanu et al., 2017; Hall, 2017; Kiciman et al., 2018; Sridhar et al., 2018; Sridhar & Getoor, 2019; Saha et al., 2019; Karell & Freedman, 2019; Zhang et al., 2020). Of these, Pryzant et al. (2020) also address non-parametric estimation of the causal effect of text attributes. Their focus is primarily on mismeasurement of the treatments, while our motivation is robust estimation.
This paper also relates to work on causal estimation with (near) overlap violations. D'Amour et al. (2021) point out that high-dimensional adjustment (e.g., Rassen et al., 2011; Louizos et al., 2017; Li et al., 2016; Athey et al., 2017) suffers from overlap issues. Extra assumptions such as sparsity are often needed to meet the overlap condition. These results do not directly apply here because we assume there exists a low-dimensional summary that suffices to handle confounding.
D'Amour & Franks (2021) study summary statistics that suffice for identification, which they call deconfounding scores. The supervised representation learning approach in this paper can be viewed as an extremal case of the deconfounding score. However, they consider the case where ordinary overlap holds with all observed features, with the aim of using both the outcome model and the propensity score to find efficient statistical estimation procedures (in a linear-Gaussian setting). This does not make sense in the setting we consider. Additionally, our main statistical result (robustness to outcome-model estimation) is new.
2 NOTATION AND PROBLEM SETUP
We follow the causal setup of Pryzant et al. (2020). We are interested in estimating the causal effect of treatment $A$ on outcome $Y$. For example, how does writing a negative sentiment ($A$) review ($X$) affect product sales ($Y$)? There are two immediate challenges to estimating such effects with observed text data. First, we do not actually observe $A$, which is the intent of the writer. Instead, we only observe $\tilde{A}$, a version of $A$ that is inferred from the text itself. In this paper, we will assume that $A = \tilde{A}$ almost surely—e.g., a reader can always tell if a review was meant to be negative or positive. This assumption is often reasonable, and follows Pryzant et al. (2020). The next challenge is that the treatment may be correlated with other aspects of the text ($Z$) that are also relevant to the outcome—e.g., the product category of the item being reviewed. Such $Z$ can act as confounding variables, and must somehow be adjusted for in a causal estimation problem.
Each unit $(A_i, Z_i, X_i, Y_i)$ is drawn independently and identically from an unknown distribution $P$. Figure 1 shows the causal relationships among the variables, where solid arrows represent causal relations and the dotted line represents possible correlation between two variables. We assume that the text $X$ contains all common causes of $\tilde{A}$ and the outcome $Y$.
3 IDENTIFICATION AND CAUSAL ESTIMAND
The first task is to translate the qualitative causal question of interest—what is the effect of $A$ on $Y$—into a causal estimand. This estimand must both be faithful to the qualitative question and be identifiable from observational data under reasonable assumptions. The key challenges here are that we only observe $\tilde{A}$ (not $A$ itself), there are unknown confounding variables influencing the text, and $\tilde{A}$ is a deterministic function of the text, leading to overlap violations if we naively adjust for all the text. Our high-level idea is to split the text into abstract (unknown) parts depending on whether they are confounding—affecting both $\tilde{A}$ and $Y$—or whether they affect $\tilde{A}$ alone. The part of the text that affects only $\tilde{A}$ is not necessary for causal adjustment, and can be thrown away. If this part contains "enough" information about $\tilde{A}$, then throwing it away can eliminate our ability to perfectly predict $\tilde{A}$, thus fixing the overlap issue. We now turn to formalizing this idea, showing how it can be used to define an estimand and to identify this estimand from observational data.
Causal model The first idea is to decompose the text into three parts: one part affected only by $A$, one part affected interactively by $A$ and $Z$, and another part affected only by $Z$. We use $X_A$, $X_{A\wedge Z}$ and $X_Z$ to denote them, respectively; see Figure 2 for the corresponding causal model. Note that there could be additional information in the text beyond these three parts; however, since it is irrelevant to both $A$ and $Z$, we do not need to consider it in the model.
Controlled direct effect (CDE) The treatment $A$ affects the outcome through two paths: both "directly" through $X_A$—the part of the text determined just by the treatment—and also through a path going through $X_{A\wedge Z}$—the part of the text that relies on interaction effects with other factors. Our formal causal effect aims at capturing the effect of $A$ through only the first, direct, path.
$$\mathrm{CDE} := \mathbb{E}_{X_{A\wedge Z}, X_Z \mid A=1}\big[\mathbb{E}[Y \mid X_{A\wedge Z}, X_Z, \mathrm{do}(A=1)] - \mathbb{E}[Y \mid X_{A\wedge Z}, X_Z, \mathrm{do}(A=0)]\big]. \quad (3.1)$$
Here, do is Pearl’s do notation, and the estimand is a variant of the controlled direct effect (Pearl, 2009). Intuitively, it can be interpreted as the expected change in the outcome induced by changing the treatment from 1 to 0 while keeping part of the text affected by Z the same as it would have been had we set A = 1. This is a reasonable formalization of the qualitative “effect of A on Y ”. Of course, it is not the only possible formalization. Its advantage is that, as we will see, it can be identified and estimated under reasonable conditions.
Identification To identify CDE we must rewrite the expression in terms of observable quantities. There are three challenges: we need to get rid of the do operator, we don't observe $A$ (only $\tilde{A}$), and the variables $X_{A\wedge Z}, X_Z$ are unknown (they are latent parts of $X$). Informally, the identification argument is as follows. First, $X_{A\wedge Z}, X_Z$ block all backdoor paths (common causes) in Figure 2. Moreover, because we have thrown away $X_A$, we now satisfy overlap. Accordingly, the do operator can be replaced by conditioning, following the usual causal-adjustment argument. Next, $A = \tilde{A}$ almost surely, so we can simply replace $A$ with $\tilde{A}$. Now, our estimand has been reduced to:
$$\widetilde{\mathrm{CDE}} := \mathbb{E}_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[\mathbb{E}[Y \mid X_{A\wedge Z}, X_Z, \tilde{A}=1] - \mathbb{E}[Y \mid X_{A\wedge Z}, X_Z, \tilde{A}=0]\big]. \quad (3.2)$$
The final step is to deal with the unknown $X_{A\wedge Z}, X_Z$. To fix this issue, we first define the conditional outcome $Q$ according to:
$$Q(\tilde{A}, X) := \mathbb{E}(Y \mid \tilde{A}, X). \quad (3.3)$$
A key insight here is that, subject to the causal model in Figure 2, we have $Q(\tilde{A}, X) = \mathbb{E}(Y \mid \tilde{A}, X_{A\wedge Z}, X_Z)$. But this is exactly the quantity in (3.2). Moreover, $Q(\tilde{A}, X)$ is an observable data quantity (it depends only on the distribution of the observed variables). In summary:
Theorem 1. Assume the following:
1. (Causal structure) The causal relationships among $A$, $\tilde{A}$, $Z$, $Y$, and $X$ satisfy the causal DAG in Figure 2;
2. (Overlap) $0 < P(A=1 \mid X_{A\wedge Z}, X_Z) < 1$;
3. (Intention equals perception) $A = \tilde{A}$ almost surely with respect to all interventional distributions.
Then, the CDE is identified from observational data as
$$\mathrm{CDE} = \tau^{\mathrm{CDE}} := \mathbb{E}_{X \mid \tilde{A}=1}\big[\mathbb{E}[Y \mid \eta(X), \tilde{A}=1] - \mathbb{E}[Y \mid \eta(X), \tilde{A}=0]\big], \quad (3.4)$$
where $\eta(X) := (Q(0, X), Q(1, X))$.
The proof is in Appendix B.
We give the result in terms of an abstract sufficient statistic $\eta(X)$ to emphasize that the actual conditional-expectation model is not required, only some statistic that is informationally equivalent. We emphasize that, regardless of whether the overlap condition holds, the propensity score given $\eta(X)$ is accessible and meaningful. Therefore, we can easily detect when identification fails, as long as $\eta(X)$ is well estimated.
4 METHOD
Our ultimate goal is to draw a conclusion about whether the treatment has a causal effect on the outcome. Following the previous section, we have reduced this problem to estimating τCDE, defined in Theorem 1. The task now is to develop an estimation procedure, including uncertainty quantification.
4.1 OUTCOME ONLY ESTIMATOR
We start by introducing the naive outcome-only estimator as a first approach to CDE estimation. The estimator is adapted from Pryzant et al. (2020). The observation here is that, taking $\eta(X) = (Q(0, X), Q(1, X))$ in (3.4), we have
$$\tau^{\mathrm{CDE}} = \mathbb{E}_{X \mid A=1}\big[\mathbb{E}(Y \mid A=1, X) - \mathbb{E}(Y \mid A=0, X)\big]. \quad (4.1)$$
Since $Q(A, X)$ is a function of the whole text $X$, it is estimable from observational data. Namely, it is the minimizer of the squared-error risk:
$$Q = \arg\min_{\tilde{Q}} \mathbb{E}\big[(Y - \tilde{Q}(A, X))^2\big]. \quad (4.2)$$
With a finite sample, we can estimate $Q$ as $\hat{Q}$ by fitting a machine-learning model to minimize the (possibly regularized) empirical squared-error risk. That is, fit a model using mean squared error as the objective function. Then, a straightforward estimator is:
$$\hat{\tau}_Q := \frac{1}{n_1}\sum_{i: A_i=1}\big[\hat{Q}_1(X_i) - \hat{Q}_0(X_i)\big], \quad (4.3)$$
where $n_1$ is the number of treated units.
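A minimal numpy sketch of Eq. (4.3), assuming arrays of treatments and fitted conditional-outcome predictions; the function name is an assumption.

```python
import numpy as np

def tau_q(A, Q0_hat, Q1_hat):
    """A: (n,) binary treatments; Q0_hat, Q1_hat: (n,) predictions of Q(0, x), Q(1, x)."""
    treated = (A == 1)
    return np.mean(Q1_hat[treated] - Q0_hat[treated])   # average over treated units
```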
It should be noted that the model for $Q$ is not arbitrary. One significant issue with models that directly regress $Y$ on $A$ and $X$ is that, when overlap does not hold, the model can ignore $A$ and use only $X$ as the covariate. As a result, we need to choose a class of models that forces the use of the treatment $A$. To address this, we use a two-headed model that regresses $Y$ on $X$ for $A = 0/1$ separately in the conditional outcome learning model (see Section 4.2 and Figure 3).
As discussed in Section 1, this estimator yields a consistent point estimate, but does not offer a simple approach to uncertainty quantification. A natural guess for an estimate of its variance is:
$$\widehat{\mathrm{var}}(\hat{\tau}_Q) := \frac{1}{n}\,\widehat{\mathrm{var}}\big(\hat{Q}_1(X_i) - \hat{Q}_0(X_i) \mid \hat{Q}\big). \quad (4.4)$$
That is, just compute the variance of the mean conditional on the fitted model. However, this procedure yields asymptotically valid confidence intervals only if the outcome model converges extremely quickly; i.e., if $\mathbb{E}[(\hat{Q}-Q)^2]^{1/2} = o(n^{-1/2})$. We could instead bootstrap, refitting $\hat{Q}$ on each bootstrap sample. However, with modern language models, this can be prohibitively computationally expensive.
4.2 TREATMENT IGNORANT EFFECT ESTIMATION (TI-ESTIMATOR)
Following Theorem 1, it suffices to adjust for $\eta(X) = (Q(0, X), Q(1, X))$. Accordingly, we use the following pipeline. We first estimate $\hat{Q}_0(X)$ and $\hat{Q}_1(X)$ (using a neural language model), as with the outcome-only estimator. Then, we take $\hat{\eta}(X) := (\hat{Q}_0(X), \hat{Q}_1(X))$ and estimate $\hat{g}_\eta \approx P(A=1 \mid \hat{\eta})$. That is, we estimate the propensity score corresponding to the estimated representation. Finally, we plug the estimated $\hat{Q}$ and $\hat{g}_\eta$ into a standard double machine learning estimator (Chernozhukov et al., 2016).
We describe the three steps in detail.
Q-Net In the first stage, we estimate the conditional outcomes and hence obtain the estimated two-dimensional confounding vector $\hat{\eta}(X)$. For concreteness, we will use the dragonnet architecture of Shi et al. (2019). Specifically, we train DistilBERT (Sanh et al., 2019) modified to include three heads, as shown in Figure 3. Two of the heads correspond to $\hat{Q}_0(X)$ and $\hat{Q}_1(X)$, respectively. As discussed in Section 4.1, applying two heads forces the model to use the treatment $A$. The final head is a single linear layer predicting the treatment. This propensity-score prediction head helps prevent (implicit) regularization of the model from throwing away $X_{A\wedge Z}$ information that is necessary for identification. The output of this head is not used for estimation, since its purpose is only to force the DistilBERT representation to preserve all confounding information. This has been shown to improve causal estimation (Shi et al., 2019; Veitch et al., 2019).
We train the model by minimizing the objective function
$$\mathcal{L}(\theta; \mathbf{X}) = \frac{1}{n}\sum_i \Big[\big(\hat{Q}_{a_i}(x_i;\theta) - y_i\big)^2 + \alpha\,\mathrm{CrossEntropy}\big(a_i, g_u(x_i)\big) + \beta\,\mathcal{L}_{\mathrm{mlm}}(x_i)\Big], \quad (4.5)$$
where $\theta$ are the model parameters, $\alpha, \beta$ are hyperparameters and $\mathcal{L}_{\mathrm{mlm}}(\cdot)$ is the masked language modeling objective of DistilBERT.
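A minimal PyTorch sketch of the three-headed Q-Net follows: a DistilBERT encoder with two conditional-outcome heads and a propensity head trained with Eq. (4.5). The head sizes, pooling choice, and class name are illustrative assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
        d = self.encoder.config.hidden_size
        self.q0 = nn.Sequential(nn.Linear(d, 200), nn.ReLU(), nn.Linear(200, 1))
        self.q1 = nn.Sequential(nn.Linear(d, 200), nn.ReLU(), nn.Linear(200, 1))
        self.g = nn.Linear(d, 1)                 # propensity head, used only in training

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask)
        rep = h.last_hidden_state[:, 0]          # representation at the [CLS] position
        return self.q0(rep), self.q1(rep), torch.sigmoid(self.g(rep))
```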
There is a final nuance. In practice, we split the data into $K$ folds. For each fold $j$, we train a model $\hat{Q}^{-j}$ on the other $K-1$ folds. Then, we make predictions for the data points in fold $j$ using $\hat{Q}^{-j}$. Slightly abusing notation, we use $\hat{Q}_a(x)$ to denote the predictions obtained in this manner.
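A minimal sketch of this cross-fitting step is below; `fit_qnet` and `predict` stand in for the training and inference routines and are assumptions for exposition.

```python
import numpy as np
from sklearn.model_selection import KFold

def crossfit_q(texts, A, Y, K=5):
    """Return out-of-fold predictions Q0_hat, Q1_hat for every unit."""
    n = len(texts)
    Q0_hat, Q1_hat = np.zeros(n), np.zeros(n)
    for train_idx, test_idx in KFold(n_splits=K, shuffle=True).split(np.arange(n)):
        model = fit_qnet([texts[i] for i in train_idx], A[train_idx], Y[train_idx])
        Q0_hat[test_idx], Q1_hat[test_idx] = predict(model, [texts[i] for i in test_idx])
    return Q0_hat, Q1_hat
```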
Propensity score estimation Next, we define $\hat{\eta}(x) := (\hat{Q}_0(x), \hat{Q}_1(x))$ and estimate the propensity score $\hat{g}_\eta(x) \approx P(A=1 \mid \hat{\eta}(x))$. To do this, we fit a nonparametric estimator to the binary classification task of predicting $A$ from $\hat{\eta}(X)$, again in a cross-fitting ($K$-fold) fashion. The important insight here is that since $\hat{\eta}(X)$ is 2-dimensional, non-parametric estimation is possible at a fast rate. In Section 5, we try several methods and find that kernel regression usually works well.
We also define $g_\eta(X) := P(A=1 \mid \eta(X))$ as the idealized propensity score. The idea is that as $\hat{\eta} \to \eta$, we will also have $\hat{g}_\eta \to g_\eta$, so long as we have a valid non-parametric estimator.
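A minimal sketch of this second stage, using the Gaussian process classifier and kernel from the experimental protocol in Section 5.1; the cross-fitting of the propensity model is omitted here for brevity, so this trains and predicts on the same data purely for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

def fit_propensity(Q0_hat, Q1_hat, A):
    eta = np.column_stack([Q0_hat, Q1_hat])          # 2-d representation eta_hat(x)
    gpc = GaussianProcessClassifier(kernel=DotProduct() + WhiteKernel())
    gpc.fit(eta, A)
    return gpc.predict_proba(eta)[:, 1]              # g_hat(x) ~ P(A = 1 | eta_hat(x))
```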
CDE estimation The final stage is to combine the estimated outcome model and propensity score into a CDE estimator. To that end, we define the influence curve of $\tau^{\mathrm{CDE}}$ as follows:
$$\varphi(X; Q, g_\eta, \tau^{\mathrm{CDE}}) := \frac{A\,(Y - Q(0, X))}{p} - \frac{g_\eta(X)}{p\,(1 - g_\eta(X))}\,(1-A)\,(Y - Q(0, X)) - \frac{A\,\tau^{\mathrm{CDE}}}{p}, \quad (4.6)$$
where $p = P(A=1)$. Then, the standard double machine learning estimator of $\tau^{\mathrm{CDE}}$ (Chernozhukov et al., 2016), and the $\alpha$-level confidence interval of this estimator, are given by
$$\hat{\tau}^{\mathrm{TI}} = \frac{1}{n}\sum_{i=1}^n \hat{\varphi}_i, \qquad CI^{\mathrm{TI}} = \Big[\hat{\tau}^{\mathrm{TI}} - z_{1-\alpha/2}\,\frac{\widehat{\mathrm{sd}}(\hat{\varphi}_i - A_i \hat{\tau}^{\mathrm{TI}}/\hat{p})}{\sqrt{n}},\ \hat{\tau}^{\mathrm{TI}} + z_{1-\alpha/2}\,\frac{\widehat{\mathrm{sd}}(\hat{\varphi}_i - A_i \hat{\tau}^{\mathrm{TI}}/\hat{p})}{\sqrt{n}}\Big], \quad (4.7)$$
where
$$\hat{\varphi}_i = \frac{A_i\,(Y_i - \hat{Q}_0(X_i))}{\hat{p}} - \frac{\hat{g}_\eta(X_i)}{\hat{p}\,(1 - \hat{g}_\eta(X_i))}\,(1 - A_i)\,(Y_i - \hat{Q}_0(X_i)), \quad i = 1, \dots, n, \quad (4.8)$$
$\hat{p} = \frac{1}{n}\sum_{i=1}^n A_i$, $z_{1-\alpha/2}$ is the $\alpha/2$-upper quantile of the standard normal, and $\widehat{\mathrm{sd}}(\cdot)$ is the sample standard deviation.
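A minimal numpy sketch of Eqs. (4.7)-(4.8), assuming the cross-fit predictions and propensity scores from the previous stages; the function name is an assumption.

```python
import numpy as np
from scipy.stats import norm

def ti_estimator(A, Y, Q0_hat, g_hat, alpha=0.05):
    """A, Y, Q0_hat, g_hat: (n,) arrays. Returns the point estimate and CI."""
    p_hat = A.mean()
    phi = (A * (Y - Q0_hat) / p_hat
           - g_hat / (p_hat * (1 - g_hat)) * (1 - A) * (Y - Q0_hat))   # Eq. (4.8)
    tau = phi.mean()                                                   # Eq. (4.7)
    sd = np.std(phi - A * tau / p_hat, ddof=1)
    half = norm.ppf(1 - alpha / 2) * sd / np.sqrt(len(A))
    return tau, (tau - half, tau + half)
```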
Validity We now have an estimation procedure. It remains to give conditions under which this procedure is valid. In particular, we require that it should yield a consistent estimate and asymptotically correct confidence intervals.
Theorem 2. Assume the following.
1. The mis-estimation of the conditional outcomes can be bounded as follows:
$$\max_{a\in\{0,1\}} \mathbb{E}\big[(\hat{Q}_a(X) - Q(a, X))^2\big]^{1/2} = o(n^{-1/4}). \quad (4.9)$$
2. The propensity score function $P(A=1 \mid \cdot, \cdot)$ is Lipschitz continuous on $\mathbb{R}^2$, and there exists $\epsilon > 0$ such that $P\big(\epsilon \le g_\eta(X) \le 1 - \epsilon\big) = 1$;
3. The propensity score estimator converges at least as quickly as $k$-nearest-neighbors; i.e., $\mathbb{E}\big[(\hat{g}_\eta(X) - P(A=1 \mid \hat{\eta}(X)))^2 \mid \hat{\eta}(X)\big]^{1/2} = O(n^{-1/4})$ (Györfi et al., 2002);
4. There exist positive constants $C_1$, $C_2$, $c$, and $q > 2$ such that
$$\mathbb{E}[|Y|^q]^{1/q} \le C_2, \quad \sup_{\eta \in \mathrm{supp}(\eta(X))} \mathbb{E}\big[(Y - Q(A, X))^2 \mid \eta(X) = \eta\big] \le C_2, \quad \mathbb{E}\big[(Y - Q(A, X))^2\big]^{1/2} \ge c, \quad \max_{a\in\{0,1\}} \mathbb{E}\big[|\hat{Q}_a(X) - Q(a, X)|^q\big]^{1/q} \le C_1.$$
Then, the estimator $\hat{\tau}^{\mathrm{TI}}$ is consistent and
$$\sqrt{n}\,(\hat{\tau}^{\mathrm{TI}} - \tau^{\mathrm{CDE}}) \xrightarrow{d} N(0, \sigma^2), \quad (4.10)$$
where $\sigma^2 = \mathbb{E}\big[\varphi(X; Q, g_\eta, \tau^{\mathrm{CDE}})^2\big]$.
The proof is provided in Appendix A. The key point of this theorem is that we get asymptotic normality at the (fast) $\sqrt{n}$ rate while requiring only a slow ($n^{1/4}$) convergence rate for $Q$. Intuitively, the reason is simply that, because $\hat{\eta}(X)$ is only 2-dimensional, it is always possible to nonparametrically estimate the propensity score from $\hat{\eta}$ at a fast rate; even naive KNN works! Effectively, this means the rate at which we estimate the true propensity score $g_\eta(X) = P(A=1 \mid \eta(X))$ is dominated by the rate at which we estimate $\eta(X)$, which is in turn determined by the rate for $\hat{Q}$. Now, the key property of the double ML estimator is that its convergence depends only on the product of the convergence rates of $\hat{Q}$ and $\hat{g}$. Accordingly, the procedure is robust in the sense that we only need to estimate $\hat{Q}$ at the square root of the rate needed for the naive Q-only procedure. This is much more plausible in practice. As we will see in Section 5, the TI-estimator dramatically improves the quality of the estimated confidence intervals and reduces the absolute bias of estimation.
Remark 3. In addition to robustness to noisy estimation of Q, this estimation procedure inherits some other advantages from the double ML estimator. If Q̂ is consistent, then the estimator is nonparametrically efficient in the sense that no other nonparametric estimator has a smaller asymptotic variance. That is, the procedure uses the data as efficiently as possible.
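As a quick added illustration (not an experiment from the paper), the following toy Monte Carlo checks the influence-curve construction in the idealized case where Q and gη are known exactly: the estimator is unbiased for the true effect, and its spread shrinks at the √n rate.

```python
# Toy Monte Carlo sanity check of the influence-curve estimator when the
# true Q and g_eta are known (no estimation error at all). All quantities
# here are synthetic stand-ins we chose for illustration.
import numpy as np

rng = np.random.default_rng(0)
tau_true, n, reps = 1.0, 2000, 500
estimates = []
for _ in range(reps):
    eta = rng.uniform(-1.0, 1.0, size=n)   # 1-D stand-in for eta(X)
    g = 1.0 / (1.0 + np.exp(-eta))         # known propensity, bounded in (0.27, 0.73)
    A = rng.binomial(1, g)
    Q0 = eta                               # known conditional outcome at A = 0
    Y = Q0 + tau_true * A + rng.normal(size=n)
    p = A.mean()
    resid = Y - Q0
    phi = A * resid / p - g / (p * (1.0 - g)) * (1 - A) * resid
    estimates.append(phi.mean())
print(np.mean(estimates))                  # close to tau_true = 1.0
print(np.std(estimates) * np.sqrt(n))      # roughly constant in n: sqrt(n)-consistency
```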
5 EXPERIMENTS
We empirically study the method's ability to provide accurate causal estimates with good uncertainty quantification. Testing on semi-synthetic data (where ground-truth causal effects are known), we find that the estimation procedure yields accurate causal estimates and confidence intervals. In particular, the TI-estimator has significantly lower absolute bias and vastly better uncertainty quantification than the Q-only method.
Additionally, we study the effect of the choice of nonparametric propensity score estimator and the choice of double machine learning estimator, and the method's robustness to miscalibration of Q̂. These results are reported in Appendices C and D. Although these choices do not matter asymptotically, we find they have a significant impact in actual finite-sample estimation. We find that, in general, kernel regression works well for propensity score estimation, and that the vanilla Augmented Inverse Probability of Treatment Weighted (AIPTW) estimator corresponding to the CDE works well.
Finally, we reproduce the real-data analysis from Pryzant et al. (2020). We find that politeness has a positive effect on reducing email response time.
5.1 AMAZON REVIEWS
Dataset We closely follow the setup of Pryzant et al. (2020). We use publicly available Amazon reviews for music products as the basis for our semi-synthetic data. We include reviews for mp3, CD, and vinyl products, and among these exclude reviews for products costing more than $100 and reviews shorter than 5 words. The treatment A is whether the review is five stars (A = 1) or one/two stars (A = 0).
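A sketch of this filtering step is below; the column names (category, price, text, stars) are hypothetical stand-ins for the raw Amazon review schema, not the actual field names.

```python
# Sketch of the review filtering; column names are hypothetical.
import pandas as pd

def build_dataset(df: pd.DataFrame) -> pd.DataFrame:
    df = df[df["category"].isin(["mp3", "CD", "vinyl"])]
    df = df[(df["price"] <= 100) & (df["text"].str.split().str.len() >= 5)]
    df = df[df["stars"].isin([1, 2, 5])]            # keep 5-star vs 1/2-star reviews
    return df.assign(A=(df["stars"] == 5).astype(int))
```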
To have a ground-truth causal effect, we must now simulate the outcome. To produce a realistic dataset, we choose a real variable as the confounder. Namely, the confounder C is whether the product is a CD (C = 1) or not (C = 0). Then, the outcome Y is generated according to Y ← βa·A + βc·(π(C) − βo) + γ·N(0, 1). The true causal effect is controlled by βa; we choose βa = 1.0, 0.0 to generate data with and without causal effects. In this setting, βa is the oracle value of our causal estimand. The strength of confounding is controlled by βc; we choose βc = 50.0, 100.0. The ground-truth propensity score is π(C) = P(A = 1 | C). We set it to have the values π(0) = 0.8 and π(1) = 0.6 (by subsampling the data). βo is an offset equal to E[π(C)] = π(0)·P(C = 0) + π(1)·P(C = 1), where P(C = a), a = 0, 1, are estimated from the data. Finally, the noise level is controlled by γ; we choose 1.0 and 4.0 to simulate data with small and large noise. The final dataset has 10,685 entries.
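The outcome simulation can be sketched directly from this description; the variable names are ours, and the plug-in estimate of P(C = 1) follows the text.

```python
# Sketch of the semi-synthetic outcome generation
# Y <- beta_a * A + beta_c * (pi(C) - beta_o) + gamma * N(0, 1).
import numpy as np

def simulate_outcome(A, C, beta_a=1.0, beta_c=50.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pi = np.where(C == 1, 0.6, 0.8)          # ground-truth propensities pi(1), pi(0)
    p_c1 = C.mean()                          # P(C = 1) estimated from data
    beta_o = 0.8 * (1 - p_c1) + 0.6 * p_c1   # offset E[pi(C)]
    return beta_a * A + beta_c * (pi - beta_o) + gamma * rng.standard_normal(len(A))
```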
Protocol For the language model, we use the pretrained distilbert-base-uncased model provided by the transformers package. The model is trained in a K-fold fashion with 5 folds. We apply the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 2e−5 and a batch size of 64. The maximum number of epochs is set to 20, with early stopping based on validation loss with a patience of 6. Each experiment is replicated with five different seeds, and the final Q̂(a, xi) predictions are obtained by averaging the predictions from the 5 resulting models. The propensity model is implemented with GaussianProcessClassifier from the sklearn package, using a DotProduct + WhiteKernel kernel. (We vary the random state of the Gaussian process to ensure convergence.) The coverage experiment uses 100 replicates.
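The propensity step of this protocol can be reproduced roughly as follows, using scikit-learn's GaussianProcessClassifier with the stated kernel; hyperparameters other than those named in the text are left at their defaults in this sketch.

```python
# Sketch of the protocol's propensity model: Gaussian process classification
# on the 2-D representation with a DotProduct + WhiteKernel kernel.
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

def gp_propensity(eta_hat, A, seed=0):
    gpc = GaussianProcessClassifier(
        kernel=DotProduct() + WhiteKernel(),
        random_state=seed,  # varied across runs to help convergence
    )
    gpc.fit(eta_hat, A)      # eta_hat has shape (n, 2)
    return gpc.predict_proba(eta_hat)[:, 1]
```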
Results The main question here is the efficacy of the estimation procedure. Table 1 compares the outcome-only estimator τ̂Q and the estimator τ̂TI. First, the absolute bias of the new method is significantly lower than that of the outcome-only estimator, particularly when there are moderate to high levels of confounding. Next, we check actual coverage rates over 100 replicates of the experiment. We find that the naive approach for the outcome-only estimator fails completely: the nominal confidence interval almost never includes the true effect; it is wildly optimistic. By contrast, the confidence intervals from the new method often cover the true value, an enormous improvement over the baseline. Nevertheless, they still do not achieve their nominal (95%) coverage. This may be because the Q̂ estimate is still not good enough for the asymptotics to kick in, and we are not yet justified in ignoring the uncertainty from model fitting.
5.2 APPLICATION: CONSUMER COMPLAINTS TO THE FINANCIAL PROTECTION BUREAU
We follow the pipeline of the real-data experiment in (Pryzant et al., 2020, §6.2). The dataset consists of consumer complaints made to the Consumer Financial Protection Bureau. Treatment A is politeness (measured using Yeomans et al. (2018)) and the outcome Y is a binary indicator of whether a complaint receives a response within 15 days.
We use the same training procedure as for the simulated data. Table 2 shows point estimates and their 95% confidence intervals. Notice that the naive estimator shows a significantly negative effect of politeness on the probability of a timely response. On the other hand, the more accurate AIPTW method, as well as the outcome-only estimator, have confidence intervals that cover only positive values, so we conclude that consumers' politeness has a positive effect on receiving a timely response. This matches our intuition that being more polite should increase the probability of receiving a timely reply.
6 DISCUSSION
In this paper, we address the estimation of the causal effect of a text document attribute using observational data. The key challenge is that we must adjust for the text—to handle confounding—but adjusting for all of the text violates overlap. We saw that this issue could be effectively circumvented with a suitable choice of estimand and estimation procedure. In particular, we have seen an estimand that corresponds to the qualitative causal question, and an estimator that is valid even when the outcome model is learned slowly. The procedure also circumvents the need for bootstrapping, which is prohibitively expensive in our setting.
There are some limitations. The actual coverage proportion of our estimator is below the nominal level. This is presumably due to the imperfect fit of the conditional outcome model. Diagnostics (see Appendix D) show that as conditional outcome estimations become more accurate, the TI estimator becomes less biased, and its coverage increases. It seems plausible that the issue could be resolved by using more powerful language models.
Although we have focused on text in this paper, the problem of causal estimation with apparent overlap violation exists in any problem where we must adjust for unstructured and high-dimensional covariates. Another interesting direction for future work is to understand how analogous procedures work outside the text setting.
ACKNOWLEDGEMENT
Thanks to Alexander D’Amour for feedback on an earlier draft. We acknowledge the University of Chicago’s Research Computing Center for providing computing resources. This work was partially supported by Open Philanthropy.
A PROOF OF ASYMPTOTIC NORMALITY
Theorem 2. Assume the following.
1. The misestimation of the conditional outcomes can be bounded as

max_{a∈{0,1}} E[(Q̂a(X) − Q(a, X))²]^{1/2} = o(n^{−1/4}). (4.9)

2. The propensity score function P(A = 1 | ·, ·) is Lipschitz continuous on R², and there exists ϵ > 0 such that P(ϵ ≤ gη(X) ≤ 1 − ϵ) = 1;

3. The propensity score estimate converges at least as quickly as k-nearest neighbors; i.e., E[(ĝη(X) − P(A = 1 | η̂(X)))² | η̂(X)]^{1/2} = O(n^{−1/4}) (Györfi et al., 2002);

4. There exist positive constants C1, C2, c, and q > 2 such that

E[|Y|^q]^{1/q} ≤ C2, sup_{η∈supp(η(X))} E[(Y − Q(A, X))² | η(X) = η] ≤ C2, E[(Y − Q(A, X))²]^{1/2} ≥ c, max_{a∈{0,1}} E[|Q̂a(X) − Q(a, X)|^q]^{1/q} ≤ C1.

Then, the estimator τ̂TI is consistent and

√n (τ̂TI − τCDE) →d N(0, σ²), (4.10)

where σ² = E[ϕ(X; Q, gη, τCDE)²].
Proof. We first prove that the misestimation of the propensity score has rate n^{−1/4}. For simplicity, we use fg, f̂g : (u, v) ∈ R² → R to denote the conditional probability P(A = 1 | u, v) = fg(u, v) and the estimated propensity function obtained by running the nonparametric regression, P̂(A = 1 | u, v) = f̂g(u, v). Specifically, we have fg(Q(0, X), Q(1, X)) = gη(X) and f̂g(Q̂0(X), Q̂1(X)) = P̂(A = 1 | Q̂0(X), Q̂1(X)) = ĝη(X). Since E[(Q̂0(X) − Q(0, X))²]^{1/2}, E[(Q̂1(X) − Q(1, X))²]^{1/2} = o(n^{−1/4}) and fg is Lipschitz continuous, we have
E[(fg(Q̂0(X), Q̂1(X)) − fg(Q(0, X), Q(1, X)))²]^{1/2}
≤ L · E[‖(Q̂0(X), Q̂1(X)) − (Q(0, X), Q(1, X))‖₂²]^{1/2}
= L · {E[(Q̂0(X) − Q(0, X))²] + E[(Q̂1(X) − Q(1, X))²]}^{1/2}
= o(n^{−1/4}). (A.1)
Since the true propensity function fg is Lipschitz continuous on R², the mean squared error rate of k-nearest neighbors is O(n^{−1/2}) (Györfi et al., 2002). In addition, since the propensity score function and its estimate are bounded by 1, we have

E[(f̂g(Q̂0(X), Q̂1(X)) − fg(Q̂0(X), Q̂1(X)))²] = O(n^{−1/2}), (A.2)
due to the dominated convergence theorem. By (A.1) and (A.2), we can bound the mean squared error of the estimated propensity score as follows:

E[(ĝη(X) − gη(X))²]
≤ E[(ĝη(X) − fg(Q̂0(X), Q̂1(X)))²] + E[(fg(Q̂0(X), Q̂1(X)) − gη(X))²]
= E[(f̂g(Q̂0(X), Q̂1(X)) − fg(Q̂0(X), Q̂1(X)))²] + E[(fg(Q̂0(X), Q̂1(X)) − fg(Q(0, X), Q(1, X)))²]
= O(n^{−1/2}), (A.3)

that is, E[(ĝη(X) − gη(X))²]^{1/2} = O(n^{−1/4}).
Before we apply the conclusion of Theorem 5.1 in Chernozhukov et al. (2017b), we need to check that all conditions of their Assumption 5.1 hold. Let C := max{(2C1^q + 2^q)^{1/q}, C2}.
(a) E[Y − Q(A, X) | η(X), A] = 0 and E[A − gη(X) | η(X)] = 0 are easily checked by invoking the definitions of Q and gη.
(b) E[|Y|^q]^{1/q} ≤ C, E[(Y − Q(A, X))²]^{1/2} ≥ c, and sup_{η∈supp(η(X))} E[(Y − Q(A, X))² | η(X) = η] ≤ C are guaranteed by the fourth condition in the theorem.
(c) P(ϵ ≤ gη(X) ≤ 1 − ϵ) = 1 is the second condition in the theorem.
(d) Since the propensity score function and its estimate are bounded by 1, we have

(E[|Q̂1(X) − Q(1, X)|^q] + E[|Q̂0(X) − Q(0, X)|^q] + E[|ĝη(X) − gη(X)|^q])^{1/q} ≤ (C1^q + C1^q + 2^q)^{1/q} ≤ C.
(e) Based on (A.3) and condition 1 in the theorem, we have

(E[(Q̂1(X) − Q(1, X))²] + E[(Q̂0(X) − Q(0, X))²] + E[(ĝη(X) − gη(X))²])^{1/2} ≤ (o(n^{−1/2}) + o(n^{−1/2}) + O(n^{−1/2}))^{1/2} ≤ O(n^{−1/4}),

E[(Q̂0(X) − Q(0, X))²]^{1/2} · E[(ĝη(X) − gη(X))²]^{1/2} = o(n^{−1/2}).
(f) Based on condition 3 in the theorem, we have

sup_{x∈supp(X)} E[(ĝη(X) − P(A = 1 | η̂(X)))² | η̂(X) = η̂(x)] = O(n^{−1/2}).

We consider a smaller positive constant ϵ̃ instead of ϵ. Note that for ϵ̃ < ϵ, we still have P(ϵ̃ ≤ gη(X) ≤ 1 − ϵ̃) = 1. Then,

P( sup_{x∈supp(X)} |ĝη(x) − 1/2| > 1/2 − ϵ̃ )
≤ P( inf_{x∈supp(X)} ĝη(x) < ϵ̃ ) + P( sup_{x∈supp(X)} ĝη(x) > 1 − ϵ̃ )
≤ P( inf_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x)) − inf_{x∈supp(X)} ĝη(x) > ϵ − ϵ̃ )
+ P( sup_{x∈supp(X)} ĝη(x) − sup_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x)) > 1 − ϵ̃ − (1 − ϵ) )
≤ E[|inf_{x∈supp(X)} ĝη(x) − inf_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x))|²] / (ϵ − ϵ̃)²
+ E[|sup_{x∈supp(X)} ĝη(x) − sup_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x))|²] / (ϵ − ϵ̃)²
≤ 2 sup_{x∈supp(X)} E[(ĝη(X) − P(A = 1 | η̂(X) = η̂(x)))²] / (ϵ − ϵ̃)² = O(n^{−1/2}).

Hence, P( sup_{x∈supp(X)} |ĝη(x) − 1/2| ≤ 1/2 − ϵ̃ ) ≥ 1 − O(n^{−1/2}).
With (a)-(f), we can invoke the conclusion of Theorem 5.1 in Chernozhukov et al. (2017b) and obtain the asymptotic normality of the TI estimator.
B PROOF OF CAUSAL IDENTIFICATION
Theorem 1. Assume the following: 1. (Causal structure) The causal relationships among A, Ã, Z, Y, and X satisfy the causal DAG in Figure 2; 2. (Overlap) 0 < P(A = 1 | XA∧Z, XZ) < 1; 3. (Intention equals perception) A = Ã almost surely with respect to all interventional distributions. Then, the CDE is identified from observational data as

CDE = τCDE := EX|Ã=1[ E[Y | η(X), Ã = 1] − E[Y | η(X), Ã = 0] ], (3.4)

where η(X) := (Q(0, X), Q(1, X)).
Proof. We first prove that the two-dimensional confounding part η(X) satisfies positivity. Since (Q(0, X), Q(1, X)) = (E[Y | A = 0, XA∧Z, XZ], E[Y | A = 1, XA∧Z, XZ]) is a function of (XA∧Z, XZ), the following equations hold:

P(A = 1 | Q(0, X), Q(1, X)) = E[A | Q(0, X), Q(1, X)]
= E[ E(A | XA∧Z, XZ) | Q(0, X), Q(1, X) ]
= E[ P(A = 1 | XA∧Z, XZ) | Q(0, X), Q(1, X) ]. (B.1)
As 0 < P(A = 1 | XA∧Z, XZ) < 1, we have 0 < P(A = 1 | Q(0, X), Q(1, X)) < 1. Furthermore, 0 < P(Ã = 1 | Q(0, X), Q(1, X)) < 1, due to the almost-sure equivalence of A and Ã.
Since A = Ã, we can rewrite (3.1) by replacing A with à in the following form:

CDE = E_{XA∧Z, XZ | Ã=1}[ E(Y | do(Ã = 1), XA∧Z, XZ) − E(Y | do(Ã = 0), XA∧Z, XZ) ]
= E_{XA∧Z, XZ | Ã=1}[ E(Y | Ã = 1, XA∧Z, XZ) − E(Y | Ã = 0, XA∧Z, XZ) ]
= E_{XA∧Z, XZ | Ã=1}[ E(Y | Ã = 1, X) − E(Y | Ã = 0, X) ]
= E_{XA∧Z, XZ | Ã=1}[ E(Y | Ã = 1, Q(0, X), Q(1, X)) − E(Y | Ã = 0, Q(0, X), Q(1, X)) ]
= E_{XA∧Z, XZ | Ã=1}[ E(Y | Ã = 1, η(X)) − E(Y | Ã = 0, η(X)) ]
= E_{X | Ã=1}[ E(Y | Ã = 1, η(X)) − E(Y | Ã = 0, η(X)) ]. (B.2)
The equivalence of the first and second lines holds because XA∧Z, XZ block all backdoor paths between à and Y (see Figure 2) and 0 < P(à = 1 | Q(0, X), Q(1, X)) < 1; thus, the do-operation in the first line can be safely removed. The equivalence of the second and third lines is due to Q(Ã, X) = E[Y | Ã, XA∧Z, XZ], which follows from the causal model in Figure 2. The last equation is based on the fact that η(X) is a function of only XA∧Z and XZ. (This can be easily checked using the definition of the expectation.)
(B.2) shows that (Q(0, X ),Q(1, X )) is a two-dimensional confounding variable such that CDE is identifiable when we adjust for it as the confounding part.
Note that if f and h are two invertible functions on R, then (f(Q(0, X)), h(Q(1, X))) also suffices for identification of the CDE, since the σ-algebras generated by (Q(0, X), Q(1, X)) and (f(Q(0, X)), h(Q(1, X))) coincide, i.e.,

σ(Q(0, X), Q(1, X)) = σ(f(Q(0, X)), h(Q(1, X))).

Hence, we have

P(A = 1 | Q(0, X), Q(1, X)) = P(A = 1 | f(Q(0, X)), h(Q(1, X))),
E(Y | Q(0, X), Q(1, X)) = E(Y | f(Q(0, X)), h(Q(1, X))). (B.3)
C ADDITIONAL EXPERIMENTS
We conduct additional experiments to show how the estimation of the causal effect changes 1) over different nonparametric models for the propensity score estimation, and 2) when using different double machine learning estimators for causal estimation. Specifically, for the first study, we apply different nonparametric models, as well as logistic regression, to the estimated confounding part η̂(X) = (Q̂0(X), Q̂1(X)) to obtain propensity scores. We use the ATT AIPTW in all of the above cases for causal effect estimation. For the second study, we fix the first two stages of the TI estimator; i.e., we apply the Q-Net for the conditional outcomes and compute propensity scores with Gaussian process regression, where the kernel function is the sum of a dot product and white noise. Estimated conditional outcomes and propensity scores are plugged into different double machine learning estimators. We draw the following conclusions from the results of these experiments.
The choice of nonparametric model is significant. Table 3 summarizes the results of applying different regression models for propensity estimation. We can see that suitable nonparametric models strongly increase the coverage proportion for the true causal estimand. Therefore, we conclude that the accuracy of causal estimation is highly dependent on the choice of nonparametric model. In practice, when there is prior information about the propensity score function, we should apply the most suitable nonparametric model to increase the reliability of our causal estimation.
The ATT AIPTW is consistently the best double machine learning estimator. Table 4 shows the results of applying different double machine learning estimators. We apply estimators for both the average treatment effect (ATE) and the controlled direct effect (CDE). The bias of the “unadjusted” estimator τ̂naive is also included in Table 4 (a). In terms of absolute bias, the ATT AIPTW τ̂TI is comparable to the other double machine learning estimators in most cases. In terms of coverage proportion of confidence intervals, although it has lower rates in some cases, τ̂TI consistently has the best performance. Especially in high-confounding situations, the advantage of τ̂TI is clear.
Estimator For each dataset, we compute the estimators as follows. Here n1 and n0 stand for the numbers of individuals in the treated and control groups, and n = n1 + n0 is the total number of individuals.

• “Unadjusted” baseline estimator: τ̂naive = (1/n1) ∑_{i:Ai=1} Yi − (1/n0) ∑_{i:Ai=0} Yi
• “Outcome-only” estimator: τ̂Q = (1/n1) ∑_{i:Ai=1} (Q̂1,i − Q̂0,i)
• ATT AIPTW: τ̂TI = (1/n1) ∑_{i=1}^n [ Ai(Yi − Q̂0,i) − (ĝi/(1 − ĝi)) · (1 − Ai)(Yi − Q̂0,i) ]
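For reference, these three estimators can be computed directly from the fitted quantities; a minimal sketch (numpy array inputs assumed):

```python
# Sketch of the three estimators above, given arrays Y, A (0/1),
# cross-fitted Q0_hat, Q1_hat, and propensity scores g_hat.
import numpy as np

def naive_estimator(Y, A):
    return Y[A == 1].mean() - Y[A == 0].mean()

def outcome_only_estimator(Q0_hat, Q1_hat, A):
    return (Q1_hat[A == 1] - Q0_hat[A == 1]).mean()

def att_aiptw(Y, A, Q0_hat, g_hat):
    n1 = A.sum()
    resid = Y - Q0_hat
    return (A * resid - g_hat / (1.0 - g_hat) * (1 - A) * resid).sum() / n1
```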
D DISCUSSION OF LOW COVERAGE
In this section, we discuss why the confidence intervals we obtain (see Table 1) have lower coverage than the nominal 95% level. We conduct diagnostics and find that inaccuracy in the estimates of Q is responsible for the low coverage. We compute absolute biases, variances, and coverages of τ̂TI at different mean squared errors Ê[(Q − Q̂)²], obtained by using different numbers of datasets. According to Figures 4 and 5, as the mean squared error of Q̂ increases, the bias of τ̂TI grows and the coverage of τ̂TI drops. Specifically, the highest coverage in each setting is almost 95% (using the 50 datasets with the most accurate conditional outcome estimates). In practice, one direct way to improve the TI estimator's accuracy is to apply better NLP models, so that more accurate conditional outcome estimates can be obtained.
Summary Of The Paper
The paper studies the problem of estimating causal effects under overlap violations. The authors focus on a specific application: causal estimation for text data. The authors start by writing down the model in terms of a DAG and discuss identification results based on the DAG. They then proceed to estimation strategies, proposing an outcome-only estimator and a doubly robust TI-estimator. They further establish a theorem that allows inference on the proposed TI-estimator. Finally, the authors demonstrate the performance of the proposed estimator through simulations and real-data analysis.
Strengths And Weaknesses
Strengths
The paper is very well-written. The proposed method is motivated and explained in a very nice way.
The paper addresses an interesting and important question: causal estimation with overlap violations.
The proposed method is very elegant conceptually.
The proposed method appears to have good performance both theoretically and empirically.
Weaknesses
The identification result is not new: it is the same as using the prognostic score as a deconfounding score in D’Amour & Franks (2021). See the next section for more details.
Clarity, Quality, Novelty And Reproducibility
See the next section for more details. |
There is a final nuance. In practice, we split the data into K-folds. For each fold j, we train a model Q̂− j on the other K −1 folds. Then, we make predictions for the data points in fold j using Q̂− j . Slightly abusing notation, we use Q̂a(x) to denote the predictions obtained in this manner.
Propensity score estimation Next, we define η̂(x) := (Q̂0(x), Q̂1(x)) and estimate the propensity score ĝη(x) ≈ P(A = 1 | η̂(x)). To do this, we fit a nonparametric estimator to the binary classification task of predicting A from η̂(X ) in a cross fitting or K-fold fashion. The important insight here is that since η̂(X ) is
2-dimensional, non-parametric estimation is possible at a fast rate. In Section 5, we try several methods and find that kernel regression usually works well.
We also define gη(X ) := P(A = 1 | η(X )) as the idealized propensity score. The idea is that as η̂ → η, we will also have ĝη → gη so long as we have a valid non-parametric estimate.
CDE estimation The final stage is to combine the estimated outcome model and propensity score into a CDE estimator. To that end, we define the influence curve of τCDE as follows:
ϕ(X ;Q, gη,τ CDE) := A · (Y −Q(0, X )) p − gη(X ) p 1− gη(X ) ·(1−A)·(Y −Q(0, X ))−AτCDE, (4.6) where p = P (A= 1). Then, the standard double machine learning estimator of τCDE Chernozhukov et al. (2016), and the α-level confidence interval of this estimator, is given by
τ̂TI = 1 n n∑ i=1 ϕ̂i , C I TI = τ̂TI − z1−α/2ŝd(ϕ̂i − Ai · τ̂TI/p̂), τ̂TI + z1−α/2ŝd(ϕ̂i − Ai · τ̂TI/p̂) ,
(4.7) where
ϕ̂i = Ai · Yi − Q̂0(X i)
p̂ − ĝη(X i) p̂ 1− ĝη(X i) · (1− Ai) · Yi − Q̂0(X i) , i = 1, · · · , n, (4.8)
p̂ = 1n ∑n
i=1 Ai , z1−α/2 is the α/2-upper quantile of the standard normal, and ŝd(·) is the sample standard deviation.
Validity We now have an estimation procedure. It remains to give conditions under which this procedure is valid. In particular, we require that it should yield a consistent estimate and asymptotically correct confidence intervals.
Theorem 2. Assume the following.
1. The mis-estimation of conditional outcomes can be bounded as follows
max a∈{0,1}E[(Q̂a(X )−Q(a, X )) 2] 1 2 = o(n− 14 ). (4.9)
2. The propensity score function P(A= 1|·, ·) is Lipschitz continuous onR2, and ∃ ϵ > 0, P ϵ ≤ gη(X )≤ 1− ϵ = 1
3. The propensity score estimate converges at least as quickly as k nearest neighbor; i.e., E[ ĝη(X )− P(A= 1 | η̂(X )
2 | η̂(X )] 12 = O(n− 14 ) Györfi et al. (2002); 4. There exist positive constants C1, C2, c, and q > 2 such that
E[|Y |q] 1q ≤ C2, sup η∈supp(η(X )) E[(Y −Q(A, X )2 | η(X ) = η)] ≤ C2, E[(Y −Q(A, X )2)] 12 ≥ c, max
a∈{0,1}E[ Q̂a(X )−Q(a, X ) ] 1q ≤ C1.
Then, the estimator τ̂TI is consistent and p n(τ̂TI −τCDE) d→ N(0,σ2) (4.10) where σ2 = E ϕ(X ;Q, gη,τCDE) 2 .
The proof is provided in Appendix A. The key point from this theorem is that we get asymptotic normality at the (fast) p
nrate while requiring only a slow (n1/4) convergence rate of Q. Intuitively, the reason is simply that, because η̂(X ) is only 2-dimensional, it is always possible to nonparametrically estimate the propensity score from η̂ at a fast rate—even naive KNN works! Effectively, this means the rate at which we estimate the true propensity score gη(X ) = P(A= 1 | η(X )) is dominated by the rate at which we estimate η(X ), which is in turn determined by the rate for Q̂. Now, the key property of the double ML estimator is that convergence only depends on the product of the convergence rates of Q̂ and ĝ. Accordingly, this procedure is robust in the sense that we only need to estimate Q̂ at the square root of the rate we needed for the naive Q-only procedure. This is much more plausible in practice. As we will see in Section 5, the TI-estimator dramatically improves the quality of the estimated confidence intervals and reduces the absolute bias of estimation.
Remark 3. In addition to robustness to noisy estimation of Q, there are some other advantages this estimation procedure inherits from the double ML estimator. If Q̂ is consistent, then the estimator is nonparametrically efficient in the sense that no other non-parametric estimator has a smaller asymptotic variance. That is, the procedure using the data as efficiently as possible.
5 EXPERIMENTS
We empirically study the method’s capability to provide accurate causal estimates with good uncertainty quantification Testing using semi-synthetic data (where ground truth causal effects are known), we find that the estimation procedure yields accurate causal estimates and confidence intervals. In particular, the TI-estimator has significantly lower absolute bias and vastly better uncertainty quantification than the Q-only method.
Additionally, we study the effect of the choice of nonparametric propensity score estimator and the choice of double machine-learning estimator, and the method’s robustness in regard to Q̂’s miscalibration. These results are reported in Appendices C and D. Although these
choices do not matter asymptotically, we find they have a significant impact in actual finite sample estimation. We find that, in general, kernel regression works well for propensity score estimation and the vanilla the Augmented Inverse Probability of Treatment weighted Estimator (AIPTW) corresponding to the CDE works well.
Finally, we reproduce the real-data analysis from Pryzant et al. (2020). We find that politeness has a positive effect on reducing email response time.
5.1 AMAZON REVIEWS
Dataset We closely follow the setup of Pryzant et al. (2020). We use publicly available Amazon reviews for music products as the basis for our semi-synthetic data. We include reviews for mp3, CD and vinyl, and among these exclude reviews for products costing more than $100 or shorter than 5 words. The treatment A is whether the review is five stars (A= 1) or one/two stars (A= 0).
To have a ground truth causal effect, we must now simulate the outcome. To produce a realistic dataset, we choose a real variable as the confounder. Namely, the confounder C is whether the product is a CD (C = 1) or not (C = 0). Then, outcome Y is generated according to Y ← βaA+βc (π(C)− βo)+γN(0,1). The true causal effect is controlled by βa. We choose βa = 1.0, 0.0 to generate data with and without causal effects. In this setting, βa is the oracle value of our causal estimand. The strength of confounding is controlled by βc . We choose βc = 50.0, 100.0. The ground-truth propensity score is π(C) = P(A= 1|C). We set it to have the value π(0) = 0.8 and π(1) = 0.6 (by subsampling the data). βo is an offset E[π(C)] = π(0)P(C = 0) + π(1)P(C = 1), where P(C = a), a = 0,1 are estimated from data. Finally, the noise level is controlled by γ; we choose 1.0 and 4.0 to simulate data with small and large noise. The final dataset has 10, 685 data entries.
Protocol For the language model, we use the pretrained distilbert-base-uncased model provided by the transformers package. The model is trained in the k-folding fashion with 5 folds. We apply the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 2e−5 and a batch size of 64. The maximum number of epochs is set as 20, with early stopping based on validation loss with a patience of 6. Each experiment is replicated with five different seeds and the final Q̂(a, x i) predictions are obtained by averaging the predictions from the 5 resulting models. The propensity model is implemented by running the Gaussian process regression using GaussianProcessClassifier in the sklearn package with DotProduct + WhiteKernel kernel. (We choose different random state for the GPR to guarantee the convergence of the GPR.) The coverage experiment uses 100 replicates.
Results The main question here is the efficacy of the estimation procedure. Table 1 compares the outcome-only estimator τ̂Q and the estimator τ̂TI. First, the absolute bias of the new method is significantly lower than the absolute bias of the outcome-only estimator. This is particularly true where there is moderate to high levels of confounding. Next, we check actual coverage rates over 100 replicates of the experiment. First, we find that the naive approach for the outcome-only estimator fails completely. The nominal confidence interval almost never actually includes the true effect. It is wildly optimistic. By contrast, the confidence intervals from the new method often cover the true value. This is an enormous improvement over the baseline. Nevertheless, they still do not actually achieve their nominal (95%) coverage. This may be because the Q̂ estimate is still not good enough for the asymptotics to kick in, and we are not yet justified in ignoring the uncertainty from model fitting.
5.2 APPLICATION: CONSUMER COMPLAINTS TO THE FINANCIAL PROTECTION BUREAU
We follow the same pipeline of the real data experiment in (Pryzant et al., 2020, §6.2). The dataset is consumers complaints made to the financial protection. Treatment A is politeness (measured using Yeomans et al. (2018)) and the outcome Y is a binary indicator of whether complaints receive a response within 15 days.
We use the same training procedure as for the simulation data. Table 2 shows point estimates and their 95% confidence intervals. Notice that the naive estimator show a significant negative effect of politeness on reducing response time. On the other hand, the more accurate AIPTW method as well as the outcome-only estimator have confidence intervals that cover only positive values, so we conclude that consumers’ politeness has a positive effect on response time. This matches our intuitions that being more polite should increase the probability of receiving a timely reply.
6 DISCUSSION
In this paper, we address the estimation of the causal effect of a text document attribute using observational data. The key challenge is that we must adjust for the text—to handle confounding—but adjusting for all of the text violates overlap. We saw that this issue could be effectively circumvented with a suitable choice of estimand and estimation procedure. In particular, we have seen an estimand that corresponds to the qualitative causal question, and an estimator that is valid even when the outcome model is learned slowly. The procedure also circumvents the need for bootstrapping, which is prohibitively expensive in our setting.
There are some limitations. The actual coverage proportion of our estimator is below the nominal level. This is presumably due to the imperfect fit of the conditional outcome model. Diagnostics (see Appendix D) show that as conditional outcome estimations become more accurate, the TI estimator becomes less biased, and its coverage increases. It seems plausible that the issue could be resolved by using more powerful language models.
Although we have focused on text in this paper, the problem of causal estimation with apparent overlap violation exists in any problem where we must adjust for unstructured and high-dimensional covariates. Another interesting direction for future work is to understand how analogous procedures work outside the text setting.
ACKNOWLEDGEMENT
Thanks to Alexander D’Amour for feedback on an earlier draft. We acknowledge the University of Chicago’s Research Computing Center for providing computing resources. This work was partially supported by Open Philanthropy.
A PROOF OF ASYMPTOTIC NORMALITY
Theorem 2. Assume the following.
1. The mis-estimation of conditional outcomes can be bounded as follows
max a∈{0,1}E[(Q̂a(X )−Q(a, X )) 2] 1 2 = o(n− 14 ). (4.9)
2. The propensity score function P(A= 1|·, ·) is Lipschitz continuous onR2, and ∃ ϵ > 0, P ϵ ≤ gη(X )≤ 1− ϵ = 1
3. The propensity score estimate converges at least as quickly as k nearest neighbor; i.e., E[ ĝη(X )− P(A= 1 | η̂(X )
2 | η̂(X )] 12 = O(n− 14 ) Györfi et al. (2002); 4. There exist positive constants C1, C2, c, and q > 2 such that
E[|Y |q] 1q ≤ C2, sup η∈supp(η(X )) E[(Y −Q(A, X )2 | η(X ) = η)] ≤ C2, E[(Y −Q(A, X )2)] 12 ≥ c, max
a∈{0,1}E[ Q̂a(X )−Q(a, X ) ] 1q ≤ C1.
Then, the estimator τ̂TI is consistent and
p n(τ̂TI −τCDE) d→ N(0,σ2) (4.10)
where σ2 = E ϕ(X ;Q, gη,τCDE)
2 .
Proof. We first prove that misestimation of propensity score has the rate n− 14 . For simplicity, we use fg , f̂g : (u, v) ∈ R2 → R to denote conditional probability P(A = 1|u, v) = fg(u, v) and the estimated propensity function by running the nonparametric regression P̂(A = 1|u, v) = f̂g(u, v). Specifically, we have fg(Q(0, X ),Q(1, X )) = gη(X ) and f̂g(Q̂0(X ), Q̂1(X )) = P̂(A = 1|Q̂0(X ), Q̂1(X )) = ĝη(X ). Since E[ Q̂0(X )−Q(0, X ) 2 ] 1 2 , E[ Q̂1(X )−Q(1, X ) 2 ] 1 2 = o(n−1/4) and fg is Lipschitz continuous, we have
E fg(Q̂0(X ), Q̂1(X ))− fg (Q(0, X ),Q(1, X )) 2
1 2
≤L ·E Q̂0(X ), Q̂1(X ) − (Q(0, X ),Q(1, X )) 2
2
1 2
=L · ¦E Q̂0(X )−Q(0, X ) 2 + E Q̂1(X )−Q(1, X ) 2 © 12 =o(n−1/4)
(A.1)
Since the true propensity function fg is Lipschitz continuous on R2, the mean squared error rate of the k nearest neighbor is O(n−1/2) Györfi et al. (2002). In addition, since the propensity score function and its estimation are bounded under 1, we have the following equation
E f̂g(Q̂0(X ), Q̂1(X ))− fg(Q̂0(X ), Q̂1(X )) 2 = O(n−1/2), (A.2)
due to the dominated convergence theorem. By (A.1) and (A.2), we can bound the mean squared error of estimated propensity score in the following form:
E ĝη(X )− gη(X ) 2
≤E ĝη(X )− fg(Q̂0(X ), Q̂1(X )) 2 +E fg(Q̂0(X ), Q̂1(X ))− gη(X ) 2 =E fg(Q̂0(X ), Q̂1(X ))− fg (Q(0, X ),Q(1, X )) 2+ E f̂g(Q̂0(X ), Q̂1(X ))− fg(Q̂0(X ), Q̂1(X )) 2
=O(n−1/2),
(A.3)
that is E ĝη(X )− gη(X ) 2 12 = O(n− 14 ).
Before we apply the conclusion of Theorem 5.1 in (Chernozhukov et al., 2017b), we need to check all assumptions in Assumption 5.1 hold in Chernozhukov et al. (2017b). Let C :=max ¦ (2Cq1 + 2 q) 1 q , C2 © .
(a) E[Y − Q(A, X ) | η(X ), A] = 0, E[A − gη(X ) | η(X )] = 0 are easily checked by invoking definitions of Q and gη.
(b) E[|Y |q] 1q ≤ C , E[(Y −Q(A, X ))2] 12 ≥ c, and supη∈supp(η(X ))E[(Y −Q(A, X ))2 | η(X ) = η] ≤ C are guaranteed by the fourth condition in the theorem.
(c) P ϵ ≤ gη(X )≤ 1− ϵ = 1 is the second condition in the theorem.
(d) Since propensity score function and its estimation are bounded under 1, we have
E[ Q̂1(X )−Q(1, X ) q] +E[ Q̂0(X )−Q(0, X ) q] +E[ ĝη(X )− gη(X ) q] 1q
≤ (Cq1 + Cq1 + 2q) 1 q ≤ C
(e) Based on (A.3) and condition 1 in the theorem, we have
E[ Q̂1(X )−Q(1, X ) 2 ] +E[ Q̂0(X )−Q(0, X ) 2 ] +E[ ĝη(X )− gη(X ) 2 ] 1 2
≤ o(n− 12 ) + o(n− 12 ) +O(n− 12 ) 12 ≤ O(n− 14 ), E[ Q̂0(X )−Q(0, X ) 2 ] 1 2 ·E[ ĝη(X )− gη(X ) 2] 12 = o(n− 12 )
(f) Based on condition 3 in the theorem, we have
sup x∈supp(X )
E[ ĝη(X )− P(A= 1 | η̂(X )) 2 | η̂(X ) = η̂(x)] = O(n− 12 ).
We consider a smaller positive constant ϵ̃ instead of ϵ. Note that for ϵ̃ < ϵ, we still have P(ϵ̃ ≤ gη(X )≤ 1− ϵ̃) = 1. Then,
P
sup
x∈supp(X )
ĝη(x)− 12 > 12 − ϵ̃ = P inf x∈supp(X ) ĝη(x)< ϵ̃ + P
sup x∈supp(X ) ĝη(x)> 1− ϵ̃
≤P inf x∈supp(X )P(A= 1 | η̂(X ) = η̂(x))− infx∈supp(X ) ĝη(x)> ϵ − ϵ̃
+ P
sup x∈supp(X ) ĝη(x)− sup x∈supp(X ) P(A= 1 | η̂(X ) = η̂(x))> 1− ϵ̃ − (1− ϵ)
≤E
infx∈supp(X ) ĝη(x)− infx∈supp(X ) P(A= 1 | η̂(X ) = η̂(x)) 2
(ϵ − ϵ̃)2 + E supx∈supp(X ) ĝη(x)− supx∈supp(X ) P(A= 1 | η̂(X ) = η̂(x)) 2
(ϵ − ϵ̃)2
≤2supx∈supp(X )E
ĝη(X )− P (A= 1 | η̂(X ) = η̂(x)) 2
(ϵ − ϵ̃)2 =O(n− 12 )
Hence, P(supx∈supp(X ) ĝη(x)− 12 ≤ 12 − ϵ̃)≥ 1−O(n− 12 ).
With (a)-(f), we can invoke the conclusion in Theorem 5.1 in (Chernozhukov et al., 2017b), and get the asymptotic normality of the TI estimator.
B PROOF OF CAUSAL IDENTIFICATION
Theorem 1. Assume the following: 1. (Causal structure) The causal relationships among A, Ã, Z, Y , and X satisfy the causal DAG in Figure 2; 2. (Overlap) 0< P(A= 1 | XA∧Z , XZ)< 1; 3. (Intention equals perception) A= Ã almost surely with respect to all interventional distributions. Then, the CDE is identified from observational data as
CDE= τCDE := EX |Ã=1 E[Y | η(X ), Ã= 1]−E[Y | η(X ), Ã= 0] , (3.4)
where η(X ) := (Q(0, X ),Q(1, X )).
Proof. We first prove that this two-dimensional confounding part η(X ) satisfies positivity. Since (Q(0, X ), Q(1, X )) = (E [Y | A= 1, XA∧Z , XZ] , E [Y | A= 0, XA∧Z , XZ]) is a function of (XA∧Z , XZ), the following equations hold:
P(A= 1 | Q(0, X ),Q(1, X )) =E(A | Q(0, X ),Q(1, X )) =E [E (A | XA∧Z , XZ) | Q(0, X ),Q(1, X )] =E [P(A= 1| XA∧Z , XZ) | Q(0, X ),Q(1, X )] .
(B.1)
As 0 < P(A= 1| XA∧Z , XZ) < 1, we have 0 < P(A= 1| Q(0, X ),Q(1, X )) < 1. Furthermore, we have 0< P(Ã= 1| Q(0, X ),Q(1, X ))< 1 due to almost everywhere equivalence of A and Ã.
Since A= Ã, we can rewrite (3.1) by replacing A with à in the following form:
CDE=EXA∧Z ,XZ | Ã=1 E(Y | do(Ã= 1), XA∧Z , XZ)−E(Y | do(Ã= 0), XA∧Z , XZ)
=EXA∧Z ,XZ | Ã=1 E(Y | Ã= 1, XA∧Z , XZ)−E(Y | Ã= 0, XA∧Z , XZ) =EXA∧Z ,XZ | Ã=1 E(Y | Ã= 1, X )−E(Y | Ã= 0, X ) =EXA∧Z ,XZ | Ã=1 E(Y | Ã= 1,Q(0, X ),Q(1, X )) −E E(Y | Ã= 0,Q(0, X ),Q(1, X )) =EXA∧Z ,XZ | Ã=1 E(Y | Ã= 1,η(X )) −E E(Y | Ã= 0,η(X )) =EX | Ã=1 E(Y | Ã= 1,η(X )) −E E(Y | Ã= 0,η(X )) .
(B.2)
The equivalence of the first and the second line is because XA∧Z , XZ block all backdoor paths between à and Y (See Figure 2) and 0 < P(à = 1| Q(0, X ),Q(1, X )) < 1. Thus, the “do-operation” in the first line can be safely removed. Equivalence of the second line and the third line is due to Q(Ã, X ) = E Y | Ã, XA∧Z , XZ , which is subject to the causal model in Figure 2. The last equation is based on the fact that η(X ) is a function of only XA∧Z and XZ . (It can be easily checked by using the definition of the expectation.)
(B.2) shows that (Q(0, X ),Q(1, X )) is a two-dimensional confounding variable such that CDE is identifiable when we adjust for it as the confounding part.
Note that if f and h are two invertible functions on R, ( f (Q(0, X )), h(Q(1, X ))) also suffices the identification for CDE. Since the sigma algebra should be the same for (Q(0, X ),Q(1, X )) and f (Q(0, X )), h(Q(1, X )), i.e.,
σ (Q(0, X ),Q(1, X )) = σ ( f (Q(0, X )), h(Q(1, X ))) .
Hence, we have
P (A= 1 | Q(0, X ),Q(1, X )) = P (A= 1 | f (Q(0, X )), h(Q(1, X ))) , E (Y | Q(0, X ),Q(1, X )) = E (Y | f (Q(0, X )), h(Q(1, X ))) . (B.3)
C ADDITIONAL EXPERIMENTS
We conduct additional experiments to show how the causal effect estimates change 1) across different nonparametric models for propensity score estimation, and 2) across different double machine learning estimators. Specifically, for the first study, we apply several nonparametric models, as well as logistic regression, to the estimated confounding part $\hat{\eta}(X) = (\hat{Q}_0(X), \hat{Q}_1(X))$ to obtain propensity scores; we use the ATT AIPTW in all of these cases for causal effect estimation. For the second study, we fix the first two stages of the TI estimator, i.e., we apply the Q-Net for the conditional outcomes and compute propensity scores with Gaussian process regression whose kernel is the sum of a dot product and white noise. The estimated conditional outcomes and propensity scores are then plugged into different double machine learning estimators. We draw the following conclusions from these experiments.
The choice of nonparametric model is significant. Table 3 summarizes the results of applying different regression models for propensity estimation. We can see that suitable nonparametric models substantially increase the coverage proportion of the true causal estimand. We therefore conclude that the accuracy of the causal estimate depends strongly on the choice of nonparametric model. In practice, when there is prior information about the propensity score function, we should apply the most suitable nonparametric model to increase the reliability of the causal estimate.
The ATT AIPTW is consistently the best double machine learning estimator. Table 4 shows results for different double machine learning estimators. We apply estimators for both the average treatment effect (ATE) and the controlled direct effect (CDE). The bias of the "unadjusted" estimator $\hat{\tau}_{\text{naive}}$ is also included in Table 4(a). In terms of absolute bias, the ATT AIPTW $\hat{\tau}_{\text{TI}}$ is comparable to the other double machine learning estimators in most cases. For the coverage proportion of confidence intervals, although its rate is lower in some cases, $\hat{\tau}_{\text{TI}}$ consistently performs best; the advantage of $\hat{\tau}_{\text{TI}}$ is especially clear in high-confounding settings.
Estimator For each dataset, we compute the estimators as follows; a minimal implementation is sketched after the list. Here $n_1$ and $n_0$ stand for the numbers of individuals in the treated and control groups, and $n = n_1 + n_0$ is the total number of individuals.
• "Unadjusted" baseline estimator: $\hat{\tau}_{\text{naive}} = \frac{1}{n_1}\sum_{i:A_i=1} Y_i - \frac{1}{n_0}\sum_{i:A_i=0} Y_i$
• "Outcome-only" estimator: $\hat{\tau}_Q = \frac{1}{n_1}\sum_{i:A_i=1} \big(\hat{Q}_{1,i} - \hat{Q}_{0,i}\big)$
• ATT AIPTW: $\hat{\tau}_{\text{TI}} = \frac{1}{n_1}\sum_{i=1}^{n} \Big[A_i\big(Y_i - \hat{Q}_{0,i}\big) - (1-A_i)\big(Y_i - \hat{Q}_{0,i}\big)\,\frac{\hat{g}_i}{1-\hat{g}_i}\Big]$
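For concreteness, a minimal numpy sketch of these three estimators is given below. The function name and array interface are ours; `y` and `a` are outcome/treatment arrays, and `q0_hat`, `q1_hat`, `g_hat` are the cross-fitted predictions described above.

```python
import numpy as np

def point_estimates(y, a, q0_hat, q1_hat, g_hat):
    """The three point estimators above, written out directly."""
    n1 = a.sum()
    tau_naive = y[a == 1].mean() - y[a == 0].mean()
    tau_q = (q1_hat[a == 1] - q0_hat[a == 1]).mean()
    # ATT AIPTW: the (1 - A_i) term is where the propensity scores enter.
    tau_ti = (a * (y - q0_hat)
              - (1 - a) * (y - q0_hat) * g_hat / (1 - g_hat)).sum() / n1
    return tau_naive, tau_q, tau_ti
```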
D DISCUSSION OF LOW COVERAGE
In this section, we discuss why the confidence intervals we obtain (see Table 1) have lower coverage than the nominal 95% level. Our diagnostics indicate that inaccuracy in the estimates of Q is responsible for the low coverage. We compute the absolute bias, variance, and coverage of $\tau_{\text{TI}}$ at different mean squared errors $\hat{E}[(Q - \hat{Q})^2]$ by using different numbers of datasets. According to Figures 4-5, as the mean squared error of Q increases, the bias of $\tau_{\text{TI}}$ grows and its coverage drops. In particular, the highest coverage in each setting is close to 95% (using the 50 datasets with the most accurate conditional outcome estimates). In practice, one direct way to improve the TI estimator's accuracy is to apply better NLP models, so that more accurate conditional outcome estimates can be obtained. | 1. What is the focus and contribution of the paper regarding causal effect estimation?
2. What are the strengths of the proposed approach, particularly in terms of its ability to adjust for confounding variables?
3. What are the weaknesses of the paper, especially regarding the assumptions made about the causal DAG?
4. Do you have any questions or concerns regarding the CDE formulation and its purpose?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper addresses the problem of estimating the causal effect of an attribute of a text on some outcome variable under a setting where overlap is violated, i.e., the treatment variable is fully determined by the text features. Under the assumption that the problem satisfies the constraints of a given causal model (Figure 1), the authors propose an identification formula for the target effect that adjusts for part of the text to block confounding while resolving the overlap violation. Then, they propose an estimation procedure for this formula using standard double machine learning. Empirical evaluation shows the advantage of the proposed technique over baseline work.
Strengths And Weaknesses
Strengths: The evaluation of the proposed method shows a clear advantage over standard previous methods for estimation.
Weaknesses and Comments:
CDE formulation: The authors state the following while explaining the formulation of the CDE expression on page 4: "our formal causal effect aims at capturing the effect of A through only the first, direct, path". Obviously, identifying the total effect of A on Y is different than identifying the direct effect, or at least the variant suggested in Equation 3.1. I'm missing the reasoning for why this is the purpose of the computation. Further clarification from the authors would be appreciated.
Generality of the causal DAG (Figure 1): The model assumes the absence of confounding between X and Y and between A and Y. In general, it is not clear how valid this assumption is for estimating the causal effects of attributes of a text document even if it is justified for the suggested problem of sentiment effect on sales.
Citations: Most of the introduction states technical claims without any citations to back them. For example, the second paragraph discusses the identification of causal effects and the required conditions, but there are no citations to support that. Another example is in the following paragraph, which states that "it is often reasonable to assume that text data has information about all common causes..."
typo: p.5, "this estimator is yields".
Clarity, Quality, Novelty And Reproducibility
In general, the paper is clear and the authors attempt to explain most of the methodology. Some concerns about clarity are raised in the section above. |
ICLR | Title
Causal Estimation for Text Data with (Apparent) Overlap Violations
Abstract
Consider the problem of estimating the causal effect of some attribute of a text document; for example: what effect does writing a polite vs. rude email have on response time? To estimate a causal effect from observational data, we need to adjust for confounding aspects of the text that affect both the treatment and outcome—e.g., the topic or writing level of the text. These confounding aspects are unknown a priori, so it seems natural to adjust for the entirety of the text (e.g., using a transformer). However, causal identification and estimation procedures rely on the assumption of overlap: for all levels of the adjustment variables, there is randomness leftover so that every unit could have (not) received treatment. Since the treatment here is itself an attribute of the text, it is perfectly determined, and overlap is apparently violated. The purpose of this paper is to show how to handle causal identification and obtain robust causal estimation in the presence of apparent overlap violations. In brief, the idea is to use supervised representation learning to produce a data representation that preserves confounding information while eliminating information that is only predictive of the treatment. This representation then suffices for adjustment and satisfies overlap. Adapting results on non-parametric estimation, we find that this procedure is robust to conditional outcome misestimation, yielding a low-absolute-bias estimator with valid uncertainty quantification under weak conditions. Empirical results show strong improvements in bias and uncertainty quantification relative to the natural baseline. Code, demo data and a tutorial are available at https://github.com/gl-ybnbxb/TI-estimator.
1 INTRODUCTION
We consider the problem of estimating the causal effect of an attribute of a passage of text on some downstream outcome. For example, what is the effect of writing a polite or rude email on the amount of time it takes to get a response? In principle, we might hope to answer such questions with a randomized experiment. However, this can be difficult in practice—e.g., if poor outcomes are costly or take long to gather. Accordingly, in this paper, we will be interested in estimating such effects using observational data.
There are three steps to estimating causal effects using observational data (see, e.g., Chapter 36 of Murphy (2023)). First, we need to specify a concrete causal quantity as our estimand; that is, give a formal target of estimation corresponding to the high-level question of interest. The next step is causal identification: we need to show that this causal estimand can, in principle, be estimated using only observational data. The standard approach to identification relies on adjusting for confounding variables that affect both the treatment and the outcome. For identification to hold, our adjustment variables must satisfy two conditions: unconfoundedness and overlap. The former requires that the adjustment variables contain sufficient information on all common causes. The latter requires that the adjustment variables do not contain enough information about treatment assignment to let us perfectly predict it. Intuitively, to disentangle the effect of treatment from the effect of confounding, we must observe each treatment state at all levels of confounding. The final step is estimation using a finite data sample. Here, overlap also turns out to be critically important as a major determinant of the best possible accuracy (asymptotic variance) of the estimator (Chernozhukov et al., 2016).
Since the treatment is a linguistic property, it is often reasonable to assume that text data has information about all common causes of the treatment and the outcome. Thus, we may aim to satisfy unconfoundedness in the text setting by adjusting for all the text as the confounding part. However, doing so brings about overlap violation. Since the treatment is a linguistic property determined by the text, the probability of treatment given any text is either 0 or 1. The polite/rude tone is determined by the text itself. Therefore, overlap does not hold if we naively adjust for all the text as the confounding part. This problem is the main subject of this paper. Or, more precisely, our goal is to find a causal estimand, causal identification conditions, and a robust estimation procedure that will allow us to effectively estimate causal effects even in the presence of such (apparent) overlap violations.
In fact, there is an obvious first approach: simply use a standard plug-in estimation procedure that relies only on modeling the outcome from the text and treatment variables. In particular, make no explicit use of the propensity score, the probability that each unit is treated. Pryzant et al. (2020) use an approach of this kind and show it is reasonable in some situations. Indeed, we will see in Sections 3 and 4 that this procedure can be interpreted as a point estimator of a controlled causal effect. Even once we understand what the implied causal estimand is, this approach has a major drawback: the estimator is only accurate when the text-outcome model converges at a very fast rate. This is particularly an issue in the text setting, where we would like to use large, flexible, deep learning models for this relationship. In practice, we find that this procedure works poorly: the estimator has significant absolute bias, and (the natural approach to) uncertainty quantification almost never includes the true value of the estimand; see Section 5.
The contribution of this paper is a method for robustly estimating causal effects in text. The main idea is to break estimation into a two-stage procedure, where in the first stage we learn a representation of the text that preserves enough information to account for confounding, but throws away enough information to avoid overlap issues. Then, we use this representation as the adjustment variables in a standard double machine-learning estimation procedure (Chernozhukov et al., 2016; 2017a). To establish this method, the contributions of this paper are:
1. We give a formal causal estimand corresponding to the text-attribute question. We show this estimand is causally identified under weak conditions, even in the presence of apparent overlap issues.
2. We show how to efficiently estimate this quantity using the adapted double-ML technique just described. We show that this estimator admits a central limit theorem at a fast ($\sqrt{n}$) rate under weak conditions on the rate at which the ML model learns the text-outcome relationship (namely, convergence at an $n^{1/4}$ rate). This implies absolute bias decreases rapidly, and yields an (asymptotically) valid procedure for uncertainty quantification.
3. We test the performance of this procedure empirically, finding significant improvements in bias and uncertainty quantification relative to the outcome-model-only baseline.
Related work The most related literature is on causal inference with text variables. Papers include treating text as treatment (Pryzant et al., 2020; Wood-Doughty et al., 2018; Egami et al., 2018; Fong & Grimmer, 2016; Wang & Culotta, 2019; Tan et al., 2014), as outcome (Egami et al., 2018; Sridhar & Getoor, 2019), as confounder (Veitch et al., 2019; Roberts et al., 2020; Mozer et al., 2020; Keith et al., 2020), and discovering or predicting causality from text (del Prado Martin & Brendel, 2016; Tabari et al., 2018; Balashankar et al., 2019; Mani & Cooper, 2000). There are also numerous applications using text to adjust for confounding (e.g., Olteanu et al., 2017; Hall, 2017; Kiciman et al., 2018; Sridhar et al., 2018; Sridhar & Getoor, 2019; Saha et al., 2019; Karell & Freedman, 2019; Zhang et al., 2020). Of these, Pryzant et al. (2020) also address non-parametric estimation of the causal effect of text attributes. Their focus is primarily on mismeasurement of the treatments, while our motivation is robust estimation.
This paper also relates to work on causal estimation with (near) overlap violations. D'Amour et al. (2021) point out that high-dimensional adjustment (e.g., Rassen et al., 2011; Louizos et al., 2017; Li et al., 2016; Athey et al., 2017) suffers from overlap issues. Extra assumptions such as sparsity are often needed to meet the overlap condition. These results do not directly apply here because we assume there exists a low-dimensional summary that suffices to handle confounding.
D’Amour & Franks (2021) studies summary statistics that suffice for identification, which they call deconfounding scores. The supervised representation learning approach in this paper can be viewed as an extremal case of the deconfounding score. However, they consider the case where ordinary overlap holds with all observed features, with the aim of using both the outcome model and propensity score to find efficient statistical estimation procedures (in a linear-gaussian setting). This does not make sense in the setting we consider. Additionally, our main statistical result (robustness to outcome model estimation) is new.
2 NOTATION AND PROBLEM SETUP
We follow the causal setup of Pryzant et al. (2020). We are interested in estimating the causal effect of treatment A on outcome Y . For example, how does writing a negative sentiment (A) review (X ) affect product sales (Y )? There are two immediate challenges to estimating such effects with observed text data. First, we do not actually observe A, which is the intent of the writer. Instead, we only observe Ã, a version of A that is inferred from the text itself. In this paper, we will assume that A= Ã almost surely—e.g., a reader can always tell if a review was meant to be negative or positive. This assumption is often reasonable, and follows Pryzant et al. (2020). The next challenge is that the treatment may be correlated with other aspects of the text (Z) that are also relevant to the outcome— e.g., the product category of the item being reviewed. Such Z can act as confounding variables, and must somehow be adjusted for in a causal estimation problem.
Each unit (Ai , Zi , X i , Yi) is drawn independently and identically from an unknown distribution P. Figure 1 shows the causal relationships among variables, where solid arrows represent causal relations, and the dotted line represents possible correlations between two variables. We assume that text X contains all common causes of à and the outcome Y .
3 IDENTIFICATION AND CAUSAL ESTIMAND
The first task is to translate the qualitative causal question of interest—what is the effect of A on Y —into a causal estimand. This estimand must both be faithful to the qualitative question and be identifiable from observational data under reasonable assumptions. The key challenges here are that we only observe à (not A itself), there are unknown confounding variables influencing the text, and à is a deterministic function of the text, leading to overlap violations if we naively adjust for all the text. Our high-level idea is to split the text into abstract (unknown) parts depending on whether they are confounding—affect both à and Y —or whether they affect à alone. The part of the text that affects only à is not necessary for causal adjustment, and can be thrown away. If this part contains “enough” information about Ã, then throwing it away can eliminate our ability to perfectly predict Ã, thus fixing the overlap issue. We now turn to formalizing this idea, showing how it can be used to define an estimand and to identify this estimand from observational data.
Causal model The first idea is to decompose the text into three parts: one part affected by only A, one part affected interactively by A and Z , and another part affected only by Z . We use XA, XA∧Z and XZ to denote them, respectively; see Figure 2 for the corresponding causal model. Note that there could be additional information in the text in addition to these three parts. However, since they are irrelevant to both A and Z , we do not need to consider them in the model.
Controlled direct effect (CDE) The treatment A affects the outcome through two paths. Both “directly” through XA—the part of the text determined just by the treatment—and also through a path going through XA∧Z—the part of the text that relies on interaction effects with other factors. Our formal causal effect aims at capturing the effect of A through only the first, direct, path.
$$\mathrm{CDE} := E_{X_{A\wedge Z}, X_Z \mid A=1}\big[E[Y \mid X_{A\wedge Z}, X_Z, \mathrm{do}(A=1)] - E[Y \mid X_{A\wedge Z}, X_Z, \mathrm{do}(A=0)]\big]. \tag{3.1}$$
Here, do is Pearl’s do notation, and the estimand is a variant of the controlled direct effect (Pearl, 2009). Intuitively, it can be interpreted as the expected change in the outcome induced by changing the treatment from 1 to 0 while keeping part of the text affected by Z the same as it would have been had we set A = 1. This is a reasonable formalization of the qualitative “effect of A on Y ”. Of course, it is not the only possible formalization. Its advantage is that, as we will see, it can be identified and estimated under reasonable conditions.
Identification To identify CDE we must rewrite the expression in terms of observable quantities. There are three challenges: we need to get rid of the do operator, we don’t observe
A (only Ã), and the variables XA∧Z , XZ are unknown (they are latent parts of X ). Informally, the identification argument is as follows. First, XA∧Z , XZ block all backdoor paths (common causes) in Figure 2. Moreover, because we have thrown away XA, we now satisfy overlap. Accordingly, the do operator can be replaced by conditioning following the usual causal-adjustment argument. Next, A= Ã almost surely, so we can just replace A with Ã. Now, our estimand has been reduced to:
$$\widetilde{\mathrm{CDE}} := E_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[E[Y \mid X_{A\wedge Z}, X_Z, \tilde{A}=1] - E[Y \mid X_{A\wedge Z}, X_Z, \tilde{A}=0]\big]. \tag{3.2}$$
The final step is to deal with the unknown $X_{A\wedge Z}, X_Z$. To fix this issue, we first define the conditional outcome $Q$ according to:
$$Q(\tilde{A}, X) := E(Y \mid \tilde{A}, X). \tag{3.3}$$
A key insight here is that, subject to the causal model in Figure 2, we have $Q(\tilde{A}, X) = E(Y \mid \tilde{A}, X_{A\wedge Z}, X_Z)$. But this is exactly the quantity in (3.2). Moreover, $Q(\tilde{A}, X)$ is an observable data quantity (it depends only on the distribution of the observed quantities). In summary:
Theorem 1. Assume the following:
1. (Causal structure) The causal relationships among $A$, $\tilde{A}$, $Z$, $Y$, and $X$ satisfy the causal DAG in Figure 2;
2. (Overlap) $0 < P(A=1 \mid X_{A\wedge Z}, X_Z) < 1$;
3. (Intention equals perception) $A = \tilde{A}$ almost surely with respect to all interventional distributions.
Then, the CDE is identified from observational data as
$$\mathrm{CDE} = \tau^{\mathrm{CDE}} := E_{X \mid \tilde{A}=1}\big[E[Y \mid \eta(X), \tilde{A}=1] - E[Y \mid \eta(X), \tilde{A}=0]\big], \tag{3.4}$$
where $\eta(X) := (Q(0,X), Q(1,X))$.
The proof is in Appendix B.
We give the result in terms of an abstract sufficient statistic η(X) to emphasize that the actual conditional expectation model is not required, only some statistic that is informationally equivalent. We emphasize that, regardless of whether the overlap condition holds, the propensity score conditional on η(X), i.e., P(A = 1 | η(X)), is accessible and meaningful. Therefore, we can easily detect when identification fails, as long as η(X) is well-estimated.
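As an illustration of that diagnostic, the sketch below estimates P(A = 1 | η̂(X)) by cross-fitting and reports the fraction of units with extreme scores. The helper name, the eps threshold, and the use of logistic regression on the 2-D summary are our own choices, not part of the method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def overlap_diagnostic(eta_hat, a, eps=0.05):
    """eta_hat: (n, 2) array of (Q_hat_0(x_i), Q_hat_1(x_i)); a: (n,) in {0, 1}."""
    # Cross-fitted propensity scores P(A=1 | eta_hat); since eta_hat is only
    # 2-dimensional, any reasonable classifier can be used here.
    g = cross_val_predict(LogisticRegression(), eta_hat, a,
                          cv=5, method="predict_proba")[:, 1]
    return g, np.mean((g < eps) | (g > 1 - eps))  # large fraction flags failure
```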
4 METHOD
Our ultimate goal is to draw a conclusion about whether the treatment has a causal effect on the outcome. Following the previous section, we have reduced this problem to estimating τCDE, defined in Theorem 1. The task now is to develop an estimation procedure, including uncertainty quantification.
4.1 OUTCOME ONLY ESTIMATOR
We start by introducing the naive outcome-only estimator as a first approach to CDE estimation. The estimator is adapted from Pryzant et al. (2020). The observation here is that, taking $\eta(X) = (Q(0,X), Q(1,X))$ in (3.4), we have
$$\tau^{\mathrm{CDE}} = E_{X \mid A=1}\big[E(Y \mid A=1, X) - E(Y \mid A=0, X)\big]. \tag{4.1}$$
Since $Q(A,X)$ is a function of the whole text data $X$, it is estimable from observational data. Namely, it is the solution to the square-error risk:
$$Q = \operatorname*{argmin}_{\tilde{Q}}\; E\big[(Y - \tilde{Q}(A,X))^2\big]. \tag{4.2}$$
With a finite sample, we can estimate Q as Q̂ by fitting a machine-learning model to minimize the (possibly regularized) square error empirical risk. That is, fit a model using mean square error as the objective function. Then, a straightforward estimator is:
$$\hat{\tau}_Q := \frac{1}{n_1}\sum_{i: A_i=1} \hat{Q}_1(X_i) - \hat{Q}_0(X_i), \tag{4.3}$$
where $n_1$ is the number of treated units.
It should be noted that the model for Q is not arbitrary. A significant issue with models that directly regress Y on A and X is that, when overlap does not hold, the model can ignore A and use only X as the covariate. As a result, we need to choose a class of models that forces the use of the treatment A. To address this, we use a two-headed model that regresses Y on X separately for A = 0 and A = 1 in the conditional outcome learning model (see Section 4.2 and Figure 3).
As discussed in Section 1, this estimator yields a consistent point estimate but does not offer a simple approach to uncertainty quantification. A natural guess for an estimate of its variance is
$$\widehat{\operatorname{var}}(\hat{\tau}_Q) := \frac{1}{n}\,\widehat{\operatorname{var}}\big(\hat{Q}_1(X_i) - \hat{Q}_0(X_i) \mid \hat{Q}\big). \tag{4.4}$$
That is, just compute the variance of the mean conditional on the fitted model. However, this procedure yields asymptotically valid confidence intervals only if the outcome model converges extremely quickly, i.e., if $E[(\hat{Q} - Q)^2]^{1/2} = o(n^{-1/2})$. We could instead bootstrap, refitting $\hat{Q}$ on each bootstrap sample. However, with modern language models, this can be prohibitively computationally expensive.
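As a point of comparison, a minimal sketch of (4.3) together with the naive variance guess (4.4) is given below; the function name is ours, and the variance is computed over treated units to match the average in (4.3).

```python
import numpy as np

def outcome_only(a, q0_hat, q1_hat):
    """tau_hat_Q from (4.3) and the naive variance guess from (4.4)."""
    diff = q1_hat[a == 1] - q0_hat[a == 1]
    tau_q = diff.mean()
    # Treats Q_hat as fixed and ignores model-fitting noise, which is why
    # the resulting intervals turn out far too narrow in practice.
    var_naive = diff.var(ddof=1) / len(diff)
    return tau_q, var_naive
```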
4.2 TREATMENT IGNORANT EFFECT ESTIMATION (TI-ESTIMATOR)
Following Theorem 1, it suffices to adjust for η(X ) = (Q(0, X ),Q(1, X )). Accordingly, we use the following pipeline. We first estimate Q̂0(X ) and Q̂1(X ) (using a neural language model), as with the outcome-only estimator. Then, we take η̂(X ) := (Q̂0(X ), Q̂1(X )) and estimate ĝη ≈ P(A= 1 | η̂). That is, we estimate the propensity score corresponding to the estimated representation. Finally, we plug the estimated Q̂ and ĝη into a standard double machine learning estimator (Chernozhukov et al., 2016).
We describe the three steps in detail.
Q-Net In the first stage, we estimate the conditional outcomes and hence obtain the estimated two-dimensional confounding vector η̂(X ). For concreteness, we will use the dragonnet architecture of Shi et al. (2019). Specifically, we train DistilBERT (Sanh et al., 2019) modified to include three heads, as shown in Figure 3. Two of the heads correspond to Q̂0(X ) and Q̂1(X ) respectively. As discussed in the Section 4.1, applying two heads can force the model to use the treatment A. The final head is a single linear layer predicting the treatment. This propensity score prediction head can help prevent (implicit) regularization of the model from throwing away XA∧Z information that is necessary for identification. The output of this head is not used for the estimation since its purpose is to force the DistilBERT representation to preserve all confounding information. This has been shown to improve causal estimation (Shi et al., 2019; Veitch et al., 2019).
We train the model by minimizing the objective function
$$\mathcal{L}(\theta; \mathbf{X}) = \frac{1}{n}\sum_i \Big[\big(\hat{Q}_{a_i}(x_i;\theta) - y_i\big)^2 + \alpha\,\mathrm{CrossEntropy}\big(a_i, g_u(x_i)\big) + \beta\,\mathcal{L}_{\mathrm{mlm}}(x_i)\Big], \tag{4.5}$$
where $\theta$ are the model parameters, $\alpha, \beta$ are hyperparameters, and $\mathcal{L}_{\mathrm{mlm}}(\cdot)$ is the masked language modeling objective of DistilBERT.
There is a final nuance. In practice, we split the data into K-folds. For each fold j, we train a model Q̂− j on the other K −1 folds. Then, we make predictions for the data points in fold j using Q̂− j . Slightly abusing notation, we use Q̂a(x) to denote the predictions obtained in this manner.
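For concreteness, a schematic PyTorch sketch of such a three-headed model and its loss is given below. This is our own minimal rendering, not the exact implementation: the head widths (200), reading off the [CLS]-position vector as the shared representation, and the omission of the masked language modeling term of (4.5) are all simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import DistilBertModel

class QNet(nn.Module):
    """Dragonnet-style heads on DistilBERT: Q0, Q1, and a propensity head
    that is used only to regularize the representation during training."""
    def __init__(self, name="distilbert-base-uncased"):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained(name)
        h = self.bert.config.dim
        self.q0 = nn.Sequential(nn.Linear(h, 200), nn.ReLU(), nn.Linear(200, 1))
        self.q1 = nn.Sequential(nn.Linear(h, 200), nn.ReLU(), nn.Linear(200, 1))
        self.g = nn.Linear(h, 1)  # single linear layer, per the text

    def forward(self, input_ids, attention_mask):
        rep = self.bert(input_ids, attention_mask=attention_mask
                        ).last_hidden_state[:, 0]  # representation at [CLS]
        return (self.q0(rep).squeeze(-1), self.q1(rep).squeeze(-1),
                self.g(rep).squeeze(-1))

def qnet_loss(q0, q1, g_logit, y, a, alpha=1.0):
    # First two terms of (4.5); the beta * L_mlm term is omitted in this sketch.
    q_a = torch.where(a.bool(), q1, q0)
    return ((q_a - y) ** 2).mean() + alpha * F.binary_cross_entropy_with_logits(
        g_logit, a.float())
```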
Propensity score estimation Next, we define $\hat{\eta}(x) := (\hat{Q}_0(x), \hat{Q}_1(x))$ and estimate the propensity score $\hat{g}_\eta(x) \approx P(A=1 \mid \hat{\eta}(x))$. To do this, we fit a nonparametric estimator to the binary classification task of predicting $A$ from $\hat{\eta}(X)$ in a cross-fitting (K-fold) fashion. The important insight here is that since $\hat{\eta}(X)$ is 2-dimensional, non-parametric estimation is possible at a fast rate. In Section 5, we try several methods and find that kernel regression usually works well.
We also define gη(X ) := P(A = 1 | η(X )) as the idealized propensity score. The idea is that as η̂ → η, we will also have ĝη → gη so long as we have a valid non-parametric estimate.
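A minimal cross-fitted sketch using the Gaussian process classifier adopted in Section 5 might look as follows; the clipping at the end is our own numerical safeguard rather than part of the method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.model_selection import StratifiedKFold

def cross_fitted_propensity(eta_hat, a, n_splits=5, seed=0):
    """Cross-fitted g_hat_eta(x) ~ P(A=1 | eta_hat(x)) on the 2-D summary."""
    g = np.zeros(len(a), dtype=float)
    folds = StratifiedKFold(n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(eta_hat, a):
        clf = GaussianProcessClassifier(kernel=DotProduct() + WhiteKernel())
        clf.fit(eta_hat[train_idx], a[train_idx])
        g[test_idx] = clf.predict_proba(eta_hat[test_idx])[:, 1]
    return np.clip(g, 1e-3, 1 - 1e-3)  # guard against numerical 0/1 scores
```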
CDE estimation The final stage is to combine the estimated outcome model and propensity score into a CDE estimator. To that end, we define the influence curve of τCDE as follows:
$$\varphi(X; Q, g_\eta, \tau^{\mathrm{CDE}}) := \frac{A\,(Y - Q(0,X))}{p} - \frac{g_\eta(X)}{p\,(1 - g_\eta(X))}\,(1-A)\,(Y - Q(0,X)) - \frac{A}{p}\,\tau^{\mathrm{CDE}}, \tag{4.6}$$
where $p = P(A=1)$. Then, the standard double machine learning estimator of $\tau^{\mathrm{CDE}}$ (Chernozhukov et al., 2016), and the $\alpha$-level confidence interval of this estimator, are given by
$$\hat{\tau}_{\mathrm{TI}} = \frac{1}{n}\sum_{i=1}^n \hat{\varphi}_i, \qquad CI_{\mathrm{TI}} = \Big[\hat{\tau}_{\mathrm{TI}} - z_{1-\alpha/2}\,\frac{\widehat{\mathrm{sd}}(\hat{\varphi}_i - A_i\hat{\tau}_{\mathrm{TI}}/\hat{p})}{\sqrt{n}},\;\; \hat{\tau}_{\mathrm{TI}} + z_{1-\alpha/2}\,\frac{\widehat{\mathrm{sd}}(\hat{\varphi}_i - A_i\hat{\tau}_{\mathrm{TI}}/\hat{p})}{\sqrt{n}}\Big], \tag{4.7}$$
where
$$\hat{\varphi}_i = \frac{A_i\,(Y_i - \hat{Q}_0(X_i))}{\hat{p}} - \frac{\hat{g}_\eta(X_i)}{\hat{p}\,(1 - \hat{g}_\eta(X_i))}\,(1-A_i)\,(Y_i - \hat{Q}_0(X_i)), \qquad i = 1, \dots, n, \tag{4.8}$$
$\hat{p} = \frac{1}{n}\sum_{i=1}^n A_i$, $z_{1-\alpha/2}$ is the upper $\alpha/2$-quantile of the standard normal, and $\widehat{\mathrm{sd}}(\cdot)$ is the sample standard deviation.
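Equations (4.7)-(4.8) translate directly into a few lines of numpy; the sketch below assumes the cross-fitted arrays of Q̂0(X_i) and ĝη(X_i) as inputs, and the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def ti_estimate(y, a, q0_hat, g_hat, alpha=0.05):
    """ATT AIPTW point estimate and confidence interval, per (4.7)-(4.8)."""
    n, p_hat = len(y), a.mean()
    phi = (a * (y - q0_hat) / p_hat
           - g_hat / (p_hat * (1.0 - g_hat)) * (1 - a) * (y - q0_hat))
    tau = phi.mean()
    sd = np.std(phi - a * tau / p_hat, ddof=1)
    half = norm.ppf(1 - alpha / 2) * sd / np.sqrt(n)
    return tau, (tau - half, tau + half)
```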
Validity We now have an estimation procedure. It remains to give conditions under which this procedure is valid. In particular, we require that it should yield a consistent estimate and asymptotically correct confidence intervals.
Theorem 2. Assume the following.
1. The mis-estimation of the conditional outcomes can be bounded as
$$\max_{a\in\{0,1\}} E\big[(\hat{Q}_a(X) - Q(a,X))^2\big]^{1/2} = o(n^{-1/4}). \tag{4.9}$$
2. The propensity score function $P(A=1 \mid \cdot\,, \cdot)$ is Lipschitz continuous on $\mathbb{R}^2$, and there exists $\epsilon > 0$ such that $P\big(\epsilon \le g_\eta(X) \le 1-\epsilon\big) = 1$.
3. The propensity score estimate converges at least as quickly as k-nearest neighbors, i.e., $E\big[(\hat{g}_\eta(X) - P(A=1 \mid \hat{\eta}(X)))^2 \mid \hat{\eta}(X)\big]^{1/2} = O(n^{-1/4})$ (Györfi et al., 2002).
4. There exist positive constants $C_1$, $C_2$, $c$, and $q > 2$ such that
$$E[|Y|^q]^{1/q} \le C_2, \quad \sup_{\eta \in \operatorname{supp}(\eta(X))} E\big[(Y - Q(A,X))^2 \mid \eta(X) = \eta\big] \le C_2, \quad E\big[(Y - Q(A,X))^2\big]^{1/2} \ge c, \quad \max_{a\in\{0,1\}} E\big[|\hat{Q}_a(X) - Q(a,X)|^q\big]^{1/q} \le C_1.$$
Then, the estimator $\hat{\tau}_{\mathrm{TI}}$ is consistent and
$$\sqrt{n}\,(\hat{\tau}_{\mathrm{TI}} - \tau^{\mathrm{CDE}}) \xrightarrow{d} N(0, \sigma^2), \tag{4.10}$$
where $\sigma^2 = E\big[\varphi(X; Q, g_\eta, \tau^{\mathrm{CDE}})^2\big]$.
The proof is provided in Appendix A. The key point of this theorem is that we get asymptotic normality at the (fast) $\sqrt{n}$ rate while requiring only a slow ($n^{1/4}$) convergence rate for $Q$. Intuitively, the reason is simply that, because $\hat{\eta}(X)$ is only 2-dimensional, it is always possible to nonparametrically estimate the propensity score from $\hat{\eta}$ at a fast rate (even naive KNN works!). Effectively, this means the rate at which we estimate the true propensity score $g_\eta(X) = P(A=1 \mid \eta(X))$ is dominated by the rate at which we estimate $\eta(X)$, which is in turn determined by the rate for $\hat{Q}$. Now, the key property of the double ML estimator is that convergence depends only on the product of the convergence rates of $\hat{Q}$ and $\hat{g}$. Accordingly, this procedure is robust in the sense that we only need to estimate $\hat{Q}$ at the square root of the rate needed for the naive Q-only procedure. This is much more plausible in practice. As we will see in Section 5, the TI-estimator dramatically improves the quality of the estimated confidence intervals and reduces the absolute bias of estimation.
Remark 3. In addition to robustness to noisy estimation of Q, there are some other advantages this estimation procedure inherits from the double ML estimator. If Q̂ is consistent, then the estimator is nonparametrically efficient in the sense that no other non-parametric estimator has a smaller asymptotic variance. That is, the procedure uses the data as efficiently as possible.
5 EXPERIMENTS
We empirically study the method's capability to provide accurate causal estimates with good uncertainty quantification. Testing on semi-synthetic data (where ground-truth causal effects are known), we find that the estimation procedure yields accurate causal estimates and confidence intervals. In particular, the TI-estimator has significantly lower absolute bias and vastly better uncertainty quantification than the Q-only method.
Additionally, we study the effect of the choice of nonparametric propensity score estimator and the choice of double machine learning estimator, and the method's robustness to miscalibration of Q̂. These results are reported in Appendices C and D. Although these choices do not matter asymptotically, we find they have a significant impact in actual finite-sample estimation. We find that, in general, kernel regression works well for propensity score estimation, and the vanilla Augmented Inverse Probability of Treatment Weighted (AIPTW) estimator corresponding to the CDE works well.
Finally, we reproduce the real-data analysis from Pryzant et al. (2020). We find that politeness has a positive effect on reducing email response time.
5.1 AMAZON REVIEWS
Dataset We closely follow the setup of Pryzant et al. (2020). We use publicly available Amazon reviews for music products as the basis for our semi-synthetic data. We include reviews for mp3, CD and vinyl, and among these exclude reviews for products costing more than $100 or shorter than 5 words. The treatment A is whether the review is five stars (A= 1) or one/two stars (A= 0).
To have a ground-truth causal effect, we must now simulate the outcome. To produce a realistic dataset, we choose a real variable as the confounder. Namely, the confounder $C$ is whether the product is a CD ($C=1$) or not ($C=0$). Then, the outcome $Y$ is generated according to $Y \leftarrow \beta_a A + \beta_c(\pi(C) - \beta_o) + \gamma\, N(0,1)$. The true causal effect is controlled by $\beta_a$; we choose $\beta_a = 1.0, 0.0$ to generate data with and without a causal effect. In this setting, $\beta_a$ is the oracle value of our causal estimand. The strength of confounding is controlled by $\beta_c$; we choose $\beta_c = 50.0, 100.0$. The ground-truth propensity score is $\pi(C) = P(A=1 \mid C)$; we set it to have the values $\pi(0) = 0.8$ and $\pi(1) = 0.6$ (by subsampling the data). $\beta_o$ is the offset $E[\pi(C)] = \pi(0)P(C=0) + \pi(1)P(C=1)$, where $P(C=a)$, $a = 0, 1$ are estimated from data. Finally, the noise level is controlled by $\gamma$; we choose 1.0 and 4.0 to simulate data with small and large noise. The final dataset has 10,685 data entries.
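For reference, the outcome simulation just described can be written as a short numpy sketch; the function name is ours, and the defaults mirror the coefficient values stated above.

```python
import numpy as np

def simulate_outcome(a, c, beta_a=1.0, beta_c=50.0, gamma=1.0, seed=0):
    """Y <- beta_a * A + beta_c * (pi(C) - beta_o) + gamma * N(0, 1)."""
    rng = np.random.default_rng(seed)
    pi = np.where(c == 1, 0.6, 0.8)  # ground-truth propensities pi(1), pi(0)
    beta_o = 0.8 * (c == 0).mean() + 0.6 * (c == 1).mean()  # offset E[pi(C)]
    return beta_a * a + beta_c * (pi - beta_o) + gamma * rng.standard_normal(len(a))
```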
Protocol For the language model, we use the pretrained distilbert-base-uncased model provided by the transformers package. The model is trained in a k-fold fashion with 5 folds. We apply the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 2e-5 and a batch size of 64. The maximum number of epochs is set to 20, with early stopping based on validation loss with a patience of 6. Each experiment is replicated with five different seeds, and the final $\hat{Q}(a, x_i)$ predictions are obtained by averaging the predictions of the five resulting models. The propensity model is implemented with GaussianProcessClassifier in the sklearn package, with a DotProduct + WhiteKernel kernel. (We choose different random states for the GP classifier to guarantee its convergence.) The coverage experiment uses 100 replicates.
Results The main question here is the efficacy of the estimation procedure. Table 1 compares the outcome-only estimator τ̂Q and the estimator τ̂TI. First, the absolute bias of the new method is significantly lower than the absolute bias of the outcome-only estimator. This is particularly true where there is moderate to high levels of confounding. Next, we check actual coverage rates over 100 replicates of the experiment. First, we find that the naive approach for the outcome-only estimator fails completely. The nominal confidence interval almost never actually includes the true effect. It is wildly optimistic. By contrast, the confidence intervals from the new method often cover the true value. This is an enormous improvement over the baseline. Nevertheless, they still do not actually achieve their nominal (95%) coverage. This may be because the Q̂ estimate is still not good enough for the asymptotics to kick in, and we are not yet justified in ignoring the uncertainty from model fitting.
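The coverage and bias numbers reported here reduce to simple bookkeeping over the 100 replicates; a sketch of that computation, with array inputs assumed, is:

```python
import numpy as np

def coverage_and_bias(estimates, ci_lower, ci_upper, truth):
    """Empirical CI coverage and absolute bias over simulation replicates.

    Absolute bias is computed as |mean(estimate) - truth|; other
    conventions (e.g., mean absolute error) are also common.
    """
    estimates = np.asarray(estimates)
    covered = (np.asarray(ci_lower) <= truth) & (truth <= np.asarray(ci_upper))
    return covered.mean(), abs(estimates.mean() - truth)
```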
5.2 APPLICATION: CONSUMER COMPLAINTS TO THE FINANCIAL PROTECTION BUREAU
We follow the same pipeline as the real-data experiment in (Pryzant et al., 2020, §6.2). The dataset consists of consumer complaints made to the financial protection bureau. The treatment A is politeness (measured using Yeomans et al. (2018)) and the outcome Y is a binary indicator of whether a complaint receives a response within 15 days.
We use the same training procedure as for the simulated data. Table 2 shows point estimates and their 95% confidence intervals. Notice that the naive estimator shows a significant negative effect of politeness. On the other hand, the more accurate AIPTW method, as well as the outcome-only estimator, have confidence intervals that cover only positive values, so we conclude that consumers' politeness has a positive effect on receiving a timely response. This matches our intuition that being more polite should increase the probability of receiving a timely reply.
6 DISCUSSION
In this paper, we address the estimation of the causal effect of a text document attribute using observational data. The key challenge is that we must adjust for the text—to handle confounding—but adjusting for all of the text violates overlap. We saw that this issue could be effectively circumvented with a suitable choice of estimand and estimation procedure. In particular, we have seen an estimand that corresponds to the qualitative causal question, and an estimator that is valid even when the outcome model is learned slowly. The procedure also circumvents the need for bootstrapping, which is prohibitively expensive in our setting.
There are some limitations. The actual coverage proportion of our estimator is below the nominal level. This is presumably due to the imperfect fit of the conditional outcome model. Diagnostics (see Appendix D) show that as conditional outcome estimations become more accurate, the TI estimator becomes less biased, and its coverage increases. It seems plausible that the issue could be resolved by using more powerful language models.
Although we have focused on text in this paper, the problem of causal estimation with apparent overlap violation exists in any problem where we must adjust for unstructured and high-dimensional covariates. Another interesting direction for future work is to understand how analogous procedures work outside the text setting.
ACKNOWLEDGEMENT
Thanks to Alexander D’Amour for feedback on an earlier draft. We acknowledge the University of Chicago’s Research Computing Center for providing computing resources. This work was partially supported by Open Philanthropy.
A PROOF OF ASYMPTOTIC NORMALITY
Theorem 2. Assume the following.
1. The mis-estimation of the conditional outcomes can be bounded as
$$\max_{a\in\{0,1\}} E\big[(\hat{Q}_a(X) - Q(a,X))^2\big]^{1/2} = o(n^{-1/4}). \tag{4.9}$$
2. The propensity score function $P(A=1 \mid \cdot\,, \cdot)$ is Lipschitz continuous on $\mathbb{R}^2$, and there exists $\epsilon > 0$ such that $P\big(\epsilon \le g_\eta(X) \le 1-\epsilon\big) = 1$.
3. The propensity score estimate converges at least as quickly as k-nearest neighbors, i.e., $E\big[(\hat{g}_\eta(X) - P(A=1 \mid \hat{\eta}(X)))^2 \mid \hat{\eta}(X)\big]^{1/2} = O(n^{-1/4})$ (Györfi et al., 2002).
4. There exist positive constants $C_1$, $C_2$, $c$, and $q > 2$ such that
$$E[|Y|^q]^{1/q} \le C_2, \quad \sup_{\eta \in \operatorname{supp}(\eta(X))} E\big[(Y - Q(A,X))^2 \mid \eta(X) = \eta\big] \le C_2, \quad E\big[(Y - Q(A,X))^2\big]^{1/2} \ge c, \quad \max_{a\in\{0,1\}} E\big[|\hat{Q}_a(X) - Q(a,X)|^q\big]^{1/q} \le C_1.$$
Then, the estimator $\hat{\tau}_{\mathrm{TI}}$ is consistent and
$$\sqrt{n}\,(\hat{\tau}_{\mathrm{TI}} - \tau^{\mathrm{CDE}}) \xrightarrow{d} N(0, \sigma^2), \tag{4.10}$$
where $\sigma^2 = E\big[\varphi(X; Q, g_\eta, \tau^{\mathrm{CDE}})^2\big]$.
Proof. We first prove that the misestimation of the propensity score has rate $n^{-1/4}$. For simplicity, we use $f_g, \hat{f}_g : (u,v) \in \mathbb{R}^2 \to \mathbb{R}$ to denote the conditional probability $P(A=1 \mid u, v) = f_g(u,v)$ and the propensity function estimated by nonparametric regression, $\hat{P}(A=1 \mid u, v) = \hat{f}_g(u,v)$. Specifically, we have $f_g(Q(0,X), Q(1,X)) = g_\eta(X)$ and $\hat{f}_g(\hat{Q}_0(X), \hat{Q}_1(X)) = \hat{P}(A=1 \mid \hat{Q}_0(X), \hat{Q}_1(X)) = \hat{g}_\eta(X)$. Since $E[(\hat{Q}_0(X) - Q(0,X))^2]^{1/2}, E[(\hat{Q}_1(X) - Q(1,X))^2]^{1/2} = o(n^{-1/4})$ and $f_g$ is Lipschitz continuous, we have
$$
\begin{aligned}
E\big[\big(f_g(\hat{Q}_0(X), \hat{Q}_1(X)) - f_g(Q(0,X), Q(1,X))\big)^2\big]^{1/2}
&\le L \cdot E\big[\big\|(\hat{Q}_0(X), \hat{Q}_1(X)) - (Q(0,X), Q(1,X))\big\|_2^2\big]^{1/2}\\
&= L \cdot \big\{E\big[(\hat{Q}_0(X) - Q(0,X))^2\big] + E\big[(\hat{Q}_1(X) - Q(1,X))^2\big]\big\}^{1/2}\\
&= o(n^{-1/4}).
\end{aligned}
\tag{A.1}
$$
Since the true propensity function $f_g$ is Lipschitz continuous on $\mathbb{R}^2$, the mean squared error rate of the k-nearest-neighbor estimator is $O(n^{-1/2})$ (Györfi et al., 2002). In addition, since the propensity score function and its estimate are bounded by 1, we have the following equation
$$E\big[\big(\hat{f}_g(\hat{Q}_0(X), \hat{Q}_1(X)) - f_g(\hat{Q}_0(X), \hat{Q}_1(X))\big)^2\big] = O(n^{-1/2}), \tag{A.2}$$
due to the dominated convergence theorem. By (A.1) and (A.2), we can bound the mean squared error of the estimated propensity score as follows:
$$
\begin{aligned}
E\big[(\hat{g}_\eta(X) - g_\eta(X))^2\big]
&\le E\big[\big(\hat{g}_\eta(X) - f_g(\hat{Q}_0(X), \hat{Q}_1(X))\big)^2\big] + E\big[\big(f_g(\hat{Q}_0(X), \hat{Q}_1(X)) - g_\eta(X)\big)^2\big]\\
&= E\big[\big(f_g(\hat{Q}_0(X), \hat{Q}_1(X)) - f_g(Q(0,X), Q(1,X))\big)^2\big] + E\big[\big(\hat{f}_g(\hat{Q}_0(X), \hat{Q}_1(X)) - f_g(\hat{Q}_0(X), \hat{Q}_1(X))\big)^2\big]\\
&= O(n^{-1/2}),
\end{aligned}
\tag{A.3}
$$
that is, $E\big[(\hat{g}_\eta(X) - g_\eta(X))^2\big]^{1/2} = O(n^{-1/4})$.
Before we apply the conclusion of Theorem 5.1 in (Chernozhukov et al., 2017b), we need to check that all conditions of its Assumption 5.1 hold. Let $C := \max\big\{(2C_1^q + 2^q)^{1/q},\, C_2\big\}$.
(a) $E[Y - Q(A,X) \mid \eta(X), A] = 0$ and $E[A - g_\eta(X) \mid \eta(X)] = 0$ are easily checked by invoking the definitions of $Q$ and $g_\eta$.
(b) $E[|Y|^q]^{1/q} \le C$, $E[(Y - Q(A,X))^2]^{1/2} \ge c$, and $\sup_{\eta\in\operatorname{supp}(\eta(X))} E[(Y - Q(A,X))^2 \mid \eta(X) = \eta] \le C$ are guaranteed by the fourth condition of the theorem.
(c) $P\big(\epsilon \le g_\eta(X) \le 1 - \epsilon\big) = 1$ is the second condition of the theorem.
(d) Since the propensity score function and its estimate are bounded by 1, we have
$$\Big(E\big[|\hat{Q}_1(X) - Q(1,X)|^q\big] + E\big[|\hat{Q}_0(X) - Q(0,X)|^q\big] + E\big[|\hat{g}_\eta(X) - g_\eta(X)|^q\big]\Big)^{1/q} \le \big(C_1^q + C_1^q + 2^q\big)^{1/q} \le C.$$
(e) Based on (A.3) and condition 1 of the theorem, we have
$$\Big(E\big[(\hat{Q}_1(X) - Q(1,X))^2\big] + E\big[(\hat{Q}_0(X) - Q(0,X))^2\big] + E\big[(\hat{g}_\eta(X) - g_\eta(X))^2\big]\Big)^{1/2} \le \big(o(n^{-1/2}) + o(n^{-1/2}) + O(n^{-1/2})\big)^{1/2} = O(n^{-1/4}),$$
$$E\big[(\hat{Q}_0(X) - Q(0,X))^2\big]^{1/2} \cdot E\big[(\hat{g}_\eta(X) - g_\eta(X))^2\big]^{1/2} = o(n^{-1/2}).$$
(f) Based on condition 3 of the theorem, we have
$$\sup_{x\in\operatorname{supp}(X)} E\big[\big(\hat{g}_\eta(X) - P(A=1 \mid \hat{\eta}(X))\big)^2 \mid \hat{\eta}(X) = \hat{\eta}(x)\big] = O(n^{-1/2}).$$
We consider a smaller positive constant $\tilde{\epsilon}$ in place of $\epsilon$. Note that for $\tilde{\epsilon} < \epsilon$, we still have $P(\tilde{\epsilon} \le g_\eta(X) \le 1 - \tilde{\epsilon}) = 1$. Then,
$$
\begin{aligned}
P\Big(\sup_{x\in\operatorname{supp}(X)} \big|\hat{g}_\eta(x) - \tfrac{1}{2}\big| > \tfrac{1}{2} - \tilde{\epsilon}\Big)
&\le P\Big(\inf_{x} \hat{g}_\eta(x) < \tilde{\epsilon}\Big) + P\Big(\sup_{x} \hat{g}_\eta(x) > 1 - \tilde{\epsilon}\Big)\\
&\le P\Big(\inf_{x} P(A=1 \mid \hat{\eta}(X)=\hat{\eta}(x)) - \inf_{x} \hat{g}_\eta(x) > \epsilon - \tilde{\epsilon}\Big) + P\Big(\sup_{x} \hat{g}_\eta(x) - \sup_{x} P(A=1 \mid \hat{\eta}(X)=\hat{\eta}(x)) > 1 - \tilde{\epsilon} - (1-\epsilon)\Big)\\
&\le \frac{E\Big[\big(\inf_{x} \hat{g}_\eta(x) - \inf_{x} P(A=1 \mid \hat{\eta}(X)=\hat{\eta}(x))\big)^2\Big]}{(\epsilon - \tilde{\epsilon})^2} + \frac{E\Big[\big(\sup_{x} \hat{g}_\eta(x) - \sup_{x} P(A=1 \mid \hat{\eta}(X)=\hat{\eta}(x))\big)^2\Big]}{(\epsilon - \tilde{\epsilon})^2}\\
&\le \frac{2\sup_{x\in\operatorname{supp}(X)} E\Big[\big(\hat{g}_\eta(X) - P(A=1 \mid \hat{\eta}(X)=\hat{\eta}(x))\big)^2\Big]}{(\epsilon - \tilde{\epsilon})^2} = O(n^{-1/2}),
\end{aligned}
$$
where all infima and suprema are over $x \in \operatorname{supp}(X)$. Hence, $P\big(\sup_{x\in\operatorname{supp}(X)} |\hat{g}_\eta(x) - \tfrac{1}{2}| \le \tfrac{1}{2} - \tilde{\epsilon}\big) \ge 1 - O(n^{-1/2})$.
With conditions (a)-(f) verified, we can invoke Theorem 5.1 of Chernozhukov et al. (2017b) and obtain the asymptotic normality of the TI estimator.
B PROOF OF CAUSAL IDENTIFICATION
Theorem 1. Assume the following:
1. (Causal structure) The causal relationships among $A$, $\tilde{A}$, $Z$, $Y$, and $X$ satisfy the causal DAG in Figure 2;
2. (Overlap) $0 < P(A=1 \mid X_{A\wedge Z}, X_Z) < 1$;
3. (Intention equals perception) $A = \tilde{A}$ almost surely with respect to all interventional distributions.
Then, the CDE is identified from observational data as
$$\mathrm{CDE} = \tau^{\mathrm{CDE}} := E_{X \mid \tilde{A}=1}\big[E[Y \mid \eta(X), \tilde{A}=1] - E[Y \mid \eta(X), \tilde{A}=0]\big], \tag{3.4}$$
where $\eta(X) := (Q(0,X), Q(1,X))$.
Proof. We first prove that the two-dimensional confounding part $\eta(X)$ satisfies positivity. Since $(Q(0,X), Q(1,X)) = \big(E[Y \mid A=0, X_{A\wedge Z}, X_Z],\, E[Y \mid A=1, X_{A\wedge Z}, X_Z]\big)$ is a function of $(X_{A\wedge Z}, X_Z)$, the following equations hold:
$$
\begin{aligned}
P(A=1 \mid Q(0,X), Q(1,X)) &= E[A \mid Q(0,X), Q(1,X)]\\
&= E\big[E(A \mid X_{A\wedge Z}, X_Z) \mid Q(0,X), Q(1,X)\big]\\
&= E\big[P(A=1 \mid X_{A\wedge Z}, X_Z) \mid Q(0,X), Q(1,X)\big].
\end{aligned}
\tag{B.1}
$$
As $0 < P(A=1 \mid X_{A\wedge Z}, X_Z) < 1$, we have $0 < P(A=1 \mid Q(0,X), Q(1,X)) < 1$. Furthermore, we have $0 < P(\tilde{A}=1 \mid Q(0,X), Q(1,X)) < 1$ due to the almost sure equivalence of $A$ and $\tilde{A}$.
Since $A = \tilde{A}$, we can rewrite (3.1) by replacing $A$ with $\tilde{A}$ in the following form:
$$
\begin{aligned}
\mathrm{CDE} &= E_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[E(Y \mid \mathrm{do}(\tilde{A}=1), X_{A\wedge Z}, X_Z) - E(Y \mid \mathrm{do}(\tilde{A}=0), X_{A\wedge Z}, X_Z)\big]\\
&= E_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[E(Y \mid \tilde{A}=1, X_{A\wedge Z}, X_Z) - E(Y \mid \tilde{A}=0, X_{A\wedge Z}, X_Z)\big]\\
&= E_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[E(Y \mid \tilde{A}=1, X) - E(Y \mid \tilde{A}=0, X)\big]\\
&= E_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[E(Y \mid \tilde{A}=1, Q(0,X), Q(1,X)) - E(Y \mid \tilde{A}=0, Q(0,X), Q(1,X))\big]\\
&= E_{X_{A\wedge Z}, X_Z \mid \tilde{A}=1}\big[E(Y \mid \tilde{A}=1, \eta(X)) - E(Y \mid \tilde{A}=0, \eta(X))\big]\\
&= E_{X \mid \tilde{A}=1}\big[E(Y \mid \tilde{A}=1, \eta(X)) - E(Y \mid \tilde{A}=0, \eta(X))\big].
\end{aligned}
\tag{B.2}
$$
The equivalence of the first and second lines holds because $X_{A\wedge Z}, X_Z$ block all backdoor paths between $\tilde{A}$ and $Y$ (see Figure 2) and $0 < P(\tilde{A}=1 \mid Q(0,X), Q(1,X)) < 1$; thus, the "do-operation" in the first line can be safely removed. The equivalence of the second and third lines is due to $Q(\tilde{A}, X) = E[Y \mid \tilde{A}, X_{A\wedge Z}, X_Z]$, which follows from the causal model in Figure 2. The last equation is based on the fact that $\eta(X)$ is a function of only $X_{A\wedge Z}$ and $X_Z$. (This is easily checked using the definition of the expectation.)
(B.2) shows that $(Q(0,X), Q(1,X))$ is a two-dimensional confounding variable such that the CDE is identifiable when we adjust for it as the confounding part.
Note that if $f$ and $h$ are two invertible functions on $\mathbb{R}$, then $(f(Q(0,X)), h(Q(1,X)))$ also suffices for identification of the CDE. This is because $(Q(0,X), Q(1,X))$ and $(f(Q(0,X)), h(Q(1,X)))$ generate the same sigma-algebra, i.e.,
$$\sigma\big(Q(0,X), Q(1,X)\big) = \sigma\big(f(Q(0,X)), h(Q(1,X))\big).$$
Hence, we have
$$P\big(A=1 \mid Q(0,X), Q(1,X)\big) = P\big(A=1 \mid f(Q(0,X)), h(Q(1,X))\big), \qquad E\big(Y \mid Q(0,X), Q(1,X)\big) = E\big(Y \mid f(Q(0,X)), h(Q(1,X))\big). \tag{B.3}$$
C ADDITIONAL EXPERIMENTS
We conduct additional experiments to show how the causal effect estimates change 1) across different nonparametric models for propensity score estimation, and 2) across different double machine learning estimators. Specifically, for the first study, we apply several nonparametric models, as well as logistic regression, to the estimated confounding part $\hat{\eta}(X) = (\hat{Q}_0(X), \hat{Q}_1(X))$ to obtain propensity scores; we use the ATT AIPTW in all of these cases for causal effect estimation. For the second study, we fix the first two stages of the TI estimator, i.e., we apply the Q-Net for the conditional outcomes and compute propensity scores with Gaussian process regression whose kernel is the sum of a dot product and white noise. The estimated conditional outcomes and propensity scores are then plugged into different double machine learning estimators. We draw the following conclusions from these experiments.
The choice of nonparametric model is significant. Table 3 summarizes the results of applying different regression models for propensity estimation. We can see that suitable nonparametric models substantially increase the coverage proportion of the true causal estimand. We therefore conclude that the accuracy of the causal estimate depends strongly on the choice of nonparametric model. In practice, when there is prior information about the propensity score function, we should apply the most suitable nonparametric model to increase the reliability of the causal estimate.
The ATT AIPTW is consistently the best double machine learning estimator. Table 4 shows results for different double machine learning estimators. We apply estimators for both the average treatment effect (ATE) and the controlled direct effect (CDE). The bias of the "unadjusted" estimator $\hat{\tau}_{\text{naive}}$ is also included in Table 4(a). In terms of absolute bias, the ATT AIPTW $\hat{\tau}_{\text{TI}}$ is comparable to the other double machine learning estimators in most cases. For the coverage proportion of confidence intervals, although its rate is lower in some cases, $\hat{\tau}_{\text{TI}}$ consistently performs best; the advantage of $\hat{\tau}_{\text{TI}}$ is especially clear in high-confounding settings.
Estimator For each dataset, we compute the estimators as follows, where $n_1$ and $n_0$ stand for the numbers of individuals in the treated and control groups, and $n = n_1 + n_0$ is the total number of individuals.
• "Unadjusted" baseline estimator: $\hat{\tau}_{\text{naive}} = \frac{1}{n_1}\sum_{i:A_i=1} Y_i - \frac{1}{n_0}\sum_{i:A_i=0} Y_i$
• "Outcome-only" estimator: $\hat{\tau}_Q = \frac{1}{n_1}\sum_{i:A_i=1} \big(\hat{Q}_{1,i} - \hat{Q}_{0,i}\big)$
• ATT AIPTW: $\hat{\tau}_{\text{TI}} = \frac{1}{n_1}\sum_{i=1}^{n} \Big[A_i\big(Y_i - \hat{Q}_{0,i}\big) - (1-A_i)\big(Y_i - \hat{Q}_{0,i}\big)\,\frac{\hat{g}_i}{1-\hat{g}_i}\Big]$
D DISCUSSION OF LOW COVERAGE
In this section, we discuss why the confidence intervals we obtain (see Table 1) have lower coverage than the nominal 95% level. Our diagnostics indicate that inaccuracy in the estimates of Q is responsible for the low coverage. We compute the absolute bias, variance, and coverage of $\tau_{\text{TI}}$ at different mean squared errors $\hat{E}[(Q - \hat{Q})^2]$ by using different numbers of datasets. According to Figures 4-5, as the mean squared error of Q increases, the bias of $\tau_{\text{TI}}$ grows and its coverage drops. In particular, the highest coverage in each setting is close to 95% (using the 50 datasets with the most accurate conditional outcome estimates). In practice, one direct way to improve the TI estimator's accuracy is to apply better NLP models, so that more accurate conditional outcome estimates can be obtained. | 1. What is the focus of the paper regarding causal inference with text data?
2. What are the strengths of the proposed approach, particularly in addressing positivity issues?
3. What are the weaknesses of the paper, especially regarding its theoretical results and applications?
4. Are there any concerns or questions regarding the paper's content, such as the sufficiency statistic or the phrasing of certain sections?
5. How would you assess the clarity, quality, novelty, and reproducibility of the paper overall? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a way to overcome the violation of positivity in causal inference with text data. Following the problem setup of Pryzant et al. (2020), a piece of text X contains both the treatment (assumed binary) and confounders. The authors find a sufficient statistic η that prevents the violation of positivity. Another contribution of this paper is to combine the approach with the double machine learning idea and obtain easy valid confidence intervals.
Strengths And Weaknesses
Strengths
The problem is this paper is quite practical and interesting.
Theorem 1 seems to be clever. It finds a sufficient statistic that blocks the paths that lead to violation of positivity.
The paper is clear and easy to follow.
Weaknesses
Theorem 2 is a rather simple application of the DML approach. The theoretical results are also straightforward corollaries based on Chernozhukov et al. (2016).
Request for clarifications:
I am confused about several parts of the paper and I would like the authors to respond before I make the final decision.
The positivity issue: Given positivity violation, the estimator of Q(¬A, X) should be quite unreliable, where I assumed that the natural pair is (X, A) and ¬A is the treatment that is not observed. There should be very few examples of X and ¬A. This might lead to violations of the assumptions of Theorem 2. Can you comment on this?
Beginning of Section 4.2: Why is η a good sufficient statistic for ĝ?
End of Section 3: The phrase "propensity score of η(X)" does not make sense.
Start of Section 4.1: The phrase "naive outcome regression" is the right choice of words.
Clarity, Quality, Novelty And Reproducibility
The paper is clear and easy to follow.
Most of the novelty is in Theorem 1 and the architecture of the Q-Net. The paper meets the bar for novelty.
ICLR | Title
Causal Estimation for Text Data with (Apparent) Overlap Violations
Abstract
Consider the problem of estimating the causal effect of some attribute of a text document; for example: what effect does writing a polite vs. rude email have on response time? To estimate a causal effect from observational data, we need to adjust for confounding aspects of the text that affect both the treatment and outcome—e.g., the topic or writing level of the text. These confounding aspects are unknown a priori, so it seems natural to adjust for the entirety of the text (e.g., using a transformer). However, causal identification and estimation procedures rely on the assumption of overlap: for all levels of the adjustment variables, there is randomness leftover so that every unit could have (not) received treatment. Since the treatment here is itself an attribute of the text, it is perfectly determined, and overlap is apparently violated. The purpose of this paper is to show how to handle causal identification and obtain robust causal estimation in the presence of apparent overlap violations. In brief, the idea is to use supervised representation learning to produce a data representation that preserves confounding information while eliminating information that is only predictive of the treatment. This representation then suffices for adjustment and satisfies overlap. Adapting results on non-parametric estimation, we find that this procedure is robust to conditional outcome misestimation, yielding a low-absolute-bias estimator with valid uncertainty quantification under weak conditions. Empirical results show strong improvements in bias and uncertainty quantification relative to the natural baseline. Code, demo data and a tutorial are available at https://github.com/gl-ybnbxb/TI-estimator.
1 INTRODUCTION
We consider the problem of estimating the causal effect of an attribute of a passage of text on some downstream outcome. For example, what is the effect of writing a polite or rude email on the amount of time it takes to get a response? In principle, we might hope to answer such questions with a randomized experiment. However, this can be difficult in practice—e.g., if poor outcomes are costly or take long to gather. Accordingly, in this paper, we will be interested in estimating such effects using observational data.
There are three steps to estimating causal effects using observational data (see, e.g., Chapter 36 of Murphy (2023)). First, we need to specify a concrete causal quantity as our estimand; that is, give a formal target of estimation corresponding to the high-level question of interest. The next step is causal identification: we need to show that this causal estimand can, in principle, be estimated using only observational data. The standard approach to identification relies on adjusting for confounding variables that affect both the treatment and the outcome. For identification to hold, our adjustment variables must satisfy two conditions: unconfoundedness and overlap. The former requires that the adjustment variables contain sufficient information on all common causes. The latter requires that the adjustment variables do not contain enough information about treatment assignment to let us perfectly predict it. Intuitively, to disentangle the effect of treatment from the effect of confounding, we must observe each treatment state at all levels of confounding. The final step is estimation using a finite data sample. Here, overlap also turns out to be critically important as a major determinant of the best possible accuracy (asymptotic variance) of the estimator (Chernozhukov et al., 2016).
Since the treatment is a linguistic property, it is often reasonable to assume that text data has information about all common causes of the treatment and the outcome. Thus, we may aim to satisfy unconfoundedness in the text setting by adjusting for all the text as the confounding part. However, doing so brings about overlap violation. Since the treatment is a linguistic property determined by the text, the probability of treatment given any text is either 0 or 1. The polite/rude tone is determined by the text itself. Therefore, overlap does not hold if we naively adjust for all the text as the confounding part. This problem is the main subject of this paper. Or, more precisely, our goal is to find a causal estimand, causal identification conditions, and a robust estimation procedure that will allow us to effectively estimate causal effects even in the presence of such (apparent) overlap violations.
In fact, there is an obvious first approach: simply use a standard plug-in estimation procedure that relies only on modeling the outcome from the text and treatment variables. In particular, make no explicit use of the propensity score, the probability that each unit is treated. Pryzant et al. (2020) use an approach of this kind and show it is reasonable in some situations. Indeed, we will see in Sections 3 and 4 that this procedure can be interpreted as a point estimator of a controlled causal effect. Even once we understand what the implied causal estimand is, this approach has a major drawback: the estimator is only accurate when the text-outcome model converges at a very fast rate. This is particularly an issue in the text setting, where we would like to use large, flexible, deep learning models for this relationship. In practice, we find that this procedure works poorly: the estimator has significant absolute bias, and (the natural approach to) uncertainty quantification almost never includes the true value of the estimand; see Section 5.
The contribution of this paper is a method for robustly estimating causal effects in text. The main idea is to break estimation into a two-stage procedure, where in the first stage we learn a representation of the text that preserves enough information to account for confounding, but throws away enough information to avoid overlap issues. Then, we use this representation as the adjustment variables in a standard double machine-learning estimation procedure (Chernozhukov et al., 2016; 2017a). To establish this method, the contributions of this paper are:
1. We give a formal causal estimand corresponding to the text-attribute question. We show this estimand is causally identified under weak conditions, even in the presence of apparent overlap issues.
2. We show how to efficiently estimate this quantity using the adapted double-ML technique just described. We show that this estimator admits a central limit theorem at a fast ($\sqrt{n}$) rate under weak conditions on the rate at which the ML model learns the text-outcome relationship (namely, convergence at an $n^{1/4}$ rate). This implies absolute bias decreases rapidly, and yields an (asymptotically) valid procedure for uncertainty quantification.
3. We test the performance of this procedure empirically, finding significant improvements in bias and uncertainty quantification relative to the outcome-model-only baseline.
Related work The most related literature is on causal inference with text variables. Papers include treating text as treatment (Pryzant et al., 2020; Wood-Doughty et al., 2018; Egami et al., 2018; Fong & Grimmer, 2016; Wang & Culotta, 2019; Tan et al., 2014), as outcome (Egami et al., 2018; Sridhar & Getoor, 2019), as confounder (Veitch et al., 2019; Roberts et al., 2020; Mozer et al., 2020; Keith et al., 2020), and discovering or predicting causality from text (del Prado Martin & Brendel, 2016; Tabari et al., 2018; Balashankar et al., 2019; Mani & Cooper, 2000). There are also numerous applications using text to adjust for confounding (e.g., Olteanu et al., 2017; Hall, 2017; Kiciman et al., 2018; Sridhar et al., 2018; Sridhar & Getoor, 2019; Saha et al., 2019; Karell & Freedman, 2019; Zhang et al., 2020). Of these, Pryzant et al. (2020) also address non-parametric estimation of the causal effect of text attributes. Their focus is primarily on mismeasurement of the treatments, while our motivation is robust estimation.
This paper also relates to work on causal estimation with (near) overlap violations. D'Amour et al. (2021) point out that high-dimensional adjustment (e.g., Rassen et al., 2011; Louizos et al., 2017; Li et al., 2016; Athey et al., 2017) suffers from overlap issues. Extra assumptions such as sparsity are often needed to meet the overlap condition. These results do not directly apply here because we assume there exists a low-dimensional summary that suffices to handle confounding.
D'Amour & Franks (2021) study summary statistics that suffice for identification, which they call deconfounding scores. The supervised representation learning approach in this paper can be viewed as an extremal case of the deconfounding score. However, they consider the case where ordinary overlap holds for all observed features, with the aim of using both the outcome model and the propensity score to find efficient statistical estimation procedures (in a linear-Gaussian setting). This setup does not make sense in the setting we consider. Additionally, our main statistical result (robustness to outcome-model estimation error) is new.
2 NOTATION AND PROBLEM SETUP
We follow the causal setup of Pryzant et al. (2020). We are interested in estimating the causal effect of treatment A on outcome Y. For example, how does writing a negative sentiment (A) review (X) affect product sales (Y)? There are two immediate challenges to estimating such effects with observed text data. First, we do not actually observe A, which is the intent of the writer. Instead, we only observe Ã, a version of A that is inferred from the text itself. In this paper, we will assume that A = Ã almost surely, e.g., a reader can always tell if a review was meant to be negative or positive. This assumption is often reasonable, and follows Pryzant et al. (2020). The next challenge is that the treatment may be correlated with other aspects of the text (Z) that are also relevant to the outcome, e.g., the product category of the item being reviewed. Such Z can act as confounding variables, and must somehow be adjusted for in a causal estimation problem.
Each unit (A_i, Z_i, X_i, Y_i) is drawn independently and identically from an unknown distribution P. Figure 1 shows the causal relationships among the variables, where solid arrows represent causal relations, and the dotted line represents possible correlation between two variables. We assume that the text X contains all common causes of à and the outcome Y.
3 IDENTIFICATION AND CAUSAL ESTIMAND
The first task is to translate the qualitative causal question of interest—what is the effect of A on Y —into a causal estimand. This estimand must both be faithful to the qualitative question and be identifiable from observational data under reasonable assumptions. The key challenges here are that we only observe à (not A itself), there are unknown confounding variables influencing the text, and à is a deterministic function of the text, leading to overlap violations if we naively adjust for all the text. Our high-level idea is to split the text into abstract (unknown) parts depending on whether they are confounding—affect both à and Y —or whether they affect à alone. The part of the text that affects only à is not necessary for causal adjustment, and can be thrown away. If this part contains “enough” information about Ã, then throwing it away can eliminate our ability to perfectly predict Ã, thus fixing the overlap issue. We now turn to formalizing this idea, showing how it can be used to define an estimand and to identify this estimand from observational data.
Causal model The first idea is to decompose the text into three parts: one part affected only by A, one part affected interactively by A and Z, and another part affected only by Z. We use X_A, X_{A∧Z} and X_Z to denote them, respectively; see Figure 2 for the corresponding causal model. Note that there could be additional information in the text beyond these three parts; however, since it is irrelevant to both A and Z, we do not need to consider it in the model.
Controlled direct effect (CDE) The treatment A affects the outcome through two paths: "directly" through X_A, the part of the text determined just by the treatment, and also through a path going through X_{A∧Z}, the part of the text that relies on interaction effects with other factors. Our formal causal effect aims at capturing the effect of A through only the first, direct, path.
CDE := E_{X_{A∧Z}, X_Z | A=1}[ E[Y | X_{A∧Z}, X_Z, do(A=1)] − E[Y | X_{A∧Z}, X_Z, do(A=0)] ].  (3.1)
Here, do is Pearl’s do notation, and the estimand is a variant of the controlled direct effect (Pearl, 2009). Intuitively, it can be interpreted as the expected change in the outcome induced by changing the treatment from 1 to 0 while keeping part of the text affected by Z the same as it would have been had we set A = 1. This is a reasonable formalization of the qualitative “effect of A on Y ”. Of course, it is not the only possible formalization. Its advantage is that, as we will see, it can be identified and estimated under reasonable conditions.
Identification To identify the CDE we must rewrite the expression in terms of observable quantities. There are three challenges: we need to get rid of the do operator, we do not observe A (only Ã), and the variables X_{A∧Z}, X_Z are unknown (they are latent parts of X). Informally, the identification argument is as follows. First, X_{A∧Z}, X_Z block all backdoor paths (common causes) in Figure 2. Moreover, because we have thrown away X_A, we now satisfy overlap. Accordingly, the do operator can be replaced by conditioning, following the usual causal-adjustment argument. Next, A = Ã almost surely, so we can just replace A with Ã. Now, our estimand has been reduced to:
\widetilde{CDE} := E_{X_{A∧Z}, X_Z | Ã=1}[ E[Y | X_{A∧Z}, X_Z, Ã=1] − E[Y | X_{A∧Z}, X_Z, Ã=0] ].  (3.2)
The final step is to deal with the unknown X_{A∧Z}, X_Z. To fix this issue, we first define the conditional outcome Q according to:
Q(Ã, X) := E(Y | Ã, X).  (3.3)
A key insight here is that, subject to the causal model in Figure 2, we have Q(Ã, X) = E(Y | Ã, X_{A∧Z}, X_Z). But this is exactly the conditional expectation appearing in (3.2). Moreover, Q(Ã, X) is an observable data quantity (it depends only on the distribution of the observed variables). In summary:
Theorem 1. Assume the following: 1. (Causal structure) The causal relationships among A, Ã, Z, Y, and X satisfy the causal DAG in Figure 2; 2. (Overlap) 0 < P(A = 1 | X_{A∧Z}, X_Z) < 1; 3. (Intention equals perception) A = Ã almost surely with respect to all interventional distributions. Then, the CDE is identified from observational data as
CDE = τ^{CDE} := E_{X | Ã=1}[ E[Y | η(X), Ã=1] − E[Y | η(X), Ã=0] ],  (3.4)
where η(X) := (Q(0, X), Q(1, X)).
The proof is in Appendix B.
We give the result in terms of an abstract sufficient statistic η(X) to emphasize that the actual conditional expectation model is not required, only some statistic that is informationally equivalent. We emphasize that, regardless of whether the overlap condition holds or not, the propensity score given η(X) is accessible and meaningful. Therefore, as long as η(X) is well estimated, we can easily detect when identification fails.
4 METHOD
Our ultimate goal is to draw a conclusion about whether the treatment has a causal effect on the outcome. Following the previous section, we have reduced this problem to estimating τ^{CDE}, defined in Theorem 1. The task now is to develop an estimation procedure, including uncertainty quantification.
4.1 OUTCOME ONLY ESTIMATOR
We start by introducing the naive outcome-only estimator as a first approach to CDE estimation. The estimator is adapted from Pryzant et al. (2020). The observation here is that, taking η(X) = (Q(0, X), Q(1, X)) in (3.4), we have
τ^{CDE} = E_{X | A=1}[ E(Y | A=1, X) − E(Y | A=0, X) ].  (4.1)
Since Q(A, X) is a function of the whole text X, it is estimable from observational data. Namely, it is the minimizer of the square-error risk:
Q = argmin_{Q̃} E[(Y − Q̃(A, X))²].  (4.2)
With a finite sample, we estimate Q̂ by fitting a machine-learning model to minimize the (possibly regularized) empirical square-error risk; that is, we fit a model using mean squared error as the objective function. Then, a straightforward estimator is:
τ̂^Q := (1/n₁) ∑_{i: A_i=1} [Q̂₁(X_i) − Q̂₀(X_i)],  (4.3)
where n₁ is the number of treated units.
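To make the plug-in step concrete, here is a minimal sketch of the outcome-only estimator, assuming per-unit cross-fitted predictions q0_hat and q1_hat are available as arrays (the helper name is ours, not from the paper):

```python
import numpy as np

def outcome_only_estimate(a, q0_hat, q1_hat):
    """Sketch of (4.3): average Q1_hat - Q0_hat over the treated units.
    a is a 0/1 array; q0_hat, q1_hat are per-unit outcome predictions."""
    treated = a == 1
    return np.mean(q1_hat[treated] - q0_hat[treated])
```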
It should be noted that the model for Q cannot be arbitrary. One significant issue for models that directly regress Y on A and X is that, when overlap does not hold, the model can ignore A and rely on X alone as the covariate. As a result, we need to choose a class of models that forces the use of the treatment A. To address this, we use a two-headed model that regresses Y on X separately for A = 0 and A = 1 in the conditional outcome model (see Section 4.2 and Figure 3).
As discussed in the introduction (Section 1), this estimator yields a consistent point estimate, but does not offer a simple approach to uncertainty quantification. A natural guess for an estimate of its variance is:
var̂(τ̂^Q) := (1/n) var̂(Q̂₁(X_i) − Q̂₀(X_i) | Q̂).  (4.4)
That is, just compute the variance of the mean conditional on the fitted model. However, this procedure yields asymptotically valid confidence intervals only if the outcome model converges extremely quickly; i.e., if E[(Q̂ − Q)²]^{1/2} = o(n^{−1/2}). We could instead bootstrap, refitting Q̂ on each bootstrap sample. However, with modern language models, this can be prohibitively computationally expensive.
4.2 TREATMENT IGNORANT EFFECT ESTIMATION (TI-ESTIMATOR)
Following Theorem 1, it suffices to adjust for η(X) = (Q(0, X), Q(1, X)). Accordingly, we use the following pipeline. We first estimate Q̂₀(X) and Q̂₁(X) (using a neural language model), as with the outcome-only estimator. Then, we take η̂(X) := (Q̂₀(X), Q̂₁(X)) and estimate ĝ_η ≈ P(A = 1 | η̂), that is, the propensity score corresponding to the estimated representation. Finally, we plug the estimated Q̂ and ĝ_η into a standard double machine learning estimator (Chernozhukov et al., 2016).
We describe the three steps in detail.
Q-Net In the first stage, we estimate the conditional outcomes and hence obtain the estimated two-dimensional confounding vector η̂(X). For concreteness, we will use the dragonnet architecture of Shi et al. (2019). Specifically, we train DistilBERT (Sanh et al., 2019) modified to include three heads, as shown in Figure 3. Two of the heads correspond to Q̂₀(X) and Q̂₁(X), respectively. As discussed in Section 4.1, having two heads forces the model to use the treatment A. The final head is a single linear layer predicting the treatment. This propensity score prediction head helps prevent (implicit) regularization of the model from throwing away X_{A∧Z} information that is necessary for identification. The output of this head is not used in the estimation, since its purpose is only to force the DistilBERT representation to preserve all confounding information. This has been shown to improve causal estimation (Shi et al., 2019; Veitch et al., 2019).
We train the model by minimizing the objective function
L(θ; X) = (1/n) ∑_i [ (Q̂_{a_i}(x_i; θ) − y_i)² + α CrossEntropy(a_i, g_u(x_i)) + β L_mlm(x_i) ],  (4.5)
where θ are the model parameters, α and β are hyperparameters, and L_mlm(·) is the masked language modeling objective of DistilBERT.
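As an illustration, a minimal PyTorch sketch of this objective is given below, assuming the three heads have already produced per-batch predictions; the function and argument names are ours, not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def ti_loss(q0_pred, q1_pred, g_logit, mlm_loss, a, y, alpha=1.0, beta=1.0):
    """Sketch of objective (4.5). q0_pred/q1_pred are the two outcome heads,
    g_logit is the treatment head, mlm_loss is the masked-LM loss for the
    batch; alpha and beta weight the auxiliary terms."""
    # For each unit, use the head that matches its observed treatment a_i.
    q_pred = torch.where(a.bool(), q1_pred, q0_pred)
    outcome_loss = F.mse_loss(q_pred, y)
    treatment_loss = F.binary_cross_entropy_with_logits(g_logit, a.float())
    return outcome_loss + alpha * treatment_loss + beta * mlm_loss
```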
There is a final nuance. In practice, we split the data into K folds. For each fold j, we train a model Q̂^{−j} on the other K − 1 folds. Then, we make predictions for the data points in fold j using Q̂^{−j}. Slightly abusing notation, we use Q̂_a(x) to denote the predictions obtained in this manner.
Propensity score estimation Next, we define η̂(x) := (Q̂₀(x), Q̂₁(x)) and estimate the propensity score ĝ_η(x) ≈ P(A = 1 | η̂(x)). To do this, we fit a nonparametric estimator to the binary classification task of predicting A from η̂(X) in a cross-fitting (K-fold) fashion. The important insight here is that since η̂(X) is 2-dimensional, nonparametric estimation is possible at a fast rate. In Section 5, we try several methods and find that kernel regression usually works well.
We also define g_η(X) := P(A = 1 | η(X)) as the idealized propensity score. The idea is that as η̂ → η, we will also have ĝ_η → g_η so long as we have a valid nonparametric estimate.
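Since Section 5 reports that sklearn's GaussianProcessClassifier with a DotProduct + WhiteKernel kernel works well, a cross-fitted propensity step might look like the following sketch (the helper name and K-fold details are our assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.model_selection import KFold

def estimate_propensity(q0_hat, q1_hat, a, n_splits=5):
    """Cross-fitted estimates of g_hat = P(A = 1 | Q0_hat, Q1_hat).
    q0_hat, q1_hat, a are length-n arrays; returns length-n scores."""
    eta = np.column_stack([q0_hat, q1_hat])  # the 2-dimensional representation
    g_hat = np.zeros(len(a))
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(eta):
        clf = GaussianProcessClassifier(kernel=DotProduct() + WhiteKernel())
        clf.fit(eta[train_idx], a[train_idx])
        g_hat[test_idx] = clf.predict_proba(eta[test_idx])[:, 1]
    return g_hat
```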
CDE estimation The final stage is to combine the estimated outcome model and propensity score into a CDE estimator. To that end, we define the influence curve of τ^{CDE} as follows:
φ(X; Q, g_η, τ^{CDE}) := A·(Y − Q(0, X))/p − (g_η(X)/p) · (1 − A)·(Y − Q(0, X))/(1 − g_η(X)) − A·τ^{CDE}/p,  (4.6)
where p = P(A = 1). Then, the standard double machine learning estimator of τ^{CDE} (Chernozhukov et al., 2016), and the α-level confidence interval of this estimator, are given by
τ̂^{TI} = (1/n) ∑_{i=1}^{n} φ̂_i,  CI^{TI} = [ τ̂^{TI} − z_{1−α/2} ŝd(φ̂_i − A_i·τ̂^{TI}/p̂)/√n,  τ̂^{TI} + z_{1−α/2} ŝd(φ̂_i − A_i·τ̂^{TI}/p̂)/√n ],  (4.7)
where
φ̂_i = A_i·(Y_i − Q̂₀(X_i))/p̂ − (ĝ_η(X_i)/p̂) · (1 − A_i)·(Y_i − Q̂₀(X_i))/(1 − ĝ_η(X_i)),  i = 1, …, n,  (4.8)
p̂ = (1/n) ∑_{i=1}^{n} A_i, z_{1−α/2} is the α/2-upper quantile of the standard normal, and ŝd(·) is the sample standard deviation.
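Putting (4.7) and (4.8) together, a minimal numpy sketch of the point estimate and confidence interval could look as follows; the function name and the use of scipy for the normal quantile are our choices:

```python
import numpy as np
from scipy.stats import norm

def ti_estimator(y, a, q0_hat, g_hat, alpha=0.05):
    """Sketch of the TI estimator: influence values (4.8), their mean, and the
    CI (4.7) built from the centered influence values."""
    p_hat = a.mean()
    phi = a * (y - q0_hat) / p_hat \
        - (g_hat / p_hat) * (1 - a) * (y - q0_hat) / (1 - g_hat)
    tau_hat = phi.mean()
    sd = np.std(phi - a * tau_hat / p_hat, ddof=1)   # centered influence values
    half_width = norm.ppf(1 - alpha / 2) * sd / np.sqrt(len(y))
    return tau_hat, (tau_hat - half_width, tau_hat + half_width)
```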
Validity We now have an estimation procedure. It remains to give conditions under which this procedure is valid. In particular, we require that it should yield a consistent estimate and asymptotically correct confidence intervals.
Theorem 2. Assume the following.
1. The mis-estimation of the conditional outcomes can be bounded as follows:
max_{a∈{0,1}} E[(Q̂_a(X) − Q(a, X))²]^{1/2} = o(n^{−1/4}).  (4.9)
2. The propensity score function P(A = 1 | ·, ·) is Lipschitz continuous on ℝ², and there exists ϵ > 0 such that P(ϵ ≤ g_η(X) ≤ 1 − ϵ) = 1.
3. The propensity score estimate converges at least as quickly as k-nearest-neighbor regression; i.e., E[(ĝ_η(X) − P(A = 1 | η̂(X)))² | η̂(X)]^{1/2} = O(n^{−1/4}) (Györfi et al., 2002).
4. There exist positive constants C₁, C₂, c, and q > 2 such that
E[|Y|^q]^{1/q} ≤ C₂,  sup_{η∈supp(η(X))} E[(Y − Q(A, X))² | η(X) = η] ≤ C₂,
E[(Y − Q(A, X))²]^{1/2} ≥ c,  max_{a∈{0,1}} E[|Q̂_a(X) − Q(a, X)|^q]^{1/q} ≤ C₁.
Then, the estimator τ̂^{TI} is consistent and
√n (τ̂^{TI} − τ^{CDE}) →_d N(0, σ²),  (4.10)
where σ² = E[φ(X; Q, g_η, τ^{CDE})²].
The proof is provided in Appendix A. The key point of this theorem is that we get asymptotic normality at the (fast) √n rate while requiring only a slow (n^{1/4}) convergence rate for Q̂. Intuitively, the reason is simply that, because η̂(X) is only 2-dimensional, it is always possible to nonparametrically estimate the propensity score from η̂ at a fast rate (even naive KNN works!). Effectively, this means the rate at which we estimate the true propensity score g_η(X) = P(A = 1 | η(X)) is dominated by the rate at which we estimate η(X), which is in turn determined by the rate for Q̂. Now, the key property of the double ML estimator is that convergence depends only on the product of the convergence rates of Q̂ and ĝ. Accordingly, this procedure is robust in the sense that we only need to estimate Q̂ at the square root of the rate required by the naive Q-only procedure. This is much more plausible in practice. As we will see in Section 5, the TI-estimator dramatically improves the quality of the estimated confidence intervals and reduces the absolute bias of the estimate.
Remark 3. In addition to robustness to noisy estimation of Q, this estimation procedure inherits some other advantages from the double ML estimator. If Q̂ is consistent, then the estimator is nonparametrically efficient in the sense that no other non-parametric estimator has a smaller asymptotic variance. That is, the procedure uses the data as efficiently as possible.
5 EXPERIMENTS
We empirically study the method's capability to provide accurate causal estimates with good uncertainty quantification. Testing on semi-synthetic data (where ground-truth causal effects are known), we find that the estimation procedure yields accurate causal estimates and confidence intervals. In particular, the TI-estimator has significantly lower absolute bias and vastly better uncertainty quantification than the Q-only method.
Additionally, we study the effect of the choice of nonparametric propensity score estimator and the choice of double machine-learning estimator, as well as the method's robustness to miscalibration of Q̂. These results are reported in Appendices C and D. Although these choices do not matter asymptotically, we find they have a significant impact on actual finite-sample estimation. We find that, in general, kernel regression works well for propensity score estimation, and the vanilla Augmented Inverse Probability of Treatment Weighted (AIPTW) estimator corresponding to the CDE works well.
Finally, we reproduce the real-data analysis from Pryzant et al. (2020). We find that politeness has a positive effect on reducing email response time.
5.1 AMAZON REVIEWS
Dataset We closely follow the setup of Pryzant et al. (2020). We use publicly available Amazon reviews for music products as the basis for our semi-synthetic data. We include reviews for mp3s, CDs, and vinyl, and among these exclude reviews for products costing more than $100 as well as reviews shorter than 5 words. The treatment A is whether the review is five stars (A = 1) or one/two stars (A = 0).
To have a ground-truth causal effect, we must now simulate the outcome. To produce a realistic dataset, we choose a real variable as the confounder. Namely, the confounder C is whether the product is a CD (C = 1) or not (C = 0). Then, the outcome Y is generated according to Y ← β_a A + β_c (π(C) − β_o) + γ N(0, 1). The true causal effect is controlled by β_a. We choose β_a = 1.0, 0.0 to generate data with and without a causal effect. In this setting, β_a is the oracle value of our causal estimand. The strength of confounding is controlled by β_c. We choose β_c = 50.0, 100.0. The ground-truth propensity score is π(C) = P(A = 1 | C). We set it to have the values π(0) = 0.8 and π(1) = 0.6 (by subsampling the data). β_o is an offset, β_o = E[π(C)] = π(0)P(C = 0) + π(1)P(C = 1), where P(C = a), a = 0, 1 are estimated from the data. Finally, the noise level is controlled by γ; we choose 1.0 and 4.0 to simulate data with small and large noise. The final dataset has 10,685 entries.
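For concreteness, the outcome simulation could be implemented along the following lines; this is a sketch under our reading of the setup, with a hypothetical function name:

```python
import numpy as np

def simulate_outcome(a, c, beta_a=1.0, beta_c=50.0, gamma=1.0, seed=0):
    """Sketch of Y = beta_a*A + beta_c*(pi(C) - beta_o) + gamma*N(0, 1),
    with a, c binary arrays (treatment and CD-confounder)."""
    rng = np.random.default_rng(seed)
    pi = np.where(c == 1, 0.6, 0.8)                          # ground-truth pi(C)
    beta_o = 0.8 * (c == 0).mean() + 0.6 * (c == 1).mean()   # offset E[pi(C)]
    return beta_a * a + beta_c * (pi - beta_o) + gamma * rng.standard_normal(len(a))
```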
Protocol For the language model, we use the pretrained distilbert-base-uncased model provided by the transformers package. The model is trained in a k-fold fashion with 5 folds. We apply the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 2e−5 and a batch size of 64. The maximum number of epochs is set to 20, with early stopping based on validation loss with a patience of 6. Each experiment is replicated with five different seeds, and the final Q̂(a, x_i) predictions are obtained by averaging the predictions of the 5 resulting models. The propensity model is implemented by running Gaussian process regression using GaussianProcessClassifier from the sklearn package with a DotProduct + WhiteKernel kernel. (We choose a different random state for the GPR to guarantee its convergence.) The coverage experiment uses 100 replicates.
Results The main question here is the efficacy of the estimation procedure. Table 1 compares the outcome-only estimator τ̂^Q and the estimator τ̂^{TI}. First, the absolute bias of the new method is significantly lower than that of the outcome-only estimator. This is particularly true when there are moderate to high levels of confounding. Next, we check actual coverage rates over 100 replicates of the experiment. We find that the naive approach for the outcome-only estimator fails completely: the nominal confidence interval almost never actually includes the true effect; it is wildly optimistic. By contrast, the confidence intervals from the new method often cover the true value. This is an enormous improvement over the baseline. Nevertheless, they still do not actually achieve their nominal (95%) coverage. This may be because the Q̂ estimate is still not good enough for the asymptotics to kick in, and we are not yet justified in ignoring the uncertainty from model fitting.
5.2 APPLICATION: CONSUMER COMPLAINTS TO THE FINANCIAL PROTECTION BUREAU
We follow the same pipeline as the real-data experiment in (Pryzant et al., 2020, §6.2). The dataset consists of consumer complaints made to the financial protection bureau. The treatment A is politeness (measured using Yeomans et al. (2018)) and the outcome Y is a binary indicator of whether the complaint receives a response within 15 days.
We use the same training procedure as for the simulated data. Table 2 shows point estimates and their 95% confidence intervals. Notice that the naive estimator shows a significant negative effect of politeness on receiving a timely response. On the other hand, the more accurate AIPTW method, as well as the outcome-only estimator, have confidence intervals that cover only positive values, so we conclude that consumers' politeness has a positive effect on receiving a timely response. This matches the intuition that being more polite should increase the probability of receiving a timely reply.
6 DISCUSSION
In this paper, we address the estimation of the causal effect of a text document attribute using observational data. The key challenge is that we must adjust for the text—to handle confounding—but adjusting for all of the text violates overlap. We saw that this issue could be effectively circumvented with a suitable choice of estimand and estimation procedure. In particular, we have seen an estimand that corresponds to the qualitative causal question, and an estimator that is valid even when the outcome model is learned slowly. The procedure also circumvents the need for bootstrapping, which is prohibitively expensive in our setting.
There are some limitations. The actual coverage proportion of our estimator is below the nominal level. This is presumably due to the imperfect fit of the conditional outcome model. Diagnostics (see Appendix D) show that as conditional outcome estimations become more accurate, the TI estimator becomes less biased, and its coverage increases. It seems plausible that the issue could be resolved by using more powerful language models.
Although we have focused on text in this paper, the problem of causal estimation with apparent overlap violation exists in any problem where we must adjust for unstructured and high-dimensional covariates. Another interesting direction for future work is to understand how analogous procedures work outside the text setting.
ACKNOWLEDGEMENT
Thanks to Alexander D’Amour for feedback on an earlier draft. We acknowledge the University of Chicago’s Research Computing Center for providing computing resources. This work was partially supported by Open Philanthropy.
A PROOF OF ASYMPTOTIC NORMALITY
Theorem 2. Assume the following.
1. The mis-estimation of the conditional outcomes can be bounded as follows:
max_{a∈{0,1}} E[(Q̂_a(X) − Q(a, X))²]^{1/2} = o(n^{−1/4}).  (4.9)
2. The propensity score function P(A = 1 | ·, ·) is Lipschitz continuous on ℝ², and there exists ϵ > 0 such that P(ϵ ≤ g_η(X) ≤ 1 − ϵ) = 1.
3. The propensity score estimate converges at least as quickly as k-nearest-neighbor regression; i.e., E[(ĝ_η(X) − P(A = 1 | η̂(X)))² | η̂(X)]^{1/2} = O(n^{−1/4}) (Györfi et al., 2002).
4. There exist positive constants C₁, C₂, c, and q > 2 such that
E[|Y|^q]^{1/q} ≤ C₂,  sup_{η∈supp(η(X))} E[(Y − Q(A, X))² | η(X) = η] ≤ C₂,
E[(Y − Q(A, X))²]^{1/2} ≥ c,  max_{a∈{0,1}} E[|Q̂_a(X) − Q(a, X)|^q]^{1/q} ≤ C₁.
Then, the estimator τ̂^{TI} is consistent and
√n (τ̂^{TI} − τ^{CDE}) →_d N(0, σ²),  (4.10)
where σ² = E[φ(X; Q, g_η, τ^{CDE})²].
Proof. We first prove that the mis-estimation of the propensity score has rate n^{−1/4}. For simplicity, we use f_g, f̂_g : (u, v) ∈ ℝ² → ℝ to denote the conditional probability P(A = 1 | u, v) = f_g(u, v) and the estimated propensity function obtained from the nonparametric regression, P̂(A = 1 | u, v) = f̂_g(u, v). Specifically, we have f_g(Q(0, X), Q(1, X)) = g_η(X) and f̂_g(Q̂₀(X), Q̂₁(X)) = P̂(A = 1 | Q̂₀(X), Q̂₁(X)) = ĝ_η(X). Since E[(Q̂₀(X) − Q(0, X))²]^{1/2}, E[(Q̂₁(X) − Q(1, X))²]^{1/2} = o(n^{−1/4}) and f_g is Lipschitz continuous, we have
E[(f_g(Q̂₀(X), Q̂₁(X)) − f_g(Q(0, X), Q(1, X)))²]^{1/2}
  ≤ L · E[ ‖(Q̂₀(X), Q̂₁(X)) − (Q(0, X), Q(1, X))‖₂² ]^{1/2}
  = L · { E[(Q̂₀(X) − Q(0, X))²] + E[(Q̂₁(X) − Q(1, X))²] }^{1/2}
  = o(n^{−1/4}).  (A.1)
Since the true propensity function f_g is Lipschitz continuous on ℝ², the mean squared error rate of the k-nearest-neighbor estimator is O(n^{−1/2}) (Györfi et al., 2002). In addition, since the propensity score function and its estimate are bounded by 1, we have
E[(f̂_g(Q̂₀(X), Q̂₁(X)) − f_g(Q̂₀(X), Q̂₁(X)))²] = O(n^{−1/2}),  (A.2)
due to the dominated convergence theorem. By (A.1) and (A.2), we can bound the mean squared error of the estimated propensity score as follows:
E[(ĝ_η(X) − g_η(X))²]
  ≤ E[(ĝ_η(X) − f_g(Q̂₀(X), Q̂₁(X)))²] + E[(f_g(Q̂₀(X), Q̂₁(X)) − g_η(X))²]
  = E[(f̂_g(Q̂₀(X), Q̂₁(X)) − f_g(Q̂₀(X), Q̂₁(X)))²] + E[(f_g(Q̂₀(X), Q̂₁(X)) − f_g(Q(0, X), Q(1, X)))²]
  = O(n^{−1/2}),  (A.3)
that is, E[(ĝ_η(X) − g_η(X))²]^{1/2} = O(n^{−1/4}).
Before we apply the conclusion of Theorem 5.1 in Chernozhukov et al. (2017b), we need to check that all assumptions in Assumption 5.1 of Chernozhukov et al. (2017b) hold. Let C := max{ (2C₁^q + 2^q)^{1/q}, C₂ }.
(a) E[Y − Q(A, X) | η(X), A] = 0 and E[A − g_η(X) | η(X)] = 0 are easily checked by invoking the definitions of Q and g_η.
(b) E[|Y|^q]^{1/q} ≤ C, E[(Y − Q(A, X))²]^{1/2} ≥ c, and sup_{η∈supp(η(X))} E[(Y − Q(A, X))² | η(X) = η] ≤ C are guaranteed by the fourth condition of the theorem.
(c) P(ϵ ≤ g_η(X) ≤ 1 − ϵ) = 1 is the second condition of the theorem.
(d) Since the propensity score function and its estimate are bounded by 1, we have
( E[|Q̂₁(X) − Q(1, X)|^q] + E[|Q̂₀(X) − Q(0, X)|^q] + E[|ĝ_η(X) − g_η(X)|^q] )^{1/q} ≤ (C₁^q + C₁^q + 2^q)^{1/q} ≤ C.
(e) Based on (A.3) and condition 1 of the theorem, we have
( E[(Q̂₁(X) − Q(1, X))²] + E[(Q̂₀(X) − Q(0, X))²] + E[(ĝ_η(X) − g_η(X))²] )^{1/2} ≤ ( o(n^{−1/2}) + o(n^{−1/2}) + O(n^{−1/2}) )^{1/2} = O(n^{−1/4}),
E[(Q̂₀(X) − Q(0, X))²]^{1/2} · E[(ĝ_η(X) − g_η(X))²]^{1/2} = o(n^{−1/2}).
(f) Based on condition 3 of the theorem, we have
sup_{x∈supp(X)} E[(ĝ_η(X) − P(A = 1 | η̂(X)))² | η̂(X) = η̂(x)] = O(n^{−1/2}).
We consider a smaller positive constant ϵ̃ instead of ϵ. Note that for ϵ̃ < ϵ, we still have P(ϵ̃ ≤ g_η(X) ≤ 1 − ϵ̃) = 1. Then,
P( sup_{x∈supp(X)} |ĝ_η(x) − 1/2| > 1/2 − ϵ̃ )
  ≤ P( inf_{x∈supp(X)} ĝ_η(x) < ϵ̃ ) + P( sup_{x∈supp(X)} ĝ_η(x) > 1 − ϵ̃ )
  ≤ P( inf_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x)) − inf_{x∈supp(X)} ĝ_η(x) > ϵ − ϵ̃ )
    + P( sup_{x∈supp(X)} ĝ_η(x) − sup_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x)) > 1 − ϵ̃ − (1 − ϵ) )
  ≤ E[( inf_{x∈supp(X)} ĝ_η(x) − inf_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x)) )²] / (ϵ − ϵ̃)²
    + E[( sup_{x∈supp(X)} ĝ_η(x) − sup_{x∈supp(X)} P(A = 1 | η̂(X) = η̂(x)) )²] / (ϵ − ϵ̃)²
  ≤ 2 sup_{x∈supp(X)} E[( ĝ_η(X) − P(A = 1 | η̂(X) = η̂(x)) )²] / (ϵ − ϵ̃)²
  = O(n^{−1/2}).
Hence, P( sup_{x∈supp(X)} |ĝ_η(x) − 1/2| ≤ 1/2 − ϵ̃ ) ≥ 1 − O(n^{−1/2}).
With (a)-(f) in hand, we can invoke the conclusion of Theorem 5.1 in Chernozhukov et al. (2017b) and obtain the asymptotic normality of the TI estimator.
B PROOF OF CAUSAL IDENTIFICATION
Theorem 1. Assume the following: 1. (Causal structure) The causal relationships among A, Ã, Z, Y, and X satisfy the causal DAG in Figure 2; 2. (Overlap) 0 < P(A = 1 | X_{A∧Z}, X_Z) < 1; 3. (Intention equals perception) A = Ã almost surely with respect to all interventional distributions. Then, the CDE is identified from observational data as
CDE = τ^{CDE} := E_{X | Ã=1}[ E[Y | η(X), Ã=1] − E[Y | η(X), Ã=0] ],  (3.4)
where η(X) := (Q(0, X), Q(1, X)).
Proof. We first prove that the two-dimensional confounding part η(X) satisfies positivity. Since (Q(0, X), Q(1, X)) = (E[Y | A=0, X_{A∧Z}, X_Z], E[Y | A=1, X_{A∧Z}, X_Z]) is a function of (X_{A∧Z}, X_Z), the following equations hold:
P(A=1 | Q(0, X), Q(1, X)) = E(A | Q(0, X), Q(1, X))
  = E[ E(A | X_{A∧Z}, X_Z) | Q(0, X), Q(1, X) ]
  = E[ P(A=1 | X_{A∧Z}, X_Z) | Q(0, X), Q(1, X) ].  (B.1)
As 0 < P(A=1 | X_{A∧Z}, X_Z) < 1, we have 0 < P(A=1 | Q(0, X), Q(1, X)) < 1. Furthermore, we have 0 < P(Ã=1 | Q(0, X), Q(1, X)) < 1 due to the almost-sure equivalence of A and Ã.
Since A = Ã, we can rewrite (3.1) by replacing A with à in the following form:
CDE = E_{X_{A∧Z}, X_Z | Ã=1}[ E(Y | do(Ã=1), X_{A∧Z}, X_Z) − E(Y | do(Ã=0), X_{A∧Z}, X_Z) ]
  = E_{X_{A∧Z}, X_Z | Ã=1}[ E(Y | Ã=1, X_{A∧Z}, X_Z) − E(Y | Ã=0, X_{A∧Z}, X_Z) ]
  = E_{X_{A∧Z}, X_Z | Ã=1}[ E(Y | Ã=1, X) − E(Y | Ã=0, X) ]
  = E_{X_{A∧Z}, X_Z | Ã=1}[ E(Y | Ã=1, Q(0, X), Q(1, X)) − E(Y | Ã=0, Q(0, X), Q(1, X)) ]
  = E_{X | Ã=1}[ E(Y | Ã=1, η(X)) − E(Y | Ã=0, η(X)) ].  (B.2)
The equivalence of the first and second lines holds because X_{A∧Z}, X_Z block all backdoor paths between à and Y (see Figure 2) and 0 < P(Ã=1 | Q(0, X), Q(1, X)) < 1; thus, the do-operator in the first line can be safely removed. The equivalence of the second and third lines is due to Q(Ã, X) = E(Y | Ã, X_{A∧Z}, X_Z), which holds subject to the causal model in Figure 2. The last equality is based on the fact that η(X) is a function of only X_{A∧Z} and X_Z (this can easily be checked using the definition of expectation).
(B.2) shows that (Q(0, X), Q(1, X)) is a two-dimensional confounding variable such that the CDE is identifiable when we adjust for it as the confounding part.
Note that if f and h are two invertible functions on ℝ, then (f(Q(0, X)), h(Q(1, X))) also suffices for the identification of the CDE, since the sigma-algebras generated by (Q(0, X), Q(1, X)) and (f(Q(0, X)), h(Q(1, X))) are the same, i.e.,
σ(Q(0, X), Q(1, X)) = σ(f(Q(0, X)), h(Q(1, X))).
Hence, we have
P(A = 1 | Q(0, X), Q(1, X)) = P(A = 1 | f(Q(0, X)), h(Q(1, X))),
E(Y | Q(0, X), Q(1, X)) = E(Y | f(Q(0, X)), h(Q(1, X))).  (B.3)
C ADDITIONAL EXPERIMENTS
We conduct additional experiments to show how the estimation of the causal effect changes 1) over different nonparametric models for the propensity score estimation, and 2) when using different double machine learning estimators. Specifically, for the first study, we apply different nonparametric models, as well as logistic regression, to the estimated confounding part η̂(X) = (Q̂₀(X), Q̂₁(X)) to obtain propensity scores. We use the ATT AIPTW in all of the above cases for causal effect estimation. For the second study, we fix the first two stages of the TI estimator, i.e., we apply the Q-Net for the conditional outcomes and compute propensity scores with Gaussian process regression, where the kernel function is the sum of a dot product and white noise. Estimated conditional outcomes and propensity scores are then plugged into different double machine learning estimators. We draw the following conclusions from these experiments.
The choice of nonparametric model is significant. Table 3 summarizes results from applying different regression models for propensity estimation. We can see that a suitable nonparametric model strongly increases the coverage proportion of the true causal estimand. Therefore, we conclude that the accuracy of causal estimation is highly dependent on the choice of nonparametric model. In practice, when there is prior information about the propensity score function, we should apply the most suitable nonparametric model to increase the reliability of the causal estimate.
The ATT AIPTW is consistently the best double machine learning estimator. Table 4 shows results from applying different double machine learning estimators. We apply estimators for both the average treatment effect (ATE) and the controlled direct effect (CDE). The bias of the "unadjusted" estimator τ̂^{naive} is also included in Table 4(a). In terms of absolute bias, the ATT AIPTW τ̂^{TI} is comparable with the other double machine learning estimators in most cases. In terms of the coverage proportion of confidence intervals, though it has lower rates in some cases, τ̂^{TI} consistently performs best; especially in high-confounding situations, its advantage is clear.
Estimator For each dataset, we compute the estimators as follows. Here, n₁ and n₀ denote the numbers of individuals in the treated and control groups, and n = n₁ + n₀ is the total number of individuals.
• "Unadjusted" baseline estimator: τ̂^{naive} = (1/n₁) ∑_{i: A_i=1} Y_i − (1/n₀) ∑_{i: A_i=0} Y_i
• "Outcome-only" estimator: τ̂^Q = (1/n₁) ∑_{i: A_i=1} [Q̂_{1,i} − Q̂_{0,i}]
• ATT AIPTW: τ̂^{TI} = (1/n₁) ∑_{i=1}^{n} [ A_i (Y_i − Q̂_{0,i}) − (1 − A_i)(Y_i − Q̂_{0,i}) ĝ_i/(1 − ĝ_i) ]
D DISCUSSION OF LOW COVERAGE
In this section, we discuss why the confidence intervals we obtain (see Table 1) have lower coverage than the nominal 95% level. We conduct diagnostics and find that the inaccuracy of the estimates of Q is responsible for the low coverage. We compute absolute biases, variances, and coverages of τ̂^{TI} at different mean squared errors Ê[(Q − Q̂)²], obtained by using different numbers of datasets. According to Figures 4 and 5, as the mean squared error of Q̂ increases, the bias of τ̂^{TI} grows and its coverage drops. Specifically, the highest coverage in each setting is almost 95% (using the 50 datasets with the most accurate conditional outcome estimates). In practice, one direct way to improve the TI estimator's accuracy is to apply better NLP models so that more accurate conditional outcome estimates can be obtained. | 1. What is the focus of the paper regarding causal effect estimation?
2. What are the strengths and weaknesses of the proposed approach in addressing the issue of confounders and treatment?
3. How does the proposed method differ from other approaches, particularly the "naively adjust for all the text" method?
4. What are some concerns or suggestions for improving the paper, such as providing more clarification, adding simulations, or discussing the bias of the new method?
5. How does the paper contribute to the field of causal effect estimation, and how does it compare to prior works in this area? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper concerns causal effect estimation when the confounders and treatment are both derived from texts. One issue herein is that when adjusting for the entire text, there is a violation of the overlap assumption which is required for drawing valid causal conclusions. The proposed solution is to use supervised representation learning to produce a data representation that preserves confounding information while eliminating information that is only predictive of the treatment.
Strengths And Weaknesses
+1. The paper is well-written and technically correct.
+2. There are theoretical justifications for the proposed methods.
+3. The proposed method is backed up by simulations and two real data applications (Amazon reviews and consumer complaints).
+4. The topic of the paper is increasingly important and the paper is very timely.
-1. There seems to be a small gap between the advocated main idea in Figure 2 (namely, that one should eliminate the components of X that are predictive of A only) and the proposed method, which does not specifically make an effort to eliminate these components. Specifically, the authors propose to use a three-head dragonnet where two heads predict Q0 and Q1 and one head predicts A. In the end, only the estimated Q0 and Q1 are used for making the causal effect estimation. Should I understand that the estimated A in the dragonnet represents the components in X predictive of A? If that's the case, how can we ensure that the two heads for Q0 and Q1 do not use information in X_A? If not, how should we make the connection between the conceptual idea in Figure 2 and the implementation in Figure 3?
-2. Related to (-1) above, it seems that a critical point is that Equation (3.3), Q(\tilde A, X) := E(Y | \tilde A, X), is in fact E(Y | \tilde A, X_{A∧Z}, X_Z). This seems to suggest that although conceptually the CDE in (3.2) should be based on X_{A∧Z}, X_Z, which are unknown, in reality one only needs to build models for Q(A, X) using the entire X. In this case, how does the proposed method differ from the naive method that "naively adjusts for all the text as the confounding part"? Some clarification is needed.
-3. It would be helpful to add the "naively adjust for all the text" method to the simulation, to see how this method is biased and problematic.
-4. Clarification needed: following Theorem 1, the authors say "We emphasize that, regardless of whether the overlap condition holds or not, the propensity score of η(X) in condition 2 is accessible and meaningful." There is no η(X) in condition 2, and since η(X) is the outcome model, it does not have a propensity score.
-5. Abbreviation: on page 7, it says "AIPTW is double robust". AIPTW was not previously defined. AIPTW = augmented inverse probability of treatment weighting? If the estimator in (4.8) is meant, this should be stated.
-6. Bias: about Table 1, the authors say "bias of the new method is significantly lower than the bias of the outcome-only estimator". Bias can be both positive and negative, and hence lower bias is not necessarily better. Do you in fact mean average estimation error or root mean squared estimation error? It would be good to define the bias being reported in the paper.
-7. Simulation: it is counter-intuitive that low confounding actually leads to a low coverage rate of the confidence intervals. In randomized trials (zero confounding), shouldn't the confidence interval be valid? Some explanation/discussion is needed.
-8. Please explain the specific differences between the current work and Claudia Shi, David M. Blei, and Victor Veitch, "Adapting neural networks for the estimation of treatment effects", Advances in Neural Information Processing Systems, 2019.
-9. I am not sure how novel the proof of Theorem 2 is. The stated results seem to inherit from double machine learning in Chernozhukov's work. The authors should comment on how their proof differs from Chernozhukov's. This is important as this part is claimed to be the second main contribution of the paper.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written, technically sound, and provides theoretical justifications. Its clarity can be improved in a couple of places. The authors should make a more compelling case for the paper's novelty (both in methods and theory). |
ICLR | Title
Azimuthal Rotational Equivariance in Spherical CNNs
Abstract
In this work, we analyze linear operators from L²(S²) → L²(S²) which are equivariant to azimuthal rotations, that is, rotations around the z-axis. Several high-performing neural networks defined on the sphere are equivariant to azimuthal rotations, but not to full SO(3) rotations. Our main result is to show that a linear operator acting on band-limited functions on the sphere is equivariant to azimuthal rotations if and only if it can be realized as a block-diagonal matrix acting on the spherical harmonic expansion coefficients of its input. Further, we show that such an operation can be interpreted as a convolution, or equivalently, a correlation in the spatial domain. Our theoretical findings are backed up with experimental results demonstrating that a state-of-the-art pipeline can be improved by making it equivariant to azimuthal rotations.
1 INTRODUCTION
Signals defined on the surface of a sphere arise frequently in applications. For example, omnidirectional cameras are increasingly being utilized in applications that benefit from a large field of view, such as visual odometry and mapping (Matsuki et al., 2018). Spherical signals may also arise from other forms of measurements, such as magnetoencephalography images of brain activity or from planetary data like surface temperature or wind speed measurements (Mudigonda et al., 2017).
Convolutional neural networks (CNNs) (LeCun et al., 1998) have achieved remarkable success in the analysis of images and other data that is naturally arranged in a planar, rectangular grid, but applying them to spherical data has proven more challenging. The reason for this is that it is impossible to map the sphere to a flat surface without introducing distortions. For instance, in the equirectangular projection of spherical images, objects are increasingly stretched out as they approach the poles.
To remedy this, several alternatives to the 2D convolution specifically designed for functions on the sphere have been proposed in the recent literature. Some of these run a 2D CNN on an equirectangular projection of the spherical signal, but modify the kernel (Su & Grauman, 2019) or the sampling pattern of the kernel (Coors et al., 2018), or run the CNN on projections to different tangent planes in order to minimize the effect of the distortion (Eder et al., 2020). Other approaches discretize the sphere, where the vertices are interpreted as nodes in a graph, allowing them to apply the machinery of Laplacian-based graph convolutions (Defferrard et al., 2016; 2019) to the problem.
One of the main properties that make regular CNNs so successful is their equivariance to translations. In fact, it is known from abstract algebra that convolution is deeply tied to the concept of equivariance (Kondor & Trivedi, 2018). This insight was exploited by Esteves et al. (2018) and Cohen et al. (2018), who define network layers equivariant to SO(3) rotations. This is accomplished by transforming the input using the spherical Fourier transform (SFT) and then performing convolutions in the spectral domain. These networks have the appealing property of resting firmly on the abstract algebraic underpinnings of convolution.
However, some spherical data comes equipped with a natural orientation. For instance, imagery captured from a self-driving vehicle can be aligned with the gravity direction using information from an inertial measurement unit (IMU). Brain scans and planetary data also come with a preferred orientation, and it has been shown that employing convolutions that utilize this information (and thus are not SOp3q equivariant), can lead to improved performance in classification and segmentation tasks (Jiang et al., 2019). Instead, what is required in these applications is a convolution which
is equivariant to SO(2) transformations, i.e., azimuthal rotations, of the input data, but where the requirement of full SO(3) equivariance may be unnecessarily restrictive. In this paper, we follow the line of work that examines spherical convolutions in the spectral domain (Kondor et al., 2018; Esteves et al., 2018; Cohen et al., 2018), and consider the problem of how to design SO(2) equivariant convolutions. Specifically, we make the following contributions:
• Our main contribution is theoretical: We perform a complete characterization of all possible azimuthal-rotation equivariant linear operators T : L²(S²) → L²(S²) of band-limited functions. We show that these can be parametrized naturally as block-diagonal matrices of complex numbers that act on the spherical harmonic expansion coefficients of their input.
• In addition to the Fourier space characterization, we also show how these operations can be interpreted as correlations between the input signal and filters in the spatial domain.
• We demonstrate that the correlations are natural generalizations of the convolutions proposed by Kondor et al. (2018) and Esteves et al. (2018), and how these, as well as those of Jiang et al. (2019), can be formulated as special cases of our framework by parametrizing the elements of the block-diagonal matrices corresponding to the operator T.
• Through experiments on two datasets, we show that by employing azimuthal-rotation equivariant correlations, we can recreate a state-of-the-art neural network architecture (Jiang et al., 2019) in our framework, allowing it to benefit from improved equivariance.
The paper is organized as follows. Sec. 2 gives some preliminaries and Sec. 3 introduces the notion of equivariance. Our main theoretical results on azimuthal equivariance and azimuthal correlations are given in Sec. 4 and Sec. 5, respectively. Sec. 6 provides an in-depth summary of related work and finally, we present experimental results and comparisons in Section 7.
2 PRELIMINARIES
A point ω on the unit sphere S² can be parametrized by spherical coordinates ω(θ, φ) = (cos φ sin θ, sin φ sin θ, cos θ), where θ is the polar angle measured down from the z-axis and φ the azimuthal angle measured from the x-axis (see Fig. 2). The angle θ varies between 0 and π, while φ varies between 0 and 2π.
The Hilbert space L²(S²) consists of all functions f : S² → ℂ which are square integrable on the unit sphere, and comes equipped with the inner product
⟨f, h⟩ = ∫_{S²} f h̄ dω = ∫_{φ=0}^{2π} ∫_{θ=0}^{π} f(θ, φ) h̄(θ, φ) sin θ dθ dφ.
The spherical harmonics are a set of functions on the sphere that form an orthogonal basis of L²(S²). This means that a function f ∈ L²(S²) can be represented in the orthogonal basis via
f(θ, φ) = ∑_{l=0}^{∞} ∑_{m=−l}^{l} f̂_{lm} Y_l^m(θ, φ),  (1)
where Y_l^m is the spherical harmonic of degree l and order m, and the f̂_{lm} ∈ ℂ are the (spherical) Fourier coefficients. Note that we always have −l ≤ m ≤ l. Here,
Y_l^m(θ, φ) = (−1)^m √[ (2l + 1)(l − m)! / (4π (l + m)!) ] P_l^m(cos θ) e^{imφ},  (2)
where P_l^m is the associated Legendre function of degree l and order m. For the exact definition of the Legendre functions, see Driscoll & Healy (1994). For our purposes, it will be important to know that, for fixed m, ∫_{θ=0}^{π} P_k^m(cos θ) P_l^m(cos θ) sin θ dθ = 0 whenever k ≠ l. Similarly, as the spherical harmonics form a complete orthogonal basis, we have ⟨Y_l^m, Y_{l′}^{m′}⟩ = δ_{l,l′} δ_{m,m′}. Given f, the Fourier coefficients f̂_{lm} can be computed as f̂_{lm} = ⟨f, Y_l^m⟩ = ∫_{S²} f Ȳ_l^m dω.
3 EQUIVARIANCE AND LINEAR OPERATORS
With the notation in place, we are now in a position to turn our attention towards equivariance. We will start by looking at the group of translations over the reals, which will be insightful for the further analysis. Then, we will turn to functions on S² and the group of rotations SO(3).
3.1 TRANSLATIONS
For f ∈ L²(ℝⁿ) and t ∈ ℝⁿ, the translation of f by t is the function Λ_t f(x) given by Λ_t f(x) = f(x − t). Clearly, ‖Λ_t f‖₂ = ‖f‖₂ for all f ∈ L²(ℝⁿ). So, Λ_t is a bounded, linear operator from L²(ℝⁿ) to L²(ℝⁿ). We say that a linear operator T is equivariant with respect to translations if, for all t ∈ ℝⁿ, we have
Λ_t T = T Λ_t,  (3)
which means that the linear operator T commutes with the translation Λ_t.
For h ∈ L²(ℝⁿ), let T_h be the operator performing convolution of f ∈ L²(ℝⁿ) with h, defined by the function
T_h f(x) = (h ∗ f)(x) = ∫_{ℝⁿ} h(y) f(x − y) dy,  ∀x ∈ ℝⁿ.  (4)
This operator is equivariant with respect to translations. Indeed, for t ∈ ℝⁿ,
T_h Λ_t f(x) = ∫_{ℝⁿ} h(y) f(x − y − t) dy = (h ∗ f)(x − t) = T_h f(x − t) = Λ_t T_h f(x).  (5)
It turns out that Eq. (4) is the most general form of a translation equivariant operator for functions on Rn. If T is a translation equivariant operator, then there exists a function h such that Tf “ h ˚ f for all f , provided one is allowed to pick h from the (larger) function space that also includes distributions. This is a classical result in functional analysis, see Hörmander (1960) for a proof.
3.2 ROTATIONS
Now, let us consider a real-valued function f defined on S². Let R ∈ SO(3) be a rotation. Then we define Λ_R as the operator that rotates the graph of the function f : S² → ℝ by R. Specifically, Λ_R f(ω) = f(R⁻¹ω). An operator T is equivariant with respect to rotations if, for all R ∈ SO(3), we have
T Λ_R = Λ_R T,
i.e., if it commutes with the rotation operator.
The characterization of all rotation equivariant operators is well known in the literature. In Dai & Xu (2013), it is shown that a linear, bounded operator T : L²(S²) → L²(S²) is equivariant to rotations if and only if there exists a sequence of numbers {μ̂_l}, l = 0, 1, …, such that
\widehat{Tf}_{lm} = μ̂_l f̂_{lm}  (6)
is fulfilled for all Fourier coefficients f̂_{lm} indexed by l and m of an arbitrary function f ∈ L²(S²). Such linear operators T can also be interpreted as convolutions. In Driscoll & Healy (1994), spherical convolution is defined as
(h ∗ f)(ω) = ∫_{R∈SO(3)} h(Rη) f(R⁻¹ω) dR.  (7)
Here, ω is any point on the sphere and η the north pole. The measure is dR = sin θ dθ dφ dψ in terms of Euler angle coordinates of a rotation. It is also proven that the Fourier transform of the convolution is given by
\widehat{h ∗ f}_{lm} = 2π √(4π / (2l + 1)) ĥ_{l0} f̂_{lm}.  (8)
Note that only the coefficients of order m = 0 of h are present in the formula. Filters which only have terms of order m = 0 in their Fourier expansion are known as zonal filters, and consequently they are constant for fixed θ when varying φ. Also note that the convolution fulfills (6), so it is indeed a rotation equivariant operation. Conversely, it is also clear from (6) that every rotation equivariant bounded linear operator T can be associated with such a filter h.
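In the band-limited case, the spectral form (6) is simple to apply in code; a minimal sketch (our own, with coefficients stored in a dict keyed by (l, m)) is:

```python
def so3_equivariant_filter(f_hat, mu_hat):
    """Sketch of (6): an SO(3)-equivariant operator scales every coefficient of
    degree l by a single number mu_hat[l]. f_hat maps (l, m) -> coefficient."""
    return {(l, m): mu_hat[l] * c for (l, m), c in f_hat.items()}
```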
4 AZIMUTHAL-ROTATION EQUIVARIANT LINEAR OPERATORS
At last, we are ready to perform our analysis of azimuthal-rotation equivariant linear operators. Let T : L²(S²) → L²(S²) be a linear operator acting on functions on the unit sphere. What conditions must be imposed on T in order for it to be equivariant to rotations around the z-axis? Formally, we should have, for all azimuthal rotations R_φ ∈ SO(3), that T Λ_{R_φ} = Λ_{R_φ} T.
Consider the action of T on an arbitrary function f ∈ L²(S²). Let f̂_{lm} be its expansion coefficients according to (1). Applying T to this function gives, due to the linearity of T,
Tf(θ, φ) = ∑_{l=0}^{∞} ∑_{m=−l}^{l} f̂_{lm} T Y_l^m(θ, φ).  (9)
The action of T is thus fully determined by its action on the spherical harmonic basis functions. In particular, T will be equivariant if and only if its action on each spherical harmonic basis function is equivariant.
Let g_{lm}(θ, φ) = T Y_l^m(θ, φ) be the action of T on Y_l^m. Since g_{lm} ∈ L²(S²), we can expand it as
g_{lm}(θ, φ) = ∑_{l′=0}^{∞} ∑_{m′=−l′}^{l′} ĝ^{lm}_{l′m′} Y_{l′}^{m′}(θ, φ).  (10)
The requirement that T be equivariant then gives, for all azimuthal angles φ₀,
T[Y_l^m(θ, φ − φ₀)] = g_{lm}(θ, φ − φ₀).  (11)
The left-hand side becomes, after using the definition of the spherical harmonic Y_l^m in (2), moving the constant phase e^{−imφ₀} outside T, and using the definition of g_{lm} and its expansion (10):
T[Y_l^m(θ, φ − φ₀)] = ∑_{l′=0}^{∞} ∑_{m′=−l′}^{l′} ĝ^{lm}_{l′m′} e^{−imφ₀} Y_{l′}^{m′}(θ, φ).  (12)
The right-hand side of (11) is simply g_{lm}(θ, φ − φ₀) = ∑_{l′=0}^{∞} ∑_{m′=−l′}^{l′} ĝ^{lm}_{l′m′} e^{−im′φ₀} Y_{l′}^{m′}(θ, φ). So the azimuthal equivariance condition (11) requires that the above two expressions are equal, i.e.,
∑_{l′=0}^{∞} ∑_{m′=−l′}^{l′} ĝ^{lm}_{l′m′} e^{−imφ₀} Y_{l′}^{m′}(θ, φ) = ∑_{l′=0}^{∞} ∑_{m′=−l′}^{l′} ĝ^{lm}_{l′m′} e^{−im′φ₀} Y_{l′}^{m′}(θ, φ).  (13)
Now, two functions in L²(S²) are identical if and only if their coefficients in the basis are identical. This gives us the constraint ĝ^{lm}_{l′m′} e^{−imφ₀} = ĝ^{lm}_{l′m′} e^{−im′φ₀}. Thus, for any non-zero coefficient ĝ^{lm}_{l′m′}, we must have m = m′. The expansion (10) of g_{lm} must therefore have the form
g_{lm}(θ, φ) = ∑_{l′=|m|}^{∞} ĝ^{lm}_{l′m} Y_{l′}^m(θ, φ).  (14)
The inner sum has vanished, since the terms are only non-zero for m = m′, and the sum starts at l′ = |m| since no terms exist in (10) for l′ < |m|. The coefficients ĝ^{lm}_{l′m} thus parametrize the azimuthal-rotation equivariant transform T. To summarize, we have proven the following.
Proposition 1. A linear, bounded operator T : L²(S²) → L²(S²) is equivariant to azimuthal rotations if and only if there exists a sequence of numbers {μ̂^{lm}_{l′}} such that
\widehat{Tf}_{lm} = ∑_{l′=|m|}^{∞} μ̂^{lm}_{l′} f̂_{l′m}  (15)
is fulfilled for all Fourier coefficients f̂_{lm} indexed by l, m of an arbitrary function f ∈ L²(S²).
Note that T maps a spherical harmonic Y_l^m to a linear combination of spherical harmonics with the same order m. Thus, another way of stating the proposition is that a bounded linear operator T : L²(S²) → L²(S²) is equivariant to azimuthal rotations if and only if the subspaces spanned by the spherical harmonics of a fixed order form invariant subspaces under the action of T.
This result has a natural interpretation if we restrict ourselves to spherical harmonics of some maximum degree L, i.e., when we are working with band-limited signals. This occurs commonly in practice, e.g., with a signal obtained through an equiangular sampling on the sphere. If the expansion coefficients are listed in a vector and grouped by m-values, then the matrix representation of T will be a block-diagonal matrix (cf. Fig. 1).
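Concretely, for band limit L the block acting on order m has size (L + 1 − |m|) × (L + 1 − |m|); a sketch of applying such an operator (our own helper, generalizing the diagonal SO(3) case above) is:

```python
import numpy as np

def azimuthal_equivariant_apply(f_hat, blocks, L):
    """Sketch of Proposition 1 for band-limited signals: group coefficients by
    order m and apply one complex block per m. f_hat maps (l, m) -> coefficient;
    blocks[m] is an (L+1-|m|) x (L+1-|m|) complex matrix."""
    out = {}
    for m in range(-L, L + 1):
        ls = range(abs(m), L + 1)                     # degrees present for this m
        v = np.array([f_hat[(l, m)] for l in ls])     # stack order-m coefficients
        for l, val in zip(ls, blocks[m] @ v):         # block-diagonal action
            out[(l, m)] = val
    return out
```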
The SO(3) equivariant convolution as defined by Driscoll and Healy, cf. (7), with the transform formula given in (8), is a special case of the above proposition where the matrix representation of T is diagonal and where the coefficients for all spherical harmonics of the same degree are identical, cf. Fig. 1.
5 AZIMUTHAL CONVOLUTIONS AND CORRELATIONS ON S²
Translation and rotation equivariant operations can be realized as convolutions, or equivalently, correlations. Previously we characterized azimuthal-rotation equivariant linear operators in the Fourier domain (Proposition 1). A natural question arises: Can we interpret these operators as convolutions or correlations? Below, we show that these operators can be interpreted in terms of a correlation.
Let h, f : S² → ℝ. Then, a first attempt at a definition of correlation is (h ⋆ f)(φ) = ∫_{ω∈S²} h(R_φ⁻¹ω) f(ω) dω. However, such a correlation is only defined on S¹, since it is a function of only the azimuthal angle φ. Instead, we extend the domain to S² by considering a filter parametrized by the polar angle θ, i.e., h = h^θ. So, by varying θ, we get a different filter and consequently a different correlation response.
Definition 1. Let h^θ, f : S² → ℝ. Then, we define the azimuthal correlation as
(h^θ ⋆ f)(θ, φ) = ∫_{ω∈S²} h^θ(R_φ⁻¹ω) f(ω) dω.
Let f̂_{lm} and ĥ^θ_{lm} be the Fourier expansion coefficients of f and h^θ. Further, we expand each coefficient ĥ^θ_{lm} in the associated Legendre basis,
ĥ^θ_{lm} = ∑_{k=|m|}^{∞} ĥ^k_{lm} P_k^m(cos θ).  (16)
Now we can show that every azimuthal-rotation equivariant linear operator can be expressed as a correlation. See Fig. 2 for an illustration.
Proposition 2. For functions h^θ, f in L²(S²), the transform of the correlation is given by
\widehat{h^θ ⋆ f}_{lm} = (−1)^m √[ 4π (l + m)! / ((2l + 1)(l − m)!) ] ∑_{l′=|m|}^{∞} ĥ^l_{l′m} f̂_{l′m},  (17)
where f and h^θ are expanded in Fourier and Legendre series according to (16).
The proof is straightforward by substituting the expansions and simplifying; see the Appendix.
6 COMPARISON OF OUR RESULTS WITH THE LITERATURE
Our work is closely related to, and inspired by, recent theoretical developments on spherical CNNs and SO(3) equivariance, including Cohen et al. (2018); Esteves et al. (2018); Kondor & Trivedi (2018); Defferrard et al. (2019). See also Esteves (2020) for a more comprehensive review. However, these works do not consider the case of purely azimuthal equivariance, and the characterization we present in Proposition 1 is, to our knowledge, new. Our second result, Proposition 2, which shows that azimuthal equivariant operators can be expressed as correlations, is by no means a surprise, but it does not follow from, for instance, the general framework of Kondor & Trivedi (2018).
Another closely related work is that of Jiang et al. (2019). They present an efficient and powerful framework for analyzing spherical images with state-of-the-art performance. It is based on learning filters that are linear combinations of differential operators, or more specifically, ∂/∂θ, ∂/∂φ and the Laplacian ∇². The resulting filters are not SO(3) equivariant, but they are indeed linear, azimuthal equivariant operators (and thus azimuthal correlations, cf. Proposition 2) and consequently a special case of our framework. However, the discretization of the sphere into an icosahedral mesh at lower resolutions breaks the equivariance in Jiang et al. (2019). This hurts generalization performance, as confirmed by our experiments. This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.
7 EXPERIMENTS
In this section we implement neural networks based on the presented framework. First, we verify that a correlation layer L²(S²) → L²(S²) designed according to Proposition 1 is indeed equivariant to azimuthal rotations. Then, as in Cohen et al. (2018); Jiang et al. (2019), we perform experiments on the Omni-MNIST and ModelNet40 (Wu et al., 2015) datasets, where the task is to perform classification of spherical images and 3D shape models, respectively. We demonstrate that the state-of-the-art architecture of Jiang et al. (2019) can be recreated in our framework, that the classification results of both approaches on unrotated data are virtually identical, and that our approach shows increased generalization performance to unseen SO(2) rotations on the ModelNet40 test set.
7.1 EQUIVARIANCE ERROR
To evaluate the equivariance error in our network layers, we follow the approach of Cohen et al. (2018). We create prototype networks Φ by composing randomly initialized differential-type correlation layers and ReLUs. We sample n = 500 azimuthal rotations R_i and n signals f_i with 12 channels each. Next we compute the equivariance error
Δ = (1/n) ∑_{i=1}^{n} std(Λ_{R_i} Φ(f_i) − Φ(Λ_{R_i} f_i)) / std(Φ(f_i)),
which should be zero for a perfectly equivariant network. The obtained equivariance errors are presented in Fig. 3a. The low order of the error, and the fact that it does not increase much with the number of layers, indicates that the correlations are azimuthally equivariant.
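A sketch of this check (with net and rotate as assumed callables, not taken from the paper's code) is:

```python
import numpy as np

def equivariance_error(net, signals, rotations, rotate):
    """Compare rotating-then-applying the network with applying-then-rotating.
    net maps a sampled signal to a sampled signal; rotate(f, R) applies an
    azimuthal rotation R to a sampled signal f."""
    errs = [np.std(rotate(net(f), R) - net(rotate(f, R))) / np.std(net(f))
            for f, R in zip(signals, rotations)]
    return np.mean(errs)
```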
7.2 DIGIT CLASSIFICATION ON OMNI-MNIST
The Omni-MNIST dataset is obtained by projecting the images of the MNIST handwritten digit dataset onto the surface of a sphere. The resulting dataset consists of 60,000 spherical images for training, and 10,000 for testing. As in Jiang et al. (2019), the images are projected onto the north pole, and then rotated to the equator.
We run the method of Jiang et al. (2019) as a reference, using their implementation, and also recreate their architecture in our SFT-based framework. The network consists of a stack of residual blocks. Each block contains 1 × 1 convolutions that mix feature maps, as well as spherical convolutional layers that process each feature map in parallel by forming a linear combination of the image, its horizontal and vertical derivatives, and its Laplacian. Note that since all of these operations are SO(2) equivariant, they can each be represented by a block-diagonal matrix acting on the spherical harmonic expansion coefficients of the feature maps. Batch-normalization and ReLU are also performed within the residual blocks. Please refer to the paper of Jiang et al. (2019) for more details about the network architecture.
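As a sketch (again assuming the per-order coefficient layout used earlier), the spectral blocks of these layers are simple: ∂/∂φ and the Laplace–Beltrami operator are even diagonal, since ∂Y_l^m/∂φ = im Y_l^m and ∇²Y_l^m = −l(l+1) Y_l^m, while ∂/∂θ couples different degrees of the same order and therefore gives a dense block per m (passed in below as a precomputed matrix, an assumption):

```python
import numpy as np

def dphi_block(L, m):
    # d/dphi multiplies f_{lm} by i*m for every degree l = |m|, ..., L.
    n = L - abs(m) + 1
    return 1j * m * np.eye(n)

def laplacian_block(L, m):
    # Laplace-Beltrami eigenvalues: -l(l+1) per degree l.
    ls = np.arange(abs(m), L + 1)
    return np.diag((-ls * (ls + 1)).astype(complex))

def differential_layer_block(L, m, w, dtheta_block):
    # Learned linear combination: w = (w_id, w_phi, w_lap, w_theta).
    n = L - abs(m) + 1
    return (w[0] * np.eye(n) + w[1] * dphi_block(L, m)
            + w[2] * laplacian_block(L, m) + w[3] * dtheta_block)
```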
Downsampling is performed in the spectral domain, where we simply reduce the maximum degree L of included SFT coefficients by a factor of two, rounding up.
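A minimal sketch of this downsampling step, assuming the same per-order coefficient dictionary as in the earlier snippets:

```python
import math

def spectral_downsample(f_hat, L):
    # New band limit: reduce L by a factor of two, rounding up.
    L_new = math.ceil(L / 2)
    kept = {m: v[: L_new - abs(m) + 1]          # keep degrees l <= L_new
            for m, v in f_hat.items() if abs(m) <= L_new}
    return kept, L_new
```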
Both our implementation and the original implementation by Jiang et al. (2019) are trained with stochastic gradient descent, using the Adam optimizer and a batch size of 16. The initial learning rate is 0.01, and is decreased by a factor of two every ten epochs. For each epoch during training, we compute the accuracy on the test set, using both the original version and a randomly azimuthally rotated version of the test set. The results are shown in Fig. 4a. The training is likewise run on both an unrotated and a randomly azimuthally rotated version of the training set.
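For reference, a hedged sketch of this optimization setup in PyTorch (the placeholder model and loop skeleton are assumptions, not the actual training script):

```python
import torch

model = torch.nn.Linear(8, 8)  # placeholder for the spherical network
opt = torch.optim.Adam(model.parameters(), lr=0.01)
# Halve the learning rate every ten epochs, as described above.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)
for epoch in range(30):
    # ... one pass over the training data (forward, loss, opt.step()) ...
    sched.step()
```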
7.3 3D SHAPE CLASSIFICATION ON MODELNET40
ModelNet40 (Wu et al., 2015) is a dataset containing CAD models of 40 different object classes. We run our experiments on the aligned version of the dataset provided by Sedaghat et al. (2017) and follow Cohen et al. (2018) by creating spherical images by ray casting the CAD models onto an enclosing sphere. Each model is represented by six channels – the ray length, the sine of the surface angle and the cosine of the surface angle, as well as the same three properties obtained from ray casting the convex hull of the model.
We compare to the implementation by Jiang et al. (2019). We recreate their network architecture (cf. Sec. 7.2) and train using stochastic gradient descent with Adam and a batch size of 16. The learning rate is set to 5 × 10⁻³ and decreased by a factor of 0.7 every 25 epochs. Again, we train and evaluate the model both on the aligned ModelNet40 dataset and on a version of the dataset with each CAD model rotated by a random azimuthal rotation. We train for 100 epochs and show the results in Fig. 4b. Of special note is the case with a non-rotated training set and a rotated test set, where we see that our network generalizes better than the one by Jiang et al. (2019). The reason for the low degree of generalization from non-rotated to rotated data in the baseline model might be the hierarchical nature of the icosahedral grid and down-sampling used by Jiang et al. (2019), where a grid point on the sphere influences the output more if it belongs to a lower level in the hierarchy. If an important point in the input is rotated to a grid point in a higher hierarchy level, the output might change drastically. This problem is not present in our design, since we downsample in the frequency domain and the information from every grid point is therefore treated equally.
Additionally, we created another network using the same architecture as before, but with general block-diagonal correlation filters (instead of learning a linear combination of differential filters). The performance results of this network are shown in Fig. 3b. The accuracy of the general network is comparable to the accuracy of the networks with differential feature maps in Fig. 4b, but it should be noted that the number of trainable parameters is much larger.
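A back-of-envelope comparison of the two parametrizations (our estimate, per input/output channel pair, ignoring biases and counting each complex block entry as one parameter): the general block-diagonal filter learns every block entry, Σ_m (L − |m| + 1)², whereas the differential parametrization learns only four mixing weights.

```python
L = 32
general = sum((L - abs(m) + 1) ** 2 for m in range(-L, L + 1))
differential = 4  # weights for identity, d/dtheta, d/dphi, Laplacian
print(general, differential)  # 23969 vs. 4 for L = 32
```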
8 CONCLUSIONS
In this paper, we have examined how to design convolutions which operate on spherical data and are equivariant to SO(2) rotations. Specifically, we have performed a complete characterization of bounded, linear operators from L²(S²) to L²(S²) which exhibit SO(2) equivariance. We showed that, for band-limited signals, these can be realized as block-diagonal matrices in the spectral domain, and also demonstrated how these operators may be interpreted as correlations in the spatial domain. Using this framework, we implemented an existing state-of-the-art pipeline, which in our framework showed better generalization performance to SO(2) rotations not seen during training.
APPENDIX A: PROOF OF PROPOSITION 2
Proof. From the definition of correlation we obtain
(h_\theta \star f)(\theta, \phi) = \int_{\omega \in S^2} h_\theta(R_\phi^{-1}\omega) f(\omega) \, d\omega = \int_{\theta'=0}^{\pi} \int_{\phi'=0}^{2\pi} h_\theta(\theta', \phi') f(\theta', \phi + \phi') \sin\theta' \, d\theta' \, d\phi'. \quad (18)
Substituting in the Fourier expansions of f and h, and employing the orthogonality of the spherical harmonics, we get
(h_\theta \star f)(\theta, \phi) = \sum_{l'=0}^{\infty} \sum_{m'=-l'}^{l'} \hat{h}^{\theta}_{l'm'} \hat{f}_{l'm'} e^{i m' \phi}. \quad (19)
Now, employing the expansion (16) of \hat{h}^{\theta}_{l'm'} into the basis of the associated Legendre functions, we find
(h_\theta \star f)(\theta, \phi) = \sum_{l'=0}^{\infty} \sum_{m'=-l'}^{l'} \sum_{k=|m'|}^{\infty} \hat{h}_{k l' m'} P^{m'}_{k}(\cos\theta) \hat{f}_{l'm'} e^{i m' \phi}. \quad (20)
By reindexing this last sum, we can instead write it in the form
(h_\theta \star f)(\theta, \phi) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \sum_{l'=|m|}^{\infty} \hat{h}_{l l' m} \hat{f}_{l' m} P^{m}_{l}(\cos\theta) e^{i m \phi} \quad (21)

= \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \left[ (-1)^m \sqrt{\frac{4\pi (l+m)!}{(2l+1)(l-m)!}} \, \sum_{l'=|m|}^{\infty} \hat{h}_{l l' m} \hat{f}_{l' m} \right] Y^{m}_{l}(\theta, \phi), \quad (22)
where, in the last equality, we have used the definition (2) of the spherical harmonics. Note that the last expression is a spherical harmonic expansion, and the expression in the brackets is the corresponding expansion coefficient, given by
\widehat{(h_\theta \star f)}_{lm} = (-1)^m \sqrt{\frac{4\pi (l+m)!}{(2l+1)(l-m)!}} \, \sum_{l'=|m|}^{\infty} \hat{h}_{l l' m} \hat{f}_{l' m}. \quad (23)
This proves the proposition.

1. What is the main contribution of the paper regarding neural networks operating on spherical signals?
2. What are the strengths of the proposed approach, particularly in its practical characterization of equivariant convolutions?
3. Do you have any concerns regarding the proof of Proposition 2?
4. How does the reviewer assess the limitations of the experimental results, such as the lack of comparison to other spherical CNNs and SO(3) equivariant methods?
5. What are some minor suggestions for improving the paper's clarity and presentation?
Summary: The authors discuss neural nets operating on spherical signals that are equivariant not to SO(3) rotations, but only to SO(2) rotations around the poles. The linear equivariant maps are characterized in Fourier space and in terms of an azimuthal correlation. Experimentally, the authors show the method is approximately equivariant and show in simple experiments that the performance is comparable to a prior spherical convolution, with improved equivariance.
Strengths:
- The authors make the important observation that in many cases in which spherical signals are processed, the relevant group to which one should be equivariant is SO(2), not SO(3).
- The authors make a practical characterization of the equivariant convolutions as restricted linear maps in Fourier space.
Weaknesses:
- Proposition 2 seems to lack an argument why Eq. 16 forms a complete basis for all functions h. The function h appears to be defined as any family of spherical signals parameterized by a parameter in [-pi/2, pi/2]. If that’s the case, why Eq. 16? As a concrete example, let \hat{h}^\theta_lm = 1 if l=m=1 and 0 otherwise, so constant in \theta. The only constant associated Legendre polynomial is P^0_0, so this h is not expressible in Eq. 16. Instead, it seems that additional assumptions on the family of spherical functions h are necessary to make the decomposition in Eq. 16, and thus Proposition 2, work. Hence, it looks like Proposition 2 doesn’t actually characterize all azimuthal correlations.
- In its discussion of SO(3) equivariant spherical convolutions, the authors do not mention the lift to SO(3) signals, which allows for more expressive filters than the ones shown in Figure 1.
- Can the authors clarify Figure 2b? I do not understand what is shown.
- The architecture used for the experiments is not clearly explained in this paper. Instead the authors refer to Jiang et al. (2019) for details. This makes the paper not self-contained.
- The authors appear to not use a fast spherical Fourier transform. Why not? This could greatly help performance. Could the authors comment on the runtime cost of the experiments?
- The sampling of the Fourier features to a spherical signal followed by a point-wise non-linearity is not exactly equivariant (as noted by Kondor et al. (2018)). Still, the authors note at the end of Sec. 6 that “This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.” Perhaps the authors can comment on that?
- The experiments are limited to MNIST and a single real-world dataset.
- Out of the many spherical CNNs currently in existence, the authors compare only to a single one. For example, comparisons to SO(3) equivariant methods would be interesting. Furthermore, it would be interesting to compare to SO(3) equivariant methods in which SO(3) equivariance is broken to SO(2) equivariance by adding to the spherical signal a channel that indicates the theta coordinate.
- The experimental results are presented in an unclear way. A table would be much clearer.
- An obvious approach to the problem of SO(2) equivariance of spherical signals is to project the sphere to a cylinder and apply planar 2D convolutions that are periodic in one direction and not in the other. This suffers from distortion of the kernel around the poles, but perhaps this wouldn’t be too harmful. An experimental comparison to this method would benefit the paper.
Recommendation: I recommend rejection of this paper. I am not convinced of the correctness of Proposition 2, and Proposition 1 is similar to equivariance arguments made in prior work. The experiments are limited in their presentation, the number of datasets, and the comparisons to prior work.
Suggestions for improvement:
- Clarify the issue around Eq. 16 and Proposition 2
- Improve the presentation of experimental results and add experimental details
- Evaluate the model on more datasets
- Compare the model to other spherical convolutions
Minor points / suggestions:
- When talking about the Fourier modes as numbers, perhaps clarify whether these are real or complex.
- In Def. 1, it is confusing to have theta twice on the left-hand side of the equation. It would be clearer if h did not have a subscript on the left-hand side.
1. What is the focus of the paper regarding convolution operations on the sphere?
2. What are the strengths and weaknesses of the proposed convolution operator compared to prior works?
3. Do you have any concerns about the experimental results and their relevance to real-world applications?
4. How does the reviewer assess the clarity and presentation of the paper's content?
5. Are there any questions or aspects that the reviewer would like the authors to address or improve upon?
This paper derives the theoretical basis for a convolution operator that is equivariant to azimuthal rotation. Prior works on spherical convolution study convolution operators that are equivariant to SO(3) rotation. In many use cases, however, full rotational invariance is not necessary or even harmful. This paper addresses this issue by deriving a more general convolution operator on the sphere and shows that prior works can be considered as a special case of the proposed operator. The proposed convolution definition has the desirable property of equivariant to SO(2) rotation with strong theoretical guarantee. It is also a generic operation that may be applied as the basic building block of spherical CNN in many applications.
While the proposed operator has the desirable rotational equivariance property, the authors fall short of discussing other important properties of CNNs. Besides translational invariance, the convolution operator in an ordinary CNN also benefits from 1) a localized kernel, and 2) being computationally and memory efficient. The proposed convolution kernel is global and has a high computational and memory overhead, which may limit its applicability to real-world data. The authors should provide some discussion of these aspects and show that they do not outweigh the benefits on target applications.
Besides the theoretical properties, another aspect that could be improved in this paper is the experiments. The purpose of the experiments is not clearly described, and they provide limited information. Given that the main contribution of this work is a more general convolution operation, the experiments should try to demonstrate how this generalized convolution operator is superior to prior spherical convolution operators. Some potential applications that may benefit from the proposed operator include object detection, scene understanding, etc. The authors should try to evaluate the method on these applications instead of MNIST. Also, the authors should explain 1) why the error increases in Fig. 3a, and 2) why the model still falls short of generalizing to rotated examples (or demonstrate that this gap is unrelated to rotational equivariance).
Another comment on the paper is that the presentation could be improved. More specifically, given that there are multiple convolution definitions and different types of equivariance, the authors should take care to avoid confusion in the writing.
ICLR | Title
Azimuthal Rotational Equivariance in Spherical CNNs
Abstract
In this work, we analyze linear operators from LpSq Ñ LpSq which are equivariant to azimuthal rotations, that is, rotations around the z-axis. Several high-performing neural networks defined on the sphere are equivariant to azimuthal rotations, but not to full SO(3) rotations. Our main result is to show that a linear operator acting on band-limited functions on the sphere is equivariant to azimuthal rotations if and only if it can be realized as a block-diagonal matrix acting on the spherical harmonic expansion coefficients of its input. Further, we show that such an operation can be interpreted as a convolution, or equivalently, a correlation in the spatial domain. Our theoretical findings are backed up with experimental results demonstrating that a state-of-the-art pipeline can be improved by making it equivariant to azimuthal rotations.
1 INTRODUCTION
Signals defined on the surface of a sphere arise frequently in applications. For example, omnidirectional cameras are increasingly being utilized in applications that benefit from a large field of view, such as visual odometry and mapping (Matsuki et al., 2018). Spherical signals may also arise from other forms of measurements, such as magnetoencephalography images of brain activity or from planetary data like surface temperature or wind speed measurements (Mudigonda et al., 2017).
Convolutional neural networks (CNNs) (LeCun et al., 1998) have achieved remarkable success in the analysis of images and other data that is naturally arranged in a planar, rectangular grid, but applying them to spherical data has proven more challenging. The reason for this is that it is impossible to map the sphere to a flat surface without introducing distortions. For instance, in the equirectangular projection of spherical images, objects are increasingly stretched out as they approach the poles.
To remedy this, several alternatives to the 2D convolution specifically designed for functions on the sphere have been proposed in the recent literature. Some of these run a 2D CNN on an equirectangular projection of the spherical signal, but modify the kernel (Su & Grauman, 2019), the sampling pattern of the kernel (Coors et al., 2018), or run the CNN on projections to different tangent planes in order to minimize the effect of the distortion (Eder et al., 2020). Other approaches discretize the sphere, where the vertices are intepreted as nodes in a graph, allowing them to apply the machinery of Laplacian-based graph convolutions (Defferrard et al., 2016; 2019) to the problem.
One of the main properties that make regular CNNs so successful is their equivariance to translations. In fact, it is known from abstract algebra that a convolution is deeply tied to the concept of equivariance (Kondor & Trivedi, 2018). This insight was exploited by Esteves et al. (2018) and Cohen et al. (2018), where they define network layers equivariant to SOp3q rotations. This is accomplished by transforming the input using the spherical Fourier transform (SFT) and then performing convolutions in the spectral domain. These networks have the appealing property of resting firmly on the abstract algebraic underpinnings of convolution.
However, some spherical data comes equipped with a natural orientation. For instance, imagery captured from a self-driving vehicle can be aligned with the gravity direction using information from an inertial measurement unit (IMU). Brain scans and planetary data also come with a preferred orientation, and it has been shown that employing convolutions that utilize this information (and thus are not SOp3q equivariant), can lead to improved performance in classification and segmentation tasks (Jiang et al., 2019). Instead, what is required in these applications is a convolution which
is equivariant to SOp2q transformations, i.e., azimuthal rotations, of the input data, but where the requirement of full SOp3q equivariance may be unnecessarily restrictive. In this paper, we follow the line of work that examines spherical convolutions in the spectral domain (Kondor et al., 2018; Esteves et al., 2018; Cohen et al., 2018), and consider the problem of how to design SOp2q equivariant convolutions. Specifically, we make the following contributions:
• Our main contribution is theoretical: We perform a complete characterization of all possible azimuthal-rotation equivariant linear operators T : L2pS2q Ñ L2pS2q of band-limited functions. We show that these can be parametrized naturally as block diagonal matrices of complex numbers that act on the spherical harmonic expansion coefficients of their input.
• In addition to the Fourier space characterization, we also show how these operations can be interpreted as correlations between the input signal and filters in the spatial domain.
• We demonstrate that the correlations are natural generalizations of the convolutions proposed by Kondor et al. (2018) and Esteves et al. (2018), and how these, as well as those of Jiang et al. (2019) can be formulated as special cases of our framework by parametrizing the elements of the block-diagonal matrices corresponding to the operator T .
• Through experiments on two datasets, we show that by employing azimuthal-rotation equivariant correlations, we can recreate a state-of-the-art neural network architecture (Jiang et al., 2019) in our framework, allowing it to benefit from improved equivariance.
The paper is organized as follows. Sec. 2 gives some preliminaries and Sec. 3 introduces the notion of equivariance. Our main theoretical results on azimuthal equivariance and azimuthal correlations are given in Sec. 4 and Sec. 5, respectively. Sec. 6 provides an in-depth summary of related work and finally, we present experimental results and comparisons in Section 7.
2 PRELIMINARIES
A point ω on the unit sphere S2 can be parametrized by spherical coordinates ωpθ, φq “ pcosφ sin θ, sinφ sin θ, cos θq, where θ is the polar angle measured down from the z-axis and φ the azimuthal angle measured from the x-axis (see Fig. 2). The angle θ varies between 0 and π, while φ varies between 0 and 2π.
The Hilbert space L2pS2q consists of all functions f : S2 Ñ C which are square integrable on the unit sphere, and comes equipped with the inner product
xf, hy “ ż
S2 fh̄dω “
ż 2π
φ“0
ż π
θ“0 fpθ, φqĞhpθ, φq sin θdθdφ.
The spherical harmonics are a set of functions on the sphere that form an orthogonal basis ofL2pS2q. This means that a function f P L2pS2q can be represented in the orthogonal basis via
fpθ, φq “ 8 ÿ
l“0
l ÿ
m“´l f̂lmY
m l pθ, φq, (1)
where Y ml is the spherical harmonic of degree l and order m, and the f̂lm P C are the (spherical) Fourier coefficients. Note that we always have ´l ď m ď l. Here,
Y ml pθ, φq “ p´1qm d
p2l ` 1qpl ´mq! 4πpl `mq! P m l pcos θqeimφ, (2)
where Pml is the associated Legendre function of degree l and order m. For the exact definition of Legendre functions, see Driscoll & Healy (1994). For our purposes, it will be important to know that for fixed m, şπ
θ“0 P m k pcos θqPml pcos θq sin θdθ “ 0 whenever k ‰ l. Similarly, as the spherical
harmonics form a complete orthogonal basis, we have xY ml , Y m 1
l1 y “ δl,l1δm,m1 . Given f , the Fourier coefficients f̂lm can be computed as f̂lm “ xf, Y ml y “ ş S2 fĚY ml dω.
3 EQUIVARIANCE AND LINEAR OPERATORS
With the notation in place, we are now in a position to turn our attention towards equivariance. We will start by looking at the group of translations over reals, which will be insightful for further analysis. Then, we will turn to functions on S2 and the group of rotations SOp3q.
3.1 TRANSLATIONS
For f P L2pRnq and t P Rn, the translation of f by t is the function Λtfpxq given by Λtfpxq “ fpx ´ tq. Clearly, ||Λtf ||2 “ ||f ||2 for all f P L2pRnq. So, Λt is a bounded and linear operator from L2pRnq to L2pRnq. We say that a linear operator T is equivariant with respect to translations if, for all t P Rn, we have
ΛtT “ TΛt, (3)
which means that the linear operator T commutes with translation Λt.
For h P L2pRnq, let Th be the operator performing convolution of f P L2pRnq with h, defined by the function
Thfpxq “ ph ˚ fqpxq “ ż
Rn hpyqfpx´ yqdy, @x P Rn. (4)
This operator is equivariant with respect to translations. Indeed, for t P Rn,
ThΛtfpxq “ ż
Rn hpyqfpx´ y ´ tqdy “ ph ˚ fqpx´ tq “ Thfpx´ tq “ ΛtThfpxq. (5)
It turns out that Eq. (4) is the most general form of a translation equivariant operator for functions on Rn. If T is a translation equivariant operator, then there exists a function h such that Tf “ h ˚ f for all f , provided one is allowed to pick h from the (larger) function space that also includes distributions. This is a classical result in functional analysis, see Hörmander (1960) for a proof.
3.2 ROTATIONS
Now, let us consider a real-valued function f defined on S2. Let R P SOp3q be a rotation. Then we define ΛR as the operator that rotates the graph of the function f : S2 Ñ R by R. Specifically, ΛRfpωq “ fpR´1ωq. An operator T is equivariant with respect to rotations, if for all R P SOp3q, we have
TΛR “ ΛRT,
i.e., if it commutes with the rotation operator.
The characterization of all rotation equivariant operators is well-known in the literature. In Dai & Xu (2013), it is shown that a linear, bounded operator T : L2pS2q Ñ L2pS2q is equivariant to rotations if and only if there exists a sequence of numbers tµ̂lu, l “ 0, 1, . . ., such that
pxTfqlm “ µ̂lf̂lm (6)
is fulfilled for all Fourier coefficients f̂lm indexed by l and m of an arbitrary function f P L2pS2q. Such linear operators T can also be interpretated as convolutions. In Driscoll & Healy (1994), spherical convolution is defined as
ph ˚ fqpωq “ ż
RPSOp3q hpRηqfpR´1ωqdR. (7)
Here, ω is any point on the sphere and η the north pole. The measure is dR “ sin θ dθ dφ dψ in terms of Euler angle coordinates of a rotation. It is also proven that the Fourier transform of the convolution is given by
pzh ˚ fqlm “ 2π
d
4π
p2l ` 1q
8 ÿ l“0 ĥl0f̂lm. (8)
Note that only the coefficients with orderm “ 0 for h are present in the formula. Filters, which only have terms of order m “ 0 in their Fourier expansion, are known as zonal filters and consequently they are constant for fixed θ when varying φ. Also note that the convolution fulfills (6), so it is indeed a rotation equivariant operation. Conversely, it is also clear from (6) that every bounded linear operator T can be associated with such a filter h.
4 AZIMUTHAL-ROTATION EQUIVARIANT LINEAR OPERATORS
At last, we are ready to perform our analysis of azimuthal-rotation equivariant linear operators. Let T : L2pS2q Ñ L2pS2q be a linear operator acting on functions on the unit sphere. What conditions must be imposed on T in order for it to have equivariance to rotations around the z-axis? Formally we should have, for all such azimuthal rotations Rφ P SOp3q, that TΛRφ “ ΛRφT.
Consider the action of T on an arbitrary function f P L2pS2q. Let f̂lm be its expansion coefficients according to (1). Applying T to this function gives, due to the linearity of T ,
Tfpθ, φq “ 8 ÿ
l“0
n ÿ
m“´l f̂lmTY
m l pθ, φq. (9)
The action of T is thus fully determined by its action on the spherical harmonic basis functions. In particular, it will be equivariant if and only if the spherical harmonic basis functions are equivariant.
Let glmpθ, φq “ TY ml pθ, φq be the action of T on Y ml . Since glm P L2pS2q, we can expand it,
glmpθ, φq “ 8 ÿ
l1“0
l1 ÿ
m1“´l1 ĝlml1m1Y
m1
l1 pθ, φq. (10)
The requirement that T be equivariant then gives, for all azimuthal angles φ0,
T rY ml pθ, φ´ φ0qs “ glmpθ, φ´ φ0q. (11)
The left hand side becomes, after using the definition of the spherical harmonic Y ml in (2) and moving the constant phase e´imφ0 outside T and using the definition of glm and its expansion (10):
T rY ml pθ, φ´ φ0qs “ 8 ÿ
l1“0
l1 ÿ
m1“´l1 ĝlml1m1e
´imφ0Y m 1
l1 pθ, φq. (12)
The right hand side of (11) is simply glmpθ, φ´ φ0q “ ř8 l1“0
řl1
m1“´l1 ĝ lm l1m1e
´im1φ0Y m 1
l1 pθ, φq. So the azimuthal equivariance condition (11) requires that the above two expressions are equal, i.e.,
8 ÿ
l1“0
l1 ÿ
m1“´l1 ĝlml1m1e
´imφ0Y m 1 l1 pθ, φq “ 8 ÿ
l1“0
l1 ÿ
m1“´l1 ĝlml1m1e
´im1φ0Y m 1
l1 pθ, φq. (13)
Now, two functions in L2pS2q are identical if and only if their coefficients in the basis are identical. This gives us the constraint ĝlml1m1e ´imφ0 “ ĝlml1m1e´im 1φ0 . Thus, for any non-zero coefficient ĝlml1m1 , we must have that m “ m1. The expansion (10) of glm must therefore have the form
glmpθ, φq “ 8 ÿ
l1“|m| ĝlml1mY m l1 pθ, φq. (14)
The inner sum has vanished, since the terms are only non-zero for m “ m1, and the sum starts at l1 “ |m| since no terms exist in (10) for l1 ă |m|. The coefficients ĝlml1m thus parametrize the azimuthal rotationally equivariant transform T . To summarize, we have proven the following. Proposition 1. A linear, bounded operator T : L2pS2q Ñ L2pS2q is equivariant to azimuthal rotations if and only if there exists a sequence of numbers tµ̂lml1 u such that
pxTfqlm “ 8 ÿ
l1“|m| µ̂lml1 f̂l1m (15)
is fulfilled for all Fourier coefficients f̂lm indexed by l,m of an arbitrary function f P L2pS2q.
Note that T maps a spherical harmonic Y ml to a linear combination of spherical harmonics with the same order m. Thus, another way of stating the proposition is that a bounded linear operator
T : L2pS2q Ñ L2pS2q is equivariant to azimuthal rotations if and only if the subspaces spanned by the spherical harmonics of a fixed degree form invariant subspaces under the action of T .
This result has a natural interpretation if we restrict ourselves to spherical harmonics of some maximum degree L, i.e., when we are working with band-limited signals. This occurs commonly in practice, e.g., with a signal obtained through an equiangular sampling on the sphere. If the expansion coefficients are listed in a vector and grouped by m-values, then the matrix representation of T will be a block diagonal matrix (cf. Fig. 1).
The SOp3q equivariant convolution as defined by Driscoll and Healy, cf. (7), with the transform formula given in (8) is a special case of the above proposition where the matrix T is diagonal and where the coefficients for all spherical harmonics of the same degree are identical, cf. Fig. 1.
5 AZIMUTHAL CONVOLUTIONS AND CORRELATIONS ON S2
Translation and rotation equivariant operations can be realized as convolutions, or equivalently, correlations. Previously we characterized azimuthal-rotation equivariant linear operators in the Fourier domain (Proposition 1). A natural question arises: Can we interpret these operators as convolutions or correlations? Below, we show that these operators can be interpreted in terms of a correlation.
Let h, f : S2 Ñ R. Then, a first attempt at a definition of correlation is ph ‹ fqpφq “ ş
ωPS2 hpR ´1 φ ωqfpωqdω. However, such a correlation is only defined on S1, since it is a function of only the azimuthal angle φ. Instead, we extend the domain to S2 by considering a filter parametrized by the polar angle θ, i.e., h “ hθ. So, by varying θ, we get a different filter and consequently a different correlation response. Definition 1. Let hθ, f : S2 Ñ R. Then, we define the azimuthal correlation as
phθ ‹ fqpθ, φq “ ż
ωPS2 hθpR´1φ ωqfpωqdω.
Let f̂lm and ĥθlm be the Fourier expansion coefficients of f and hθ, Further, we will expand each coefficient ĥθlm in the associated Legendre basis,
ĥθlm “ 8 ÿ
k“|m| ĥklmP
m k pcos θq. (16)
Now we can show that every azimuthal-rotation equivariant linear operator can be expressed as a correlation. See Fig. 2 for an illustration. Proposition 2. For functions hθ, f in L2pS2q, the transform of the correlation is given by
p{hθ ‹ fqlm “ p´1qm d
4πpl `mq! p2l ` 1qpl ´mq!
8 ÿ
l1“|m|
ĥll1mf̂l1m, (17)
where f and hθ are expanded in Fourier and Legendre series according to (16).
The proof is straight-forward by substituting the expansions and simplifying, see Appendix.
6 COMPARISON OF OUR RESULTS WITH THE LITERATURE
Our work is closely related to, and inspired by, recent theoretical developments on spherical CNNs and SOp3q equivariance, including Cohen et al. (2018); Esteves et al. (2018); Kondor & Trivedi (2018); Defferrard et al. (2019). See also Esteves (2020) for a more comprehensive review. However, these works do not consider the case of purely azimuthal equivariance, and the characterization we present in Proposition 1 is, to our knowledge, new. Our second result, Proposition 2 which shows that azimuthal equivariant operators can be expressed as correlations is by no means a surprise, but it does not follow from, for instance, the general framework of Kondor & Trivedi (2018).
Another closely related work is that of Jiang et al. (2019). They present an efficient and powerful framework for analyzing spherical images with state-of-the-art performance. It is based on learning filters that are linear combinations differential operators, or more specifically, BBθ , B Bφ and the Laplacian ∇2. The resulting filters are not SOp3q equivariant, but they are indeed linear and azimuthal equivariant operators (and thus azimuthal correlations, cf. Proposition 2) and consequently a special case of our framework. However, the discretization of the sphere into an icosahedral mesh at lower resolutions breaks the equivariance in Jiang et al. (2019). This hurts the generalization performance as confirmed by our experiments. This limitation can be alleviated by applying fully azimuthal-rotation equivariant operations.
7 EXPERIMENTS
In this section we implement neural networks based on the presented framework. First, we verify that a correlation layer L2pS2q Ñ L2pS2q designed according to Proposition 1 is indeed equivariant to azimuthal rotations. Then, as in Cohen et al. (2018); Jiang et al. (2019), we perform experiments on the Omni-MNIST and ModelNet40 (Wu et al., 2015) datasets, where the task is to perform classification of spherical images and 3D shape models, respectively. We demonstrate that the state-of-the-art architecture of Jiang et al. (2019) can be recreated in our framework, and that the classification results of both approaches on unrotated data are virtually identical, but with our approach showing increased generalization performance to unseen SOp2q rotations on the test set of ModelNet40.
7.1 EQUIVARIANCE ERROR
To evaluate the equivariance error in our network layers, we follow the approach of Cohen et al. (2018). We create prototype networks $\Phi$ by composing randomly initialized differential-type correlation layers and ReLU. We sample $n = 500$ azimuthal rotations $R_i$ and $n$ signals $f_i$ with 12 channels each. Next we compute the equivariance error
$$\Delta = \frac{1}{n} \sum_{i=1}^{n} \operatorname{std}\left(\Lambda_{R_i}\Phi(f_i) - \Phi(\Lambda_{R_i} f_i)\right) / \operatorname{std}\left(\Phi(f_i)\right),$$
which should be zero for a perfectly equivariant network. The obtained equivariance errors are presented in Fig. 3a. The small magnitude of the error, and the fact that it does not increase much with the number of layers, indicates that the correlations are azimuthally equivariant.
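A minimal sketch of this measurement follows, assuming a hypothetical rotation operator `rotate_azimuthal(f, angle)` on spherical signals and a network `phi`; both are placeholders for the actual implementation.

```python
# Hedged sketch of the equivariance error Delta defined above.
import torch

def equivariance_error(phi, signals, angles, rotate_azimuthal):
    errs = []
    for f, a in zip(signals, angles):
        lhs = rotate_azimuthal(phi(f), a)   # Lambda_R Phi(f)
        rhs = phi(rotate_azimuthal(f, a))   # Phi(Lambda_R f)
        errs.append(torch.std(lhs - rhs) / torch.std(phi(f)))
    return torch.stack(errs).mean()         # zero for perfect equivariance
```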
7.2 DIGIT CLASSIFICATION ON OMNI-MNIST
The Omni-MNIST dataset is obtained by projecting the images of the MNIST handwritten digit dataset onto the surface of a sphere. The resulting dataset consists of 60,000 spherical images for training, and 10,000 for testing. As in Jiang et al. (2019), the images are projected onto the north pole, and then rotated to the equator.
We run the method of Jiang et al. (2019) as a reference, using their implementation, and also recreate their architecture in our SFT-based framework. The network consists of a stack of residual blocks. Each block contains $1 \times 1$ convolutions that mix feature maps, as well as spherical convolutional layers that process each feature map in parallel by forming a linear combination of the image, its horizontal and vertical derivatives, and its Laplacian. Note that since all of these operations are SO(2) equivariant, they can each be represented by a block-diagonal matrix acting on the spherical harmonic expansion coefficients of the feature maps. Batch normalization and ReLU are also performed within the residual blocks. Please refer to the paper of Jiang et al. (2019) for more details about the network architecture.
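To illustrate the block-diagonal spectral view, here is a hedged torch sketch of an SO(2)-equivariant layer in which, for each order $m$, coefficients of degrees $l \ge |m|$ are mixed by a learnable matrix; the dense (l, m) coefficient layout and the class name are assumptions of this illustration.

```python
import torch
import torch.nn as nn

class BlockDiagSpectral(nn.Module):
    def __init__(self, L):
        super().__init__()
        self.L = L
        # one (L + 1 - |m|) x (L + 1 - |m|) learnable block per order m
        self.blocks = nn.ParameterList(
            nn.Parameter(torch.randn(L + 1 - abs(m), L + 1 - abs(m)))
            for m in range(-L, L + 1)
        )

    def forward(self, coeffs):  # coeffs[l, m + L]: SFT coefficients
        out = torch.zeros_like(coeffs)
        for m in range(-self.L, self.L + 1):
            ls = slice(abs(m), self.L + 1)
            block = self.blocks[m + self.L].to(coeffs.dtype)
            out[ls, m + self.L] = block @ coeffs[ls, m + self.L]
        return out
```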
Downsampling is performed in the spectral domain, where we simply reduce the maximum degree L of included SFT coefficients by a factor of two, rounding up.
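A sketch of this step, assuming coefficients stored as `coeffs[l, m + L]` for degrees l = 0..L (an illustrative layout):

```python
import math

def downsample_spectral(coeffs, L):
    L_new = math.ceil(L / 2)  # halve the maximum degree, rounding up
    return coeffs[: L_new + 1, L - L_new : L + L_new + 1], L_new
```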
Both our implementation and the original implementation by Jiang et al. (2019) are trained with stochastic gradient descent, using the Adam optimizer and a batch size of 16. The initial learning rate is 0.01, and is decreased by a factor of two every ten epochs. For each epoch during training, we compute the accuracy on the test set, using both the original version and a randomly azimuthally rotated version of the test set. The results are shown in Fig. 4a. The training is likewise run on both an unrotated and a randomly azimuthally rotated version of the training set.
7.3 3D SHAPE CLASSIFICATION ON MODELNET40
ModelNet40 (Wu et al., 2015) is a dataset containing CAD models of 40 different object classes. We run our experiments on the aligned version of the dataset provided by Sedaghat et al. (2017) and follow Cohen et al. (2018) by creating spherical images by ray casting the CAD models onto an enclosing sphere. Each model is represented by six channels – the ray length, the sine of the surface angle and the cosine of the surface angle, as well as the same three properties obtained from ray casting the convex hull of the model.
We compare to the implementation by Jiang et al. (2019). We recreate their network architecture (cf. Sec. 7.2) and train using stochastic gradient descent with Adam and a batch size of 16. The learning rate is set to $5 \cdot 10^{-3}$ and decreased by a factor of 0.7 every 25 epochs. Again we evaluate and train the model both on the aligned ModelNet40 dataset and on a version of the dataset with each CAD model rotated by a random azimuthal rotation. We train for 100 epochs and show the results in Fig. 4b. Of special note is the case with a non-rotated training set and a rotated test set, where we see that our network manages to generalize better than the one by Jiang et al. (2019). The reason for the low degree of generalization from non-rotated to rotated data in the baseline model might be the hierarchical nature of the icosahedral grid and the down-sampling used by Jiang et al. (2019), where a grid point on the sphere influences the output more if it belongs to a lower level in the hierarchy. If an important point in the input is rotated to a grid point at a higher hierarchy level, the output might change drastically. This problem is not present in our design, since we downsample in the frequency domain and therefore the information from every grid point is treated equally.
Additionally, we created another network using the same architecture as before, but with general block-diagonal correlation filters (instead of learning a linear combination of differential filters). The performance results of this network are shown in Fig. 3b. The accuracy of the general network is comparable to the accuracy of the networks with differential feature maps in Fig. 4b, but it should be noted that the number of trainable parameters is much larger.
8 CONCLUSIONS
In this paper, we have examined how to design convolutions which operate on spherical data and are equivariant to SO(2) rotations. Specifically, we have performed a complete characterization of bounded, linear operators $L^2(S^2) \to L^2(S^2)$ which exhibit SO(2) equivariance. We showed that, for band-limited signals, these can be realized as block-diagonal matrices in the spectral domain, and also demonstrated how these operators may be interpreted as correlations in the spatial domain. Using this framework, we implemented an existing state-of-the-art pipeline, which in our framework showed better generalization performance to SO(2) rotations not seen during training.
APPENDIX A: PROOF OF PROPOSITION 2
Proof. From the definition of correlation we obtain
$$(h_\theta \star f)(\theta, \varphi) = \int_{\omega \in S^2} h_\theta(R_\varphi^{-1}\omega)\, f(\omega)\, d\omega = \int_{\theta'=0}^{\pi} \int_{\varphi'=0}^{2\pi} h_\theta(\theta', \varphi')\, f(\theta', \varphi + \varphi')\, \sin\theta'\, d\theta'\, d\varphi'. \tag{18}$$
Substituting in the Fourier expansions of f and h, and employing the orthogonality of the spherical harmonics, we get
$$(h_\theta \star f)(\theta, \varphi) = \sum_{l'=0}^{\infty} \sum_{m'=-l'}^{l'} \hat{h}^\theta_{l'm'}\, \hat{f}_{l'm'}\, e^{im'\varphi}. \tag{19}$$
Now, employing the expansion (16) of $\hat{h}^\theta_{l'm'}$ into the basis of the associated Legendre functions, we find
$$(h_\theta \star f)(\theta, \varphi) = \sum_{l'=0}^{\infty} \sum_{m'=-l'}^{l'} \sum_{k=|m'|}^{\infty} \hat{h}_{kl'm'}\, P_k^{m'}(\cos\theta)\, \hat{f}_{l'm'}\, e^{im'\varphi}. \tag{20}$$
By reindexing this last sum, we can instead write it in the form
$$(h_\theta \star f)(\theta, \varphi) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \sum_{l'=|m|}^{\infty} \hat{h}_{ll'm}\, \hat{f}_{l'm}\, P_l^m(\cos\theta)\, e^{im\varphi} \tag{21}$$
$$= \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \left[ (-1)^m \sqrt{\frac{4\pi(l+m)!}{(2l+1)(l-m)!}}\, \sum_{l'=|m|}^{\infty} \hat{h}_{ll'm}\, \hat{f}_{l'm} \right] Y_l^m(\theta, \varphi), \tag{22}$$
where, in the last equality, we have used the definition (2) of the spherical harmonics. Note that the last expression is a spherical harmonic expansion, and the expression in the brackets is the corresponding expansion coefficient, given by
$$(\widehat{h_\theta \star f})_{lm} = (-1)^m \sqrt{\frac{4\pi(l+m)!}{(2l+1)(l-m)!}}\, \sum_{l'=|m|}^{\infty} \hat{h}_{ll'm}\, \hat{f}_{l'm}. \tag{23}$$
This proves the proposition. | 1. What is the main contribution of the paper regarding CNNs?
2. What are the strengths of the proposed approach, particularly in its theoretical foundation?
3. What are the weaknesses of the paper, especially regarding its experimental comparisons?
4. Do you have any concerns about the choice of datasets for the experiments?
5. How does the reviewer assess the novelty and significance of the paper's ideas compared to prior works in spherical CNNs? | Review | Review
The authors present CNNs which are equivariant to azimuthal rotations, rather than the previously studied SO(3) rotations. The authors argue correctly that in many real-world settings, data have a canonical orientation in which they are recorded. In such cases, only equivariance to azimuthal rotations needs to be enforced in the convolutional layers. This formulation is less constrained compared to enforcing equivariance to the entire SO(3) group.
Pros:
The authors present an interesting, well-motivated theoretical idea to enforce azimuthal rotation-equivariance. They show that such operators have a specific block-diagonal structure in the Fourier domain and how their action on spherical signals can be seen as correlations in the spatial domain.
Their experiments show that, compared to a closely related work by Jiang et al., which uses an icosahedral mesh rather than the complete sphere, the proposed work outperforms the baseline significantly on Omni-MNIST and ModelNet40 classification.
Cons:
The authors have not shown comparisons with earlier works on spherical CNNs such as Cohen et al. (2018) and Esteves et al. (2018). I understand that these works care about SO(3) equivariance rather than only azimuthal rotations. However, I think the value of this idea needs to be demonstrated empirically, i.e. utilizing the fact that the data come in a preferred orientation. I think this is an important and required comparison that the authors have missed. Only after looking at the improvements in performance over these papers can we conclude that the proposed method is a valuable addition to literature.
Closely related to the above point, it is not clear why these datasets were chosen for the experiments. Neither Omni-MNIST nor ModelNet40 (after ray casting onto the sphere) has a naturally preferred orientation. In other words, the authors have not used datasets where enforcing equivariance to the entire SO(3) group can be restrictive. I believe the authors should address this point if possible.
ICLR | Title
How Informative is the Approximation Error from Tensor Decomposition for Neural Network Compression?
Abstract
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights is a proxy for the performance of the model to compress multiple layers and fine-tune the compressed model. Surprisingly, little research has systematically evaluated which approximation errors can be used to make choices regarding the layer, tensor decomposition method, and level of compression. To close this gap, we perform an experimental study to test if this assumption holds across different layers and types of decompositions, and what the effect of fine-tuning is. We include the approximation error on the features resulting from a compressed layer in our analysis to test if this provides a better proxy, as it explicitly takes the data into account. We find the approximation error on the weights has a positive correlation with the performance error, before as well as after fine-tuning. Basing the approximation error on the features does not improve the correlation significantly. While scaling the approximation error commonly is used to account for the different sizes of layers, the average correlation across layers is smaller than across all choices (i.e. layers, decompositions, and level of compression) before fine-tuning. When calculating the correlation across the different decompositions, the average rank correlation is larger than across all choices. This means multiple decompositions can be considered for compression and the approximation error can be used to choose between them.
1 INTRODUCTION
Tensor Decompositions (TD) have shown potential for compressing pre-trained models, such as convolutional neural networks, by replacing the optimized weight tensor with a low-rank multi-linear approximation with fewer parameters (Jaderberg et al., 2014; Lebedev et al., 2015; Kim et al., 2016; Garipov et al., 2016; Kossaifi et al., 2019a). Common compression procedures (Lebedev et al., 2015; Garipov et al., 2016; Hawkins et al., 2021) work by iteratively applying TD on a selected weight tensor, where each time several decomposition choices have to be made regarding (i) the layer to compress, (ii) the type of decomposition, and (iii) the compression level. Selecting the best hyperparameters for these choices at a given iteration requires costly re-evaluation of the full model for each option. Recently, Liebenwein et al. (2021) suggested comparing the approximation errors on the decomposed weights as a more efficient alternative, though they only considered matrix decompositions for which analytical bounds on the resulting performance exist. These bounds rely on the Eckart–Young–Mirsky theorem. For TD, no equivalent theorem is possible (Vannieuwenhoven et al., 2014). While theoretical bounds are not available for more general TD methods, the same concept could still be practical when considering TDs too. We summarize this as the following general assumption:
Assumption 1. A lower TD approximation error on a model’s weight tensor indicates better overall model performance after compression.
While this assumption appears intuitive and reasonable, we observe several gaps in the existing literature. First, most existing TD compression literature only focuses on a few decomposition choices, e.g. fixing the TD method (Lebedev et al., 2015; Kim et al., 2016). Although various error measures and decomposition choices have been studied separately, no prior work systematically compares different decomposition errors across multiple decomposition choices. Second, different decomposition errors with different properties have been used throughout the literature (Jaderberg et al., 2014), and it is unclear if some error measure should be preferred. Third, a benefit of TD is that no training data is needed for compression, though if labeled data is available, more recent methods combine TD with a subsequent fine-tuning step. Is the approximation error equally valid for the model performance with and without fine-tuning?
Overall, to the best of the authors’ knowledge, no prior work investigates if and which decomposition choices for TD network compression can be made using specific approximation errors. This paper studies empirically to what extent a single decomposition error correlates with the compressed model’s performance across varied decomposition choices, identifying how existing procedures could be improved, and providing support for specific practices. Our contributions are as follows:
• A first empirical study is proposed on the correlation between the approximation error on the model weights that result from compression with TD, and the performance of the compressed model1. Studied decomposition choices include the layer, multiple decomposition methods (CP, Tucker, and Tensor Train), and level of compression. Measurements are made using several models and datasets. We show that the error is indicative of model performance, even when comparing multiple TD methods, though useful correlation only occurs at the higher compression levels.
• Different formulations for the approximation error measure are compared, including measuring the error on the features, as motivated by the works of Jaderberg et al. (2014) and Denil et al. (2014), which consider the data distribution. We further study how using labeled training data for additional fine-tuning affects the correlation.
2 RELATED WORK
There is currently no systematic study on how well the approximation error relates to a compressed neural network’s performance across multiple choices of network layers, TD methods, and compression levels. We here review the most similar and related studies where we distinguish works with theoretical versus empirical validation, different approximation error measures, and the role of fine-tuning after compression.
The relationship between the approximation error on the weights and the performance of the model was studied by theoretical analysis for matrix decompositions. Liebenwein et al. (2021) derive bounds on the model performance for SVD-based compression on the convolutional layers, and thus motivate that the SVD approximation error is a good proxy for the compressed model performance. Arora et al. (2018) derive bounds on the generalization error for convolutional layers based on a compression error from their matrix projection algorithm. Baykal et al. (2019) show how the amount of sparsity introduced in a model’s layers relates to its generalization performance. While these works show that some theoretical bounds can be found for specific compression methods, such bounds are not available for TDs in general. Other works, therefore, study the relationship for TD empirically. For instance, Lebedev et al. (2015) show how CP decomposition rank affects the approximation error, and the resulting accuracy drop as the rank is decreased. Hawkins et al. (2022) observe that, for networks with repeated layer blocks, the approximation error depends on the convolutional layers within the block.
When considering the model’s final task performance, the approximation error on the weights might not be the most relevant measure. To consider the effect on the actual data distribution, Jaderberg et al. (2014) instead propose to compute an error on the approximated output features of a layer after its weights have been compressed. They found that compressing weights by minimizing the error on features, rather than the error on the weights, results in a smaller loss in classification accuracy. However, Jaderberg et al. (2014) do not fine-tune the decomposed model, and only use a toy model with few layers. Denil et al. (2014) try to capture the information from the data via the empirical
1The code for our experiments is available at https://github.com/JSchuurmans/tddl.
covariance matrix. Although this method eliminates the need for multiple passes over the data during the compression step, it is limited to the two-dimensional case. Eo et al. (2021) forgo looking at a compression error altogether, and select the rank based on the accuracy on the validation set. This requires the labels to be present and a forward pass through the whole network, even if the compressed layer is near the input.
Several norms have been used in the literature to quantify the approximation error, which is the difference between the pretrained weights and the compressed weights. The works that decompose pretrained layers (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015; Novikov et al., 2015; Kim et al., 2016) explicitly minimize the Frobenius norm. Hawkins et al. (2021); Liebenwein et al. (2021) calculate the relative Frobenius norm, i.e., the norm of the error proportional to the norm of the pretrained weights, to compare the error for layers of different sizes. Still, it remains unclear which error measure is most informative for the compressed model’s final performance.
In practice, when training data is still available for compression, fine-tuning for the target task after compressing weights could recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). Adding fine-tuning results in a three-step process: pretrain, compress and fine-tune. Optimization thus alternates between minimizing the error of respectively the features, the weights, and finally the features again. While Denton et al. (2014); Kim et al. (2016) compare compressed model performance before and after fine-tuning, they do not investigate how the finetuned network performance relates to the weight compression error. Lebedev et al. (2015) does study the compression error for CP decomposition, but only reports performance with fine-tuning.
3 METHODOLOGY
We consider the task of compressing a pretrained neural network with TD. While TD is a general technique that could be applied to many types of layers, the focus will be on convolutional layers, due to their ubiquity and their suitability for comparing different types of higher-order decompositions, as the layer weights are four-dimensional tensors.
Generally, a compression procedure will iteratively apply TD to the weights of selected layers, making several choices on how and what weight tensor to decompose, while ideally maintaining as much of the original network's performance as possible. In its original uncompressed form, the full-rank weight tensor $W \in \mathbb{R}^{C \times H \times W \times T}$ of a layer represents a local optimum in the network's parameter space with respect to the training data and loss, where $C$ is the number of input channels, $H$ and $W$ are the height and width of the convolutional kernel, and $T$ is the number of output channels. When a TD is applied to the weights of a specific layer, this results in a factorized structure $\tilde{W}$ composed of multiple smaller tensor multiplications, which replaces the original weights in the network. Each time TD is applied, several decomposition choices need to be evaluated:
1. Layer: The layer $l$ from the set of network layers $L = \{1, 2, \dots, L\}$ to decompose.
2. Method: The type of TD method $m \in M = \{\text{CP}, \text{Tucker}, \text{Tensor Train}\}$. The decomposition determines the factorized structure of $\tilde{W}$.
3. Compression: The compression level $c \in C$ for the selected layer. Here $C \subset (0, 1]$ is some finite set of testable levels, and $c = 0.75$ means the number of parameters is reduced by 75% and the factorized layer contains only 25% of the parameters. A given compression level is achieved by decomposing the tensor to some rank $R$, depending on the selected TD method (see Section 3.3 and the sketch below).
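As an illustration of how a compression level translates into a rank, the following is a minimal sketch for the CP case; the function name and the greedy integer rounding are assumptions of this illustration, not the exact rank-selection routine of Tensorly-Torch.

```python
# Hedged sketch: pick the largest CP rank whose parameter count stays within
# the budget implied by compression level c, for a kernel of shape (C, H, W, T).
def cp_rank_for_compression(C, H, W, T, c):
    full = C * H * W * T             # parameters of the dense kernel
    per_rank = C + H + W + T         # parameters added per CP component
    budget = (1.0 - c) * full        # c = 0.75 keeps 25% of the parameters
    return max(1, int(budget // per_rank))

print(cp_rank_for_compression(64, 3, 3, 64, c=0.75))  # -> 68 for this example
```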
We will refer to $H = L \times M \times C$ as the set of possible hyperparameter values for $(l, m, c)$ to consider. Note that compression procedures in the literature might only consider a subset of these choices. For example, a procedure might fix the layer for a given iteration or only consider a single TD method. In practice, it is computationally infeasible to evaluate the compressed network's performance for every possible hyperparameter choice at every compression iteration, especially when optimizing for performance after fine-tuning. Instead, automated compression procedures will efficiently compare an approximation error $a_i = e(\tilde{W}_i, W)$ between the original and decomposed weights using a particular choice of hyperparameters $h_i \in H$. In doing so one relies on Assumption 1 that a lower approximation error is indicative of better compressed performance $p_i$. If annotated training data is available, additional network fine-tuning on the decomposed structure could result in improved performance, i.e. a reduced performance error $p^\star_i < p_i$.
In this work, we propose to focus on a single iteration and investigate Assumption 1 in isolation of any specific compression procedure. Our aim is thus to assess how well computing an approximation error e can predict the optimal compression choice from some hypothesis set H, i.e. the choice that results in the lowest compressed network performance error p, or even performance after fine-tuning p⋆. We will study the correlation between approximation error and model performance empirically in our experiments, using the procedure and correlation metric explained in Section 3.1. Details on the different approximation errors that we will explore are covered in Section 3.2. Finally, the considered TD methods are explained in Section 3.3.
3.1 EMPIRICAL EVALUATION OF ERROR-PERFORMANCE CORRELATION
Our proposed empirical evaluation procedure will evaluate a large set of hyperparameters $H = \{h_1, h_2, \dots\}$ on multiple convolutional neural networks and datasets (see Section 4) for different options of approximation error metric (see Section 3.2). For a given model, dataset, and approximation error metric $e$, the procedure evaluates for each set of hyperparameter choices $h_i \in H$ the approximation error $a_i = e(\tilde{W}_i, W)$, the model performance error $p_i$ on the validation split, and the model performance error $p^\star_i$ after additional fine-tuning on the training data. We thus obtain sets of measurements $A = \{a_1, a_2, \dots\}$, $P = \{p_1, p_2, \dots\}$ and $P^\star = \{p^\star_1, p^\star_2, \dots\}$ for $H$. When comparing two sets of hyperparameters $h_i \in H$ and $h_j \in H$, we want to establish whether the set with the smaller approximation error results in a smaller performance error of the model. In other words, the concordance of pairs of measurements needs to be established. Concordant pairs have a larger (smaller) performance error when the approximation error is larger (smaller) between two sets of hyperparameter choices, i.e. $i$ and $j$ are concordant if $a_i > a_j$ and $p_i > p_j$ or if $a_i < a_j$ and $p_i < p_j$, and discordant otherwise. Kendall's τ is a measure of the rank correlation (Kendall, 1938), or ordinal association, between two ordered sets, in our case between the approximation errors $A$ and the model performances $P$ (or $P^\star$). To avoid confusion with the concept of tensor rank, we will refer to Kendall's τ simply as correlation. For this correlation measure, the difference between the number of concordant pairs ($k$) and discordant pairs ($d$) is scaled by the binomial coefficient $m(m-1)/2$ to account for the number of ways two measurements can be sampled from a total of $m$ measurements:
$$\tau = \frac{2(k - d)}{m(m-1)}. \tag{1}$$
Kendall's τ can be interpreted as follows: τ = 1 indicates a perfect positive rank correlation, τ = 0 no correlation, and τ = −1 a perfect negative correlation. For a set of hyperparameters $H$, a useful approximation error $e$ would thus result in a τ close to ±1, indicating it is predictive of the model's performance. Note that Kendall's τ does not depend on assumptions about the underlying distribution, whereas the Pearson correlation assumes a linear relationship between the two measurements. Kendall's τ is used over Spearman's ρ because the interpretation of concordant and discordant pairs for Kendall's τ is closely related to our use case of choosing between hyperparameter sets.
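The statistic in Eq. (1) can be computed directly from the definition; the sketch below is a hedged illustration, and `scipy.stats.kendalltau` computes the same quantity (up to tie handling) as a cross-check.

```python
# Kendall's tau over paired (approximation error, performance error) lists,
# following Eq. (1); ties count as neither concordant nor discordant.
from itertools import combinations
from scipy.stats import kendalltau

def kendall_tau(approx_errors, perf_errors):
    k = d = 0
    for (a_i, p_i), (a_j, p_j) in combinations(zip(approx_errors, perf_errors), 2):
        s = (a_i - a_j) * (p_i - p_j)
        if s > 0:
            k += 1  # concordant pair
        elif s < 0:
            d += 1  # discordant pair
    m = len(approx_errors)
    return 2 * (k - d) / (m * (m - 1))

a, p = [0.1, 0.2, 0.3, 0.4], [0.05, 0.07, 0.06, 0.09]
tau_ref, _ = kendalltau(a, p)
print(kendall_tau(a, p), tau_ref)  # both ~0.667 for this example
```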
3.2 APPROXIMATION ERRORS
We now discuss various measures $e(\tilde{W}, W)$ to quantify the approximation error. The basis is to compute some norm of the difference between these tensors; in this work we use the Frobenius norm, as is common in the literature (Lebedev et al., 2015; Hawkins et al., 2022). We shall consider three options to scale the norm, which could help make the error more robust when comparing hypotheses with different layers. Additionally, we can consider two options to compute the error on, namely either directly the weights or the features. In total, we shall thus explore six different approximation errors in this work. An overview is presented in Table 1.
Normalization The norm of the difference of the weights is referred to as the absolute norm and is used in the objective function when decomposing the pretrained weights. The relative norm is used in the TD literature to compare errors between different layers (Lebedev et al., 2015; Hawkins et al., 2022), as it is invariant to the size of the weights. Alternatively, the norm of the difference can be scaled to account for the number of parameters, while keeping the distance from the weights.
Target tensor The most common option is to compute the approximation error on the decomposed layer's weights $W$. However, Jaderberg et al. (2014) achieved promising results basing the decomposition on the approximation error of the features. Errors in some elements of the weight tensor might be more permissible if they do not affect the resulting feature space. We therefore also consider the expected error on the features $F = X \cdot W$, which is the output tensor of the layer after convolving its input with weights $W$ on input data $X$. Likewise, approximated weights $\tilde{W}$ result in an approximated $\tilde{F}$. In practice, computing the feature space requires input data and is computationally more demanding than computing the weight approximation error. However, it is potentially more representative of an approximation's effect on the output; moreover, unlike a later fine-tuning step, it could even be used if only unlabeled data is available.
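A hedged sketch of the six measures in Table 1 follows; the exact form of the "scaled" variant (here dividing by the number of elements) is an assumption of this illustration, and the kernel layout (C, H, W, T) follows the notation above.

```python
import torch
import torch.nn.functional as nnf

def approx_errors(W, W_approx, X):
    """W, W_approx: (C, H, W, T) kernels; X: (batch, C, height, width) inputs."""
    feats = lambda k: nnf.conv2d(X, k.permute(3, 0, 1, 2))  # conv2d wants (T, C, H, W)
    F, F_approx = feats(W), feats(W_approx)
    d_w = torch.linalg.norm(W - W_approx)          # Frobenius norm on weights
    d_f = torch.linalg.norm(F - F_approx)          # Frobenius norm on features
    return {
        "absolute_weights":  d_w,
        "relative_weights":  d_w / torch.linalg.norm(W),
        "scaled_weights":    d_w / W.numel(),      # assumed form of "scaled"
        "absolute_features": d_f,
        "relative_features": d_f / torch.linalg.norm(F),
        "scaled_features":   d_f / F.numel(),
    }
```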
3.3 TENSOR DECOMPOSITION METHODS
Our experiments shall consider three popular decomposition methods for convolutional layers, namely CP (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015), Tucker (Kim et al., 2016), and Tensor Train (TT) (Garipov et al., 2016). During the decomposition step, the decomposed weights are found by minimizing the approximation error between the pretrained weights and the estimated decomposition: $\arg\min_{\tilde{W}} \|W - \tilde{W}\|$. For CP this is done with ALS (Carroll & Chang, 1970; Harshman, 1972), for Tucker with HOSVD (De Lathauwer et al., 2000), and for Tensor Train with TT-SVD (Oseledets, 2011). The ALS algorithm requires a random initialization; we sample from a uniform distribution [0, 1) using Tensorly (Kossaifi et al., 2019b). The desired compression level is achieved by finding the corresponding rank, using the package Tensorly-Torch (Kossaifi et al., 2019b). The ranks used for CP, Tucker, and TT are given in Appendix A.1. For completeness, we list all considered decompositions for a 4-way tensor $W$.
CP decomposition A rank-R CP decomposition (Hitchcock, 1927) sums R rank-one tensors:
$$\tilde{W}^{\mathrm{CP}}_{c,y,x,t} = \sum_{r=1}^{R} C_{c,r}\, Y_{y,r}\, X_{x,r}\, T_{t,r}. \tag{2}$$
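A hedged sketch of this step with Tensorly follows; the kernel shape and rank are example values, and `parafac` runs the ALS algorithm mentioned above.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

W = tl.tensor(np.random.rand(64, 3, 3, 64))   # example (C, H, W, T) kernel
cp = parafac(W, rank=16, init="random")       # ALS; returns (weights, factors)
W_cp = tl.cp_to_tensor(cp)                    # reconstructed approximation
rel_err = tl.norm(W - W_cp) / tl.norm(W)      # the Relative Weights error
```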
Tucker decomposition A Tucker decomposition (Tucker, 1966) is distinguished from CP by the Tucker core $G \in \mathbb{R}^{R_1 \times R_2 \times R_3 \times R_4}$. The Tucker rank is defined as the four-tuple $(R_1, R_2, R_3, R_4)$. Since the width and height dimensions of the convolutional weights are small, it is computationally more efficient to contract them with the Tucker core $G$ and form a new core $H = G \times_2 Y \times_3 X$, where $\times_n$ is the $n$-mode product (Appendix A.2):
$$\tilde{W}^{\mathrm{Tucker}}_{c,y,x,t} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sum_{r_3=1}^{R_3} \sum_{r_4=1}^{R_4} G_{r_1,r_2,r_3,r_4}\, C_{c,r_1}\, Y_{y,r_2}\, X_{x,r_3}\, T_{t,r_4} = \sum_{r_1=1}^{R_1} \sum_{r_4=1}^{R_4} H_{r_1,y,x,r_4}\, C_{c,r_1}\, T_{t,r_4}. \tag{3}$$
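A corresponding hedged Tucker sketch, including the contraction of the small kernel factors into the core described above; shapes and ranks are example values.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from tensorly.tenalg import mode_dot

W = tl.tensor(np.random.rand(64, 3, 3, 64))      # example (C, H, W, T) kernel
core, factors = tucker(W, rank=(32, 3, 3, 32))   # core G and factors [C, Y, X, T]
# absorb the kernel factors: H = G x_2 Y x_3 X (tensorly modes are 0-indexed)
H = mode_dot(mode_dot(core, factors[1], mode=1), factors[2], mode=2)
W_tucker = tl.tucker_to_tensor((core, factors))
```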
Tensor Train decomposition Another alternative is the Tensor Train decomposition (Oseledets, 2011), which decomposes a given tensor as a linear chain of 2-way and 3-way tensors, where the first and last tensors are 2-way. The TT-rank in our four-dimensional case is the 3-tuple $(R_1, R_2, R_3)$:
$$\tilde{W}^{\mathrm{TT}}_{c,y,x,t} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sum_{r_3=1}^{R_3} C_{c,r_1}\, Y_{r_1,y,r_2}\, X_{r_2,x,r_3}\, T_{r_3,t}. \tag{4}$$
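Finally, a hedged Tensor-Train sketch; `tensor_train` implements TT-SVD, and the boundary ranks $R_0 = R_4 = 1$ noted in Appendix A.1 are made explicit. Shapes and ranks are example values.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tensor_train

W = tl.tensor(np.random.rand(64, 3, 3, 64))   # example (C, H, W, T) kernel
tt = tensor_train(W, rank=(1, 32, 9, 32, 1))  # (R0, R1, R2, R3, R4)
W_tt = tl.tt_to_tensor(tt)
rel_err = tl.norm(W - W_tt) / tl.norm(W)
```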
4 EXPERIMENTS
This section provides the implementation details and discusses the results of our empirical approach.
4.1 EXPERIMENTAL SETUP
Datasets The experiments are run on the datasets CIFAR-10 (Krizhevsky, 2009) and FashionMNIST (Xiao et al., 2017). These datasets are common classification benchmarks for testing TD in CNNs (Cheng et al., 2021; Denil et al., 2014; Wu et al., 2020; Garipov et al., 2016; Hawkins et al., 2021). For both datasets, the original training sets are split into the set used for training and validation. The split is made such that equal class distributions are maintained. The details specific to the datasets are as follows: CIFAR-10 has 10 classes distributed equally across 60,000 images of 32 × 32 pixels with 3 color channels. After our validation split, there are 45,000 images in the training set and 5,000 in the validation set. The test set of 10,000 images remains unchanged. Fashion-MNIST has 10 classes distributed equally across 70,000 grayscale images of 28×28 pixels. After our validation split, there are 55,000 images in the training set and 5,000 in the validation set.
Model architecture and training The models used are ResNet-18 (He et al., 2016) and GaripovNet (Garipov et al., 2016). ResNet is a well-performing state-of-the-art convolutional neural network. GaripovNet is a 7-layer convolutional neural network proposed in Garipov et al. (2016) and used by Hawkins et al. (2022) for image classification. These models enable comparison with other works within and beyond TD for deep learning (Gusak et al., 2019; Garipov et al., 2016; Hawkins et al., 2022; Chu & Lee, 2021; Kossaifi et al., 2020). The following hyperparameters are used: ResNet-18 is trained with batch size 128, for 300 epochs, with the Adam optimizer and a learning rate of $10^{-3}$. At epochs 100 and 150 the learning rate is multiplied by 0.1. GaripovNet is trained with the same settings as in the original paper (Garipov et al., 2016). The model is trained with Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a learning rate of 0.1, multiplied by 0.1 at epochs 30, 60, and 90.
The validation set is used for early stopping and the selection of the training hyperparameters, i.e. learning rate, schedule, level of annealing, batch size, and optimizer. Training data is augmented with a random crop (padding with 4 pixels and cropping to the original size) and a random horizontal flip. All images are standardized based on the per-channel training mean and standard deviation over all training samples. Early stopping is applied both for training the baseline and for fine-tuning the decomposed model. The classification error on the test set is used for the performance errors P. To fine-tune after decomposition and obtain the performance errors P⋆, the ResNet-18 is optimized for another 25 epochs, and GaripovNet for 10 epochs, using the last learning rate from training.
Decomposition choices We now explain the considered values for the decomposition choices H explained in Section 3. For both neural network models, neither the first nor the last layer will be decomposed, as these layers already contain a relatively small amount of parameters. For GaripovNet the other five layers are part of L. For ResNet-18, L contains a selection of eight convolutional layers, details of which can be found in Appendix A.3. The set of TD methods that will be considered is M = {CP,Tucker,Tensor Train}, which were discussed in Section 3.3. The set of compression levels is C = {10%, 25%, 50%, 75%, 90%}. Multiple levels of compression are considered as each neural network layer can have different efficiency-performance trade-offs (Lebedev et al., 2015). In the experiments, we evaluate ResNet-18 on CIFAR-10, GaripovNet on CIFAR-10, and GaripovNet on F-MNIST. We exclude ResNet-18 on F-MNIST as the dataset is not sufficiently challenging for this model, and compressing one layer does not lead to a viable impact on the performance due to the model’s size and skip-connections.
Variance The process of decomposing and fine-tuning is repeated for five independent runs for each choice of layer, decomposition method, and compression level to assess and report variance in the results. Note that due to the stochasticity of the ALS algorithms, the random initializations can result in different CP decomposition estimates. The variance in correlation shown in the plots without fine-tuning results from the randomness in the CP initialization. Fine-tuning adds additional variance through its use of batched SGD. The observed variance with fine-tuning accounts for both the randomness from CP initialization as well as from fine-tuning, thereby representing all sources of randomness in our methodology. All runs for a given model and dataset are based on the same pretrained weights, so this is not a source of reported variance. In total, this results in 600 measurements for ResNet-18 and 375 measurements for GaripovNet per dataset.
4.2 EXPERIMENTAL RESULTS
Impact of compression levels on correlation We start by calculating the correlation across the layers and decomposition methods for multiple runs and calculate the averages grouped by compression levels. This is presented in Figure 1, where the bars are the average correlation τ between the Relative Weights and the classification error. The correlation is only based on the Relative Weights, as this is the most common metric in recent literature (Lebedev et al., 2015; Hawkins et al., 2022; Liebenwein et al., 2021). The correlations are grouped by the different levels of compression and represented by different colors. The error bars are ±1 standard deviation, representing the variance from multiple runs.
In Figure 1, it can be seen that the larger the compression, the higher the correlation is. This is a positive result for our use case. In the end, we are interested in making decomposition choices when compressing. The more we compress, the higher the correlation and therefore the more certain we are that basing our choice on the approximation error results in the optimal choice. It can also be noted that a certain level of compression is needed to be able to make choices based on the approximation error. For both models and datasets, the correlation is small when the compression is only 10% and 25%. The variance in the correlation at smaller compression levels is larger than at higher compression levels. When the compression is too small the effect on the performance of the model is too small compared to the observed variance, especially after fine-tuning. In the remainder of the experiments, we therefore focus on compression levels of C = {50%, 75%, 90%}.
Comparison of approximation error measures Works such as Liebenwein et al. (2021) have used a single approximation error, e.g. Relative Weights, to identify which layer to compress next, implicitly assuming that relative errors between layers are indicative of the relative model performance differences. We here compare the various approximation error measures, testing the correlation with performance over all decomposition choices. In Figure 2, the correlation is calculated based on measurements of all combinations of layer, decomposition method, and compression level once. The correlations are averaged over runs, as well as the ±1 standard deviation is calculated over the runs.
Figure 2 shows that the correlations are generally positive and significantly different from zero. This means that the decomposition choices can (to some extent) be based on the approximation error. There is one exception where the correlation is close to zero, namely the Absolute Weights measure on ResNet-18. The difference in correlations can be explained by the difference in approximation error between layers; a detailed explanation is provided in Appendix A.4. These results suggest that Absolute-based approximation errors, while they may show high correlations in some cases, are not generally a reliable indicator of future model performance, and that normalized measures should be used instead.
Comparing the different approximation error metrics, we observe the highest correlation with the performance for Relative Weights in all tested cases. Interestingly, the magnitude of the correlation for Feature-based measures is similar to or smaller than the correlation for Weight-based measures. Although the findings of Jaderberg et al. (2014) would suggest a stronger correlation for the features, at least before fine-tuning, we do not observe benefits for basing decomposition choices on the approximation error of the features rather than the weights. Possibly pretraining already ensures all the elements in the weight tensor are equally important for the target data distribution, thus a Weight-based error already reliably reflects resulting errors on the output features. Comparing the error bars with and without fine-tuning, the randomness from fine-tuning has a larger impact on the variance in correlation than the randomness from CP initialization. In summary, our results support the use of the Relative Weights approximation error to make decomposition choices.
Impact of fine-tuning Most works use fine-tuning to recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). The right subfigure of Figure 2 shows the mean and standard deviation of five correlations, per model and per dataset after fine-tuning. After fine-tuning, the correlation between the approximation error and the performance error is smaller than before fine-tuning for GaripovNet, as additional training adapts the model and reduces the performance gap between the different choices, but this effect is not observed for ResNet-18 where the correlation was already lower. However, for both models, there is still a clear positive correlation between the approximation error and the performance after fine-tuning. This means that decomposition choices can still be based on the approximation error when intending to perform fine-tuning later, even though different hyperparameters might be optimal without and with fine-tuning.
While the correlation is positive and significantly different from zero, the correlation is only around +0.5 for ResNet-18. We therefore investigate if the correlation is higher when only considering specific decomposition choices next.
Correlation across Layers vs. Methods In the previous experiments, we compared how decomposition choices on both different layers and methods correlated with performance. Here we investigate if the correlation is stronger when only one of these choices is considered. For instance, previous works often only include layers as a decomposition choice, and have not compared across decomposition methods. We compare the correlation on all choices for both sets ($L \times M \times C$) as before, to layers only ($L \times \{m\} \times \{c\}$, with reported results averaged for all $m \in M$ and $c \in C$), and to methods only ($\{l\} \times M \times \{c\}$, with reported results averaged for all $l \in L$ and $c \in C$). Figure 3 shows that before fine-tuning the approximation error has a lower correlation with the performance of the model when considering layers only compared to all decomposition choices. Not all layers of a neural network have the same efficiency-performance trade-off (Lebedev et al., 2015; Hawkins et al., 2021). Therefore, the correlation is lower when we fix the decomposition method and compression level. It is better to combine layers with compression levels (and decomposition
methods). However, fine-tuning recovers some of the correlation for layers. Across decomposition methods, the correlation before fine-tuning is comparable to the correlation calculated across all decomposition choices. These results suggest that decomposition methods can be compared better than just the layers before fine-tuning, although this is not an optimization choice considered in previous works. Interestingly, for GaripovNet the correlation across decomposition methods drops significantly after fine-tuning. We find that this is due to difficulties in optimizing the CP-decomposed layers, since gradient flow through CP convolutions is a known problem (Silva & Lim, 2008; Lebedev et al., 2015), whereas ResNet does not suffer from this due to its skip connections. We conclude that (unlike current practice) network compression could consider multiple decomposition methods, as their approximation errors can be compared, though most reliably when aiming for compression without fine-tuning.
5 CONCLUSION
We have tested Assumption 1, and find that there is a positive correlation between the relative approximation error on the weights and the resulting performance error of the model for a wide range of TD choices, including layers, methods, and compression levels. We further find that using data to compute the approximation error on the features, rather than simply on the model weights directly, does not improve the correlation. Scaling the approximation error with the norm of the original tensor provides the highest and most stable correlation across all compared models and datasets. Our findings suggest that the Relative Weights approximation error is best suited to select among TD decomposition choices.
While these choices can be made across layers, TD methods, and compression levels, we observe that the correlation before fine-tuning is smaller when comparing between layers for a fixed method than when comparing across methods (here: CP, Tucker, and Tensor Train) for a fixed layer. Integrating multiple types of decompositions within a network compression technique is therefore a potential direction for future work, although care has to be taken when the use case includes later fine-tuning, as the correlation for selecting across decomposition methods can degrade, since backpropagation through certain factorized structures remains challenging.
Our experiments are limited to a set of decomposition choices and network layers commonly found in the TD literature. Future work can extend to other decompositions and other types of neural network layers, e.g. fully connected layers. While the weights are matrices, tensor decomposition has been applied to fully connected layers by reshaping the weight matrix into a higher-order tensor. The choice of reshaping then becomes an additional decomposition choice.
REPRODUCIBILITY
The authors find it important that this work is reproducible. To this extent the following efforts have been made: The datasets and train-validation splits are described in Section 4.1. The datasets are collected from PyTorch Vision. The models and hyperparameters used for training are covered in Section 4.1. The implementations of the baseline models are from the PyTorch Model Zoo. The models are factorized with Tensorly-Torch (Kossaifi et al., 2019b), using the CP initialization described in Section 4.1 and the ranks provided in Appendix A.1. The experimental setup is explained in Section 4.1. The calculation of metrics is formulated in Section 3. Finally, the code to reproduce these experiments is available at: https://github.com/JSchuurmans/tddl.
ACKNOWLEDGMENTS
Described results are made possible in part by TU Delft Cohesion subsidy, TERP Cohesion project 2020.
A APPENDIX
A.1 RANK AND COMPRESSION LEVEL
Tables 2 and 3 present the ranks that are used for GaripovNet (Garipov et al., 2016) and ResNet-18 (He et al., 2016) respectively. Note that in these tables, the Tucker rank includes the kernel ranks corresponding to the width and height, and the TT rank includes R0 = R4 = 1, which are left out of Equation 4 for conciseness.
A.2 N-MODE PRODUCT
The definition of the $n$-mode product $\times_n$ given by Kolda & Bader (2009) is used in this paper. The contraction of a tensor $X \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ with a matrix $Y \in \mathbb{R}^{Y \times R_n}$ along the $n$-th mode of the tensor is defined elementwise as:
$$(X \times_n Y)_{r_1, \dots, r_{n-1}, y, r_{n+1}, \dots, r_N} = \sum_{r_n=1}^{R_n} X_{r_1, \dots, r_{n-1}, r_n, r_{n+1}, \dots, r_N}\, Y_{y, r_n} \tag{5}$$
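This definition maps directly onto numpy primitives; the following hedged sketch mirrors the behavior of Tensorly's `mode_dot` for matrix arguments.

```python
import numpy as np

def mode_dot(X, Y, n):
    """n-mode product of Eq. (5): contract Y's second axis with X's n-th mode."""
    out = np.tensordot(X, Y, axes=([n], [1]))  # contracted result lands on the last axis
    return np.moveaxis(out, -1, n)             # move it back to position n
```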
A.3 SELECTION OF RESNET-18 LAYERS
The following layers are considered for decomposition in ResNet-18: the last layer of the first two blocks; the first layer with stride two; the first layer after a 1x1 convolution; a 1x1 convolution with a similar number of parameters as the first choice of layer; the layers before and after the 1x1 convolution; and the final two convolutional layers, as they are the largest convolutional layers.
A.4 DIFFERENCE BETWEEN LAYERS IN ABSOLUTE WEIGHTS FOR RESNET-18
Let us recall from Figure 2 that when basing the approximation error on Absolute Weights, the correlation with the performance error is close to zero for ResNet-18. The correlation is zero due to the difference between layers.
Figure 4 plots the approximation errors of Relative and Absolute Weights versus the performance error before and after fine-tuning. The points resulting from CP, Tucker, and Tensor Train with compression of 50%, 75%, and 90% are grouped per layer.
Layers 15, 19, and 28 have a smaller absolute weight error and a large performance error relative to layers 38, 60, and 63. Compare this to the data for Relative Weights, where layers 60 and 63 have a small relative weight error and a small performance error, and layer 38 has comparable errors to layers 15, 19, and 28. This leads to the correlation between Absolute Weights and performance errors being close to zero, while it is positive for Relative Weights. | 1. What is the main contribution of the paper regarding post-training compression of CNN-based image classification neural networks?
2. What are the strengths of the paper in terms of its objectives, writing quality, and practical implications?
3. What are the weaknesses of the paper regarding the comparison with distillation approaches and the robustness of the method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this study the authors investigate post-training compression of CNN-based image classification neural networks. One method to compress a CNN-based NN is tensor decomposition. Their goal is to learn how much the approximation error of tensor decomposition is predictive of the performance of the compressed CNN-based NN classifier. For example, they ask whether the choice of tensor decomposition method, or the level of compression, can be guided by this error. This would be more efficient than using the final task performance of each candidate compressed model. Another question they investigate, for example, is whether fine-tuning after compression disturbs or improves the correlation between the approximation error and the final task performance of the NN.
Strengths And Weaknesses
S1. The paper investigates a clear objective that is relevant for potential follow-up work and has useful practical implications.
S2. The paper is well written and well structured.
S3. The paper does give a clear picture and considers many practically relevant questions.
W1. It is unclear how the increased complexity of the proposed methods and the level of control over the final task performance compare against a distillation approach.
W2. How much does the randomness of many of the TD methods, due to their initialization, influence the robustness of the approach?
Clarity, Quality, Novelty And Reproducibility
The paper was clear and succinct. The experiments are reproducible and all their components, down to the exact correlations that were used, have been clearly described. Apart from the two weaknesses mentioned, I have no other concerns.
ICLR | Title
How Informative is the Approximation Error from Tensor Decomposition for Neural Network Compression?
Abstract
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights is a proxy for the performance of the model to compress multiple layers and fine-tune the compressed model. Surprisingly, little research has systematically evaluated which approximation errors can be used to make choices regarding the layer, tensor decomposition method, and level of compression. To close this gap, we perform an experimental study to test if this assumption holds across different layers and types of decompositions, and what the effect of fine-tuning is. We include the approximation error on the features resulting from a compressed layer in our analysis to test if this provides a better proxy, as it explicitly takes the data into account. We find the approximation error on the weights has a positive correlation with the performance error, before as well as after fine-tuning. Basing the approximation error on the features does not improve the correlation significantly. While scaling the approximation error commonly is used to account for the different sizes of layers, the average correlation across layers is smaller than across all choices (i.e. layers, decompositions, and level of compression) before fine-tuning. When calculating the correlation across the different decompositions, the average rank correlation is larger than across all choices. This means multiple decompositions can be considered for compression and the approximation error can be used to choose between them.
N/A
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights is a proxy for the performance of the model to compress multiple layers and fine-tune the compressed model. Surprisingly, little research has systematically evaluated which approximation errors can be used to make choices regarding the layer, tensor decomposition method, and level of compression. To close this gap, we perform an experimental study to test if this assumption holds across different layers and types of decompositions, and what the effect of fine-tuning is. We include the approximation error on the features resulting from a compressed layer in our analysis to test if this provides a better proxy, as it explicitly takes the data into account. We find the approximation error on the weights has a positive correlation with the performance error, before as well as after fine-tuning. Basing the approximation error on the features does not improve the correlation significantly. While scaling the approximation error commonly is used to account for the different sizes of layers, the average correlation across layers is smaller than across all choices (i.e. layers, decompositions, and level of compression) before fine-tuning. When calculating the correlation across the different decompositions, the average rank correlation is larger than across all choices. This means multiple decompositions can be considered for compression and the approximation error can be used to choose between them.
1 INTRODUCTION
Tensor Decompositions (TD) have shown potential for compressing pre-trained models, such as convolutional neural networks, by replacing the optimized weight tensor with a low-rank multi-linear approximation with fewer parameters (Jaderberg et al., 2014; Lebedev et al., 2015; Kim et al., 2016; Garipov et al., 2016; Kossaifi et al., 2019a). Common compression procedures (Lebedev et al., 2015; Garipov et al., 2016; Hawkins et al., 2021) work by iteratively applying TD on a selected weight tensor, where each time several decomposition choices have to be made regarding (i) the layer to compress, (ii) the type of decomposition, and (iii) the compression level. Selecting the best hyperparameters for these choices at a given iteration requires costly re-evaluating the full model for each option. Recently, Liebenwein et al. (2021) suggested comparing the approximation errors on the decomposed weights as a more efficient alternative, though they only considered matrix decompositions for which analytical bounds on the resulting performance exist. These bounds rely on the Eckhart-Young-Mirsky theorem. For TD, no equivalent theorem is possible (Vannieuwenhoven et al., 2014). While theoretical bounds are not available for more general TD methods, the same concept could still be practical when considering TDs too. We summarize this as the following general assumption:
Assumption 1. A lower TD approximation error on a model’s weight tensor indicates better overall model performance after compression.
While this assumption appears intuitive and reasonable, we observe several gaps in the existing literature: First, most existing TD compression literature only focuses on a few decomposition choices, e.g. fixing the TD method (Lebedev et al., 2015; Kim et al., 2016). Although various error measures and decomposition choices have been studied in separation, no prior work systematically compares different decomposition errors across multiple decomposition choices. Second, different decomposition errors with different properties have been used throughout the literature (Jaderberg et al., 2014), and it is unclear if some error measure should be preferred. Third, a benefit of TD is that no training data is needed for compression, though if labeled data is available, more recent methods combine TD with a subsequent fine-tuning step. Is the approximation error equally valid for the model performance with and without fine-tuning?
Overall, to the best of the authors’ knowledge, no prior work investigates if and which decomposition choices for TD network compression can be made using specific approximation errors. This paper studies empirically to what extent a single decomposition error correlates with the compressed model’s performance across varied decomposition choices, identifying how existing procedures could be improved, and providing support for specific practices. Our contributions are as follows:
• A first empirical study is proposed on the correlation between the approximation error on the model weights that result from compression with TD, and the performance of the compressed model1. Studied decomposition choices include the layer, multiple decomposition methods (CP, Tucker, and Tensor Train), and level of compression. Measurements are made using several models and datasets. We show that the error is indicative of model performance, even when comparing multiple TD methods, though useful correlation only occurs at the higher compression levels.
• Different formulations for the approximation error measure are compared, including measuring the error on the features, as motivated by the works of Jaderberg et al. (2014) and Denil et al. (2014), which take the data distribution into account. We further study how using labeled training data for additional fine-tuning affects the correlation.
2 RELATED WORK
There is currently no systematic study on how well the approximation error relates to a compressed neural network’s performance across multiple choices of network layers, TD methods, and compression levels. We here review the most similar and related studies where we distinguish works with theoretical versus empirical validation, different approximation error measures, and the role of fine-tuning after compression.
The relationship between the approximation error on the weights and the performance of the model was studied by theoretical analysis for matrix decompositions. Liebenwein et al. (2021) derive bounds on the model performance for SVD-based compression on the convolutional layers, and thus motivate that the SVD approximation error is a good proxy for the compressed model performance. Arora et al. (2018) derive bounds on the generalization error for convolutional layers based on a compression error from their matrix projection algorithm. Baykal et al. (2019) show how the amount of sparsity introduced in a model’s layers relates to its generalization performance. While these works show that some theoretical bounds can be found for specific compression methods, such bounds are not available for TDs in general. Other works, therefore, study the relationship for TD empirically. For instance, Lebedev et al. (2015) show how CP decomposition rank affects the approximation error, and the resulting accuracy drop as the rank is decreased. Hawkins et al. (2022) observe that, for networks with repeated layer blocks, the approximation error depends on the convolutional layers within the block.
When considering the model’s final task performance, the approximation error on the weights might not be the most relevant measure. To consider the effect on the actual data distribution, Jaderberg et al. (2014) instead propose to compute an error on the approximated output features of a layer after its weights have been compressed. They found that compressing weights by minimizing the error on features, rather than the error on the weights, results in a smaller loss in classification accuracy. However, Jaderberg et al. (2014) do not fine-tune the decomposed model, and only use a toy model with few layers. Denil et al. (2014) try to capture the information from the data via the empirical covariance matrix. Although this method eliminates the need for multiple passes over the data during the compression step, it is limited to a two-dimensional case. Eo et al. (2021) forego the compression error altogether and select the rank based on the accuracy on the validation set. This requires the labels to be present and a forward pass through the whole network, even if the compressed layer is near the input.
1The code for our experiments is available at https://github.com/JSchuurmans/tddl.
Several norms have been used in the literature to quantify the approximation error, i.e. a norm of the difference between the pretrained weights and the compressed weights. The works that decompose pretrained layers (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015; Novikov et al., 2015; Kim et al., 2016) explicitly minimize the Frobenius norm. Hawkins et al. (2021); Liebenwein et al. (2021) calculate the relative Frobenius norm, i.e., the norm of the error proportional to the norm of the pretrained weights, to compare the error for layers of different sizes. Still, it remains unclear which error measure is most informative for the compressed model’s final performance.
In practice, when training data is still available for compression, fine-tuning for the target task after compressing weights could recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). Adding fine-tuning results in a three-step process: pretrain, compress, and fine-tune. Optimization thus alternates between minimizing the error of respectively the features, the weights, and finally the features again. While Denton et al. (2014); Kim et al. (2016) compare compressed model performance before and after fine-tuning, they do not investigate how the fine-tuned network performance relates to the weight compression error. Lebedev et al. (2015) do study the compression error for CP decomposition, but only report performance with fine-tuning.
3 METHODOLOGY
We consider the task of compressing a pretrained neural network with TD. While TD is a general technique that could be applied to many types of layers, the focus here is on convolutional layers, due to their ubiquity and their suitability for comparing different types of higher-order decompositions, as the layer weights are four-dimensional tensors.
Generally, a compression procedure will iteratively apply TD to the weights of selected layers, making several choices on how and what weight tensor to decompose, while ideally maintaining as much of the original network’s performance as possible. In its original uncompressed form, the full-rank weight tensor W ∈ R^{C×H×W×T} of a layer represents a local optimum in the network’s parameter space with respect to the training data and loss, where C is the number of input channels, H and W are the height and width of the convolutional kernel, and T is the number of output channels. When a TD is applied to the weights of a specific layer, this results in a factorized structure W̃ composed of multiple smaller tensor multiplications, which replaces the original weights in the network. Each time TD is applied, several decomposition choices need to be evaluated:
1. Layer: The layer l from the set of network layers L = {1, 2, · · · , L} to decompose.
2. Method: The type of TD method m ∈ M = {CP, Tucker, Tensor Train}. The decomposition determines the factorized structure of W̃.
3. Compression: The compression level c ∈ C for the selected layer. Here C ⊂ (0, 1] is some finite set of testable levels, and c = 0.75 means the number of parameters is reduced by 75%, i.e. the factorized layer contains only 25% of the parameters. A given compression level is achieved by decomposing the tensor to some rank R, depending on the selected TD method (see Section 3.3); a minimal rank calculation for the CP case is sketched below.
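To make the rank-compression relation concrete for the CP case, the following minimal sketch derives a rank from a target compression level using the parameter count of Equation 2. This is our own illustration; the experiments use Tensorly-Torch to find ranks, which may round differently.

```python
def cp_rank_for_compression(c_in, k_h, k_w, t_out, compression):
    """Largest CP rank whose factorized parameter count stays within
    (1 - compression) of the original c_in * k_h * k_w * t_out parameters.
    Illustrative only; Tensorly-Torch may select ranks differently."""
    full_params = c_in * k_h * k_w * t_out
    # A rank-R CP factorization (Eq. 2) stores R * (C + H + W + T) parameters.
    params_per_rank = c_in + k_h + k_w + t_out
    budget = (1.0 - compression) * full_params
    return max(1, int(budget // params_per_rank))

# Example: a 64 x 3 x 3 x 128 kernel at c = 0.75 keeps 25% of its parameters.
print(cp_rank_for_compression(64, 3, 3, 128, 0.75))  # -> 93
```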
We will refer to H = L × M × C as the set with possible hyperparameter values for (l,m, c) to consider. Note that compression procedures in the literature might only consider a subset of these choices. For example, a procedure might fix the layer for a given iteration or only consider a single TD method. In practice, it is computationally infeasible to evaluate the compressed network’s performance for every possible hyperparameter choice at every compression iteration, especially when optimizing for performance after fine-tuning. Instead, automated compression procedures will efficiently compare an approximation error ai = e(W̃i,W) between the original and decomposed weights using a particular choice of hyperparameters hi ∈ H. In doing so one relies on Assumption 1 that a lower approximation error is indicative of better compressed performance pi. If annotated training data is available, additional network fine-tuning on the decomposed structure could result in improved performance p⋆i > pi.
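As a concrete illustration of such an error-based selection step, consider the following sketch; the function and variable names are ours and not from the paper's implementation.

```python
def select_by_approximation_error(weights_per_layer, hypotheses, error_fn):
    """Return the hypothesis h = (layer, method, compression) with the lowest
    approximation error a_i = e(W_tilde_i, W), avoiding a full model
    evaluation per option. `hypotheses` maps each h to its decomposed
    weights W_tilde_i (hypothetical structure, for illustration only)."""
    errors = {
        h: error_fn(w_tilde, weights_per_layer[h[0]])
        for h, w_tilde in hypotheses.items()
    }
    return min(errors, key=errors.get)
```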
In this work, we propose to focus on a single iteration and investigate Assumption 1 in isolation of any specific compression procedure. Our aim is thus to assess how well computing an approximation error e can predict the optimal compression choice from some hypothesis set H, i.e. the choice that results in the lowest compressed network performance error p, or even performance after fine-tuning p⋆. We will study the correlation between approximation error and model performance empirically in our experiments, using the procedure and correlation metric explained in Section 3.1. Details on the different approximation errors that we will explore are covered in Section 3.2. Finally, the considered TD methods are explained in Section 3.3.
3.1 EMPIRICAL EVALUATION OF ERROR-PERFORMANCE CORRELATION
Our proposed empirical evaluation procedure will evaluate a large set of hyperparameters H = {h1, h2, · · · } on multiple convolutional neural networks and datasets (see Section 4) for different options of approximation error metric (see Section 3.2). For a given model, dataset, and approximation error metric e, the procedure evaluates for each set of hyperparameter choices hi ∈ H the approximation error ai = e(W̃i,W), the model performance error pi on the validation split, and the model performance error p⋆i after additional fine-tuning on the training data. We thus obtain sets of measurements A = {a1, a2, · · · }, P = {p1, p2, · · · } and P⋆ = {p⋆1, p⋆2, · · · } for H.

When comparing two sets of hyperparameters hi ∈ H and hj ∈ H, we want to establish if the set with the smaller approximation error results in a smaller performance error of the model. In other words, the concordance of pairs of measurements needs to be established. Concordant pairs have a larger (smaller) performance error when the approximation error is larger (smaller) between two sets of hyperparameter choices, i.e. i and j are concordant if ai > aj and pi > pj or if ai < aj and pi < pj, and discordant otherwise. Kendall’s τ is a measure for the rank correlation (Kendall, 1938) or ordinal association between two ordered sets, in our case between the approximation errors A and the model performances P (or P⋆). To avoid confusion with the concept of tensor rank, we will refer to Kendall’s τ simply as correlation. For this correlation measure, the difference between the number of concordant pairs (k) and discordant pairs (d) is scaled with the binomial coefficient m(m − 1)/2 to account for the different ways two measurements can be sampled from a total of m measurements:
$$\tau = \frac{2(k - d)}{m(m - 1)}. \tag{1}$$

Kendall’s τ can be interpreted as follows: τ = 1 indicates a perfect positive rank correlation, τ = 0 no correlation, and τ = −1 a perfect negative correlation. For a set of hyperparameters H, a useful approximation error e would thus result in a τ close to ±1, indicating it is predictive of the model’s performance. Note that Kendall’s τ does not depend on assumptions about the underlying distribution, whereas Pearson correlation assumes a linear relationship between the two measurements. Kendall’s τ is used over Spearman’s ρ because the interpretation of concordant and discordant pairs for Kendall’s τ is closely related to our use case of choosing between hyperparameter sets.
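A direct implementation of Equation 1, sketched below, makes the pair-counting explicit; with no tied values it agrees with library implementations such as scipy.stats.kendalltau.

```python
from itertools import combinations

def kendall_tau(a, p):
    """Kendall's tau between approximation errors `a` and performance errors
    `p` per Equation 1: (concordant - discordant) pairs over m(m - 1) / 2."""
    m = len(a)
    k = d = 0
    for i, j in combinations(range(m), 2):
        sign = (a[i] - a[j]) * (p[i] - p[j])
        if sign > 0:
            k += 1  # concordant pair
        elif sign < 0:
            d += 1  # discordant pair
    return 2 * (k - d) / (m * (m - 1))

# Example: a perfectly concordant ordering gives tau = 1.
print(kendall_tau([0.1, 0.2, 0.3], [0.05, 0.07, 0.09]))  # -> 1.0
```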
3.2 APPROXIMATION ERRORS
We now discuss various measures e(W̃,W) to quantify the approximation error. The basis is to compute some norm of the difference between these tensors; in this work we use the Frobenius norm, as is common in the literature (Lebedev et al., 2015; Hawkins et al., 2022). We shall consider three options to scale the norm, which could help make the error more robust when comparing hypotheses with different layers. Additionally, we can consider two options to compute the error on, namely either directly the weights or on the features. In total, we shall thus explore six different approximation errors in this work. An overview is presented in Table 1.
Normalization The norm of the difference between the weights is referred to as the absolute norm and is used in the objective function when decomposing the pretrained weights. The relative norm is used in the TD literature to compare errors between different layers (Lebedev et al., 2015; Hawkins et al., 2022), as it is invariant to the size of the weights. Alternatively, the norm of the difference can be scaled by the number of parameters, which accounts for layer size while keeping the error on the scale of the weights.
Target tensor The most common option is to compute approximation error on the decomposed layer’s weights, W. However, Jaderberg et al. (2014) achieved promising results basing the decomposition on the approximation error of the features. Errors in some elements of the weight tensor
might be more permissible if they do not affect the resulting feature space. We therefore also consider the expected error on the features F = X · W, i.e. the output tensor obtained by convolving the input data X with the weights W. Likewise, the approximated weights W̃ result in approximated features F̃. In practice, computing the feature space requires input data and is computationally more demanding than computing the weight approximation error. However, it is potentially more representative of an approximation’s effect on the output, and, unlike a later fine-tuning step, it can even be used if only unlabeled data is available.
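A minimal PyTorch sketch of these six error variants is given below. It assumes PyTorch's (T, C, H, W) weight layout for the convolution, and scaling the third variant by the number of entries is our reading of the text.

```python
import torch
import torch.nn.functional as F

def approximation_errors(W, W_hat, X=None):
    """Frobenius-norm errors between original and decomposed weights:
    absolute, relative, and scaled by the number of entries, on the weights
    and, when input data X is given, on the features. Sketch only."""
    def variants(target, approx):
        diff = torch.linalg.norm(target - approx)  # Frobenius norm of the difference
        return {
            "absolute": diff.item(),
            "relative": (diff / torch.linalg.norm(target)).item(),
            "scaled": (diff / target.numel()).item(),
        }

    errors = {f"{name} weights": v for name, v in variants(W, W_hat).items()}
    if X is not None:  # feature-based variants need (unlabeled) input data
        feats, feats_hat = F.conv2d(X, W), F.conv2d(X, W_hat)
        errors.update({f"{name} features": v
                       for name, v in variants(feats, feats_hat).items()})
    return errors
```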
3.3 TENSOR DECOMPOSITION METHODS
Our experiments shall consider three popular decomposition methods for convolutional layers, namely CP (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015), Tucker (Kim et al., 2016), and Tensor Train (TT) (Garipov et al., 2016). During the decomposition step, the decomposed weights are found by minimizing the approximation error between the pretrained weights and the estimated decomposition: $\arg\min_{\tilde{\mathcal{W}}} \lVert \mathcal{W} - \tilde{\mathcal{W}} \rVert$. For CP this is done with ALS (Carroll & Chang, 1970; Harshman, 1972), for Tucker with HOSVD (De Lathauwer et al., 2000), and for Tensor Train with TT-SVD (Oseledets, 2011). The ALS algorithm requires a random initialization; we sample from a uniform distribution on [0, 1) using Tensorly (Kossaifi et al., 2019b). The desired compression level is achieved by finding the corresponding rank, using the package Tensorly-Torch (Kossaifi et al., 2019b). The ranks used for CP, Tucker, and TT are given in Appendix A.1. For completeness, we list all considered decompositions of a 4-way tensor W below.
CP decomposition A rank-R CP decomposition (Hitchcock, 1927) sums R rank-one tensors:

$$\tilde{\mathcal{W}}^{\mathrm{CP}}_{c,y,x,t} = \sum_{r=1}^{R} C_{c,r}\, Y_{y,r}\, X_{x,r}\, T_{t,r}. \tag{2}$$
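Equation 2 corresponds to the factorized convolution structure of Lebedev et al. (2015): a pointwise contraction over input channels, two grouped one-dimensional spatial convolutions, and a pointwise expansion to the output channels. A minimal sketch follows, omitting bias, padding, stride, and the step that loads the fitted CP factors into the layers.

```python
import torch.nn as nn

class CPConv2d(nn.Module):
    """Sequence of four small convolutions realizing a rank-R CP kernel (Eq. 2).
    Sketch only: bias, padding, stride, and weight loading are omitted."""
    def __init__(self, c_in, c_out, k_h, k_w, rank):
        super().__init__()
        self.channel_in = nn.Conv2d(c_in, rank, 1, bias=False)                     # factor C
        self.vertical = nn.Conv2d(rank, rank, (k_h, 1), groups=rank, bias=False)   # factor Y
        self.horizontal = nn.Conv2d(rank, rank, (1, k_w), groups=rank, bias=False) # factor X
        self.channel_out = nn.Conv2d(rank, c_out, 1, bias=False)                   # factor T

    def forward(self, x):
        return self.channel_out(self.horizontal(self.vertical(self.channel_in(x))))
```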
Tucker decomposition A Tucker decomposition (Tucker, 1966) is distinguished from CP by the Tucker core G ∈ R^{R1×R2×R3×R4}. The Tucker rank is defined as the four-tuple (R1, R2, R3, R4). Since the width and height dimensions of the convolutional kernel are small, it is computationally more efficient to contract them with the Tucker core G and form a new core H = G ×2 Y ×3 X, where ×n is the n-mode product (Appendix A.2):

$$\tilde{\mathcal{W}}^{\mathrm{Tucker}}_{c,y,x,t} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sum_{r_3=1}^{R_3} \sum_{r_4=1}^{R_4} G_{r_1,r_2,r_3,r_4}\, C_{c,r_1}\, Y_{y,r_2}\, X_{x,r_3}\, T_{t,r_4} = \sum_{r_1=1}^{R_1} \sum_{r_4=1}^{R_4} H_{r_1,y,x,r_4}\, C_{c,r_1}\, T_{t,r_4}. \tag{3}$$
Tensor Train decomposition Another alternative is the Tensor Train decomposition (Oseledets, 2011), which decomposes a given tensor as a linear chain of 2-way and 3-way tensors, where the first and last tensors are 2-way. The TT-rank in our four-dimensional case is the 3-tuple (R1, R2, R3):

$$\tilde{\mathcal{W}}^{\mathrm{TT}}_{c,y,x,t} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \sum_{r_3=1}^{R_3} C_{c,r_1}\, Y_{r_1,y,r_2}\, X_{r_2,x,r_3}\, T_{r_3,t}. \tag{4}$$
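The three fits can be reproduced with Tensorly, as in the sketch below. The rank arguments shown are illustrative; in the experiments they are derived from the target compression level (Appendix A.1).

```python
import tensorly as tl
from tensorly.decomposition import parafac, tucker, tensor_train

tl.set_backend('pytorch')  # the weight tensors in this study are PyTorch tensors

def fit_and_relative_error(W, method, rank):
    """Fit one decomposition to a 4-way weight tensor and return the relative
    Frobenius error ||W - W_hat|| / ||W||. Ranks shown are illustrative."""
    if method == 'CP':        # ALS with random uniform initialization
        W_hat = tl.cp_to_tensor(parafac(W, rank=rank, init='random'))
    elif method == 'Tucker':  # HOSVD-based; rank is a 4-tuple (R1, R2, R3, R4)
        W_hat = tl.tucker_to_tensor(tucker(W, rank=rank))
    elif method == 'TT':      # TT-SVD; rank is (1, R1, R2, R3, 1)
        W_hat = tl.tt_to_tensor(tensor_train(W, rank=rank))
    return float(tl.norm(W - W_hat) / tl.norm(W))
```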
4 EXPERIMENTS
This section provides the implementation details and discusses the results of our empirical approach.
4.1 EXPERIMENTAL SETUP
Datasets The experiments are run on the datasets CIFAR-10 (Krizhevsky, 2009) and Fashion-MNIST (Xiao et al., 2017). These datasets are common classification benchmarks for testing TD in CNNs (Cheng et al., 2021; Denil et al., 2014; Wu et al., 2020; Garipov et al., 2016; Hawkins et al., 2021). For both datasets, the original training sets are split into a set used for training and a set used for validation. The split is made such that equal class distributions are maintained. The details specific to the datasets are as follows: CIFAR-10 has 10 classes distributed equally across 60,000 images of 32 × 32 pixels with 3 color channels. After our validation split, there are 45,000 images in the training set and 5,000 in the validation set. The test set of 10,000 images remains unchanged. Fashion-MNIST has 10 classes distributed equally across 70,000 grayscale images of 28×28 pixels. After our validation split, there are 55,000 images in the training set and 5,000 in the validation set.
Model architecture and training The models used are ResNet-18 (He et al., 2016) and GaripovNet (Garipov et al., 2016). ResNet is a well-performing state-of-the-art convolutional neural network. GaripovNet is a 7-layer convolutional neural network proposed in Garipov et al. (2016) and used by Hawkins et al. (2022) for image classification. These models enable comparison with other works within and beyond TD for deep learning (Gusak et al., 2019; Garipov et al., 2016; Hawkins et al., 2022; Chu & Lee, 2021; Kossaifi et al., 2020). The following hyperparameters are used: ResNet-18 is trained with batch size 128, for 300 epochs, with Adam optimizer and a learning rate of 10−3. At epochs 100 and 150 the learning rate is multiplied by 0.1. GaripovNet is trained with the same settings as the original paper Garipov et al. (2016). The model is trained with Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a learning rate of 0.1, multiplied by 0.1 at epochs 30, 60, and 90.
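The two training setups translate to standard PyTorch optimizers and milestone schedulers, as in this sketch (our own helper, not the authors' code):

```python
import torch

def training_setup(model, name):
    """Optimizer and learning-rate schedule matching the described settings."""
    if name == 'resnet18':
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        milestones = [100, 150]
    else:  # 'garipovnet'
        opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
        milestones = [30, 60, 90]
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=milestones, gamma=0.1)
    return opt, sched
```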
The validation set is used for early stopping and the selection of the training hyperparameters, i.e. learning rate, schedule, level of annealing, batch size, and optimizer. Training data is augmented with a random crop (padding with 4 pixels and cropping to the original size) and a random horizontal flip. All images are standardized per channel based on the mean and standard deviation over all training samples. Early stopping is applied both for training the baseline and for fine-tuning the decomposed model. The classification error on the test set is used for the performance errors P. To fine-tune after decomposition and obtain the performance errors P⋆, ResNet-18 is optimized for another 25 epochs, and GaripovNet for 10 epochs, using the last learning rate from training.
Decomposition choices We now explain the considered values for the decomposition choices H explained in Section 3. For both neural network models, neither the first nor the last layer will be decomposed, as these layers already contain a relatively small amount of parameters. For GaripovNet the other five layers are part of L. For ResNet-18, L contains a selection of eight convolutional layers, details of which can be found in Appendix A.3. The set of TD methods that will be considered is M = {CP,Tucker,Tensor Train}, which were discussed in Section 3.3. The set of compression levels is C = {10%, 25%, 50%, 75%, 90%}. Multiple levels of compression are considered as each neural network layer can have different efficiency-performance trade-offs (Lebedev et al., 2015). In the experiments, we evaluate ResNet-18 on CIFAR-10, GaripovNet on CIFAR-10, and GaripovNet on F-MNIST. We exclude ResNet-18 on F-MNIST as the dataset is not sufficiently challenging for this model, and compressing one layer does not have a measurable impact on the performance due to the model’s size and skip-connections.
Variance The process of decomposing and fine-tuning is repeated for five independent runs for each choice of layer, decomposition method, and compression level to assess and report variance in the results. Note that due to the stochasticity of the ALS algorithms, the random initializations can result in different CP decomposition estimates. The variance in correlation shown in the plots without fine-tuning results from the randomness in the CP initialization. Fine-tuning adds additional variance through its use of batched SGD. The observed variance with fine-tuning accounts for both the randomness from CP initialization as well as from fine-tuning, thereby representing all sources of randomness in our methodology. All runs for a given model and dataset are based on the same pretrained weights, so this is not a source of reported variance. In total, this results in 600 measurements for ResNet-18 and 375 measurements for GaripovNet per dataset.
4.2 EXPERIMENTAL RESULTS
Impact of compression levels on correlation We start by calculating the correlation across the layers and decomposition methods for multiple runs and calculate the averages grouped by compression levels. This is presented in Figure 1, where the bars are the average correlation τ between the Relative Weights and the classification error. The correlation is only based on the Relative Weights, as this is the most common metric in recent literature (Lebedev et al., 2015; Hawkins et al., 2022; Liebenwein et al., 2021). The correlations are grouped by the different levels of compression and represented by different colors. The error bars are ±1 standard deviation, representing the variance from multiple runs.
In Figure 1, it can be seen that the larger the compression, the higher the correlation is. This is a positive result for our use case. In the end, we are interested in making decomposition choices when compressing. The more we compress, the higher the correlation and therefore the more certain we are that basing our choice on the approximation error results in the optimal choice. It can also be noted that a certain level of compression is needed to be able to make choices based on the approximation error. For both models and datasets, the correlation is small when the compression is only 10% and 25%. The variance in the correlation at smaller compression levels is larger than at higher compression levels. When the compression is too small the effect on the performance of the model is too small compared to the observed variance, especially after fine-tuning. In the remainder of the experiments, we therefore focus on compression levels of C = {50%, 75%, 90%}.
Comparison of approximation error measures Works such as Liebenwein et al. (2021) have used a single approximation error, e.g. Relative Weights, to identify which layer to compress next, implicitly assuming that relative errors between layers are indicative of the relative model performance differences. We here compare the various approximation error measures, testing the correlation with performance over all decomposition choices. In Figure 2, the correlation is calculated based on measurements of all combinations of layer, decomposition method, and compression level once. The correlations are averaged over runs, as well as the ±1 standard deviation is calculated over the runs.
Figure 2 shows that the correlations are generally positive and significantly different from zero. This means that the decomposition choices can (to some extent) be based on the approximation error. There is one exception where the correlation is close to zero, namely for the Absolute Weights measure on ResNet-18. The difference in correlations can be explained by the difference in approximation error between layers; a detailed explanation is provided in Appendix A.4. These results suggest that Absolute-based approximation errors, while they may show high correlations in some cases, are not generally a reliable indicator of future model performance, and that normalized measures should be used instead.
Comparing the different approximation error metrics, we observe the highest correlation with the performance for Relative Weights in all tested cases. Interestingly, the magnitude of the correlation for Feature-based measures is similar to or smaller than the correlation for Weight-based measures. Although the findings of Jaderberg et al. (2014) would suggest a stronger correlation for the features, at least before fine-tuning, we do not observe benefits for basing decomposition choices on the approximation error of the features rather than the weights. Possibly pretraining already ensures all the elements in the weight tensor are equally important for the target data distribution, thus a Weight-based error already reliably reflects resulting errors on the output features. Comparing the error bars with and without fine-tuning, the randomness from fine-tuning has a larger impact on the variance in correlation than the randomness from CP initialization. In summary, our results support the use of the Relative Weights approximation error to make decomposition choices.
Impact of fine-tuning Most works use fine-tuning to recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). The right subfigure of Figure 2 shows the mean and standard deviation of five correlations, per model and per dataset after fine-tuning. After fine-tuning, the correlation between the approximation error and the performance error is smaller than before fine-tuning for GaripovNet, as additional training adapts the model and reduces the performance gap between the different choices, but this effect is not observed for ResNet-18 where the correlation was already lower. However, for both models, there is still a clear positive correlation between the approximation error and the performance after fine-tuning. This means that decomposition choices can still be based on the approximation error when intending to perform fine-tuning later, even though different hyperparameters might be optimal without and with fine-tuning.
While the correlation is positive and significantly different from zero, the correlation is only around +0.5 for ResNet-18. We therefore investigate if the correlation is higher when only considering specific decomposition choices next.
Correlation across Layers vs. Methods In the previous experiments, we compared how decomposition choices on both different layers and methods correlated with performance. Here we investigate if the correlation is stronger when only one of these choices is considered. For instance, previous works often only include layers as a decomposition choice, and have not compared across decomposition methods. We compare the correlation on all choices (L × M × C) as before, to layers only (L × {m} × {c}, with reported results averaged over all m ∈ M and c ∈ C), and to methods only ({l} × M × {c}, with reported results averaged over all l ∈ L and c ∈ C). Figure 3 shows that before fine-tuning the approximation error has a lower correlation with the performance of the model when considering layers only compared to all decomposition choices. Not all layers of a neural network have the same efficiency-performance trade-off (Lebedev et al., 2015; Hawkins et al., 2021). Therefore, the correlation is lower when we fix the decomposition method and compression level; it is better to combine layers with compression levels (and decomposition methods). However, fine-tuning recovers some of the correlation for layers. Across decomposition methods, the correlation before fine-tuning is comparable to the correlation calculated across all decomposition choices. These results suggest decomposition methods can be compared better than just the layers before fine-tuning, although the former is not an optimization choice considered in previous works. Interestingly, for GaripovNet the correlation across decomposition methods drops significantly after fine-tuning. We find that this is due to difficulties in optimizing the CP-decomposed layers, since the gradient flow through CP convolutions is a known problem (Silva & Lim, 2008; Lebedev et al., 2015), whereas ResNet does not suffer from this due to its skip connections. We conclude that (unlike current practice) network compression could consider multiple decomposition methods, as their approximation errors can be compared, though most reliably when aiming for compression without fine-tuning.
5 CONCLUSION
We have tested Assumption 1, and find that there is a positive correlation between the relative approximation error on the weights and the resulting performance error of the model for a wide range of TD choices, including layers, methods, and compression levels. We further find that using data to compute the approximation error on the features, rather than simply on the model weights directly, does not improve the correlation. Scaling the approximation error with the norm of the original tensor provides the highest and most stable correlation across all compared models and datasets. Our findings suggest that the Relative Weights approximation error is best suited to select among TD decomposition choices.
While these choices can be made across layers, TD methods, and compression levels, we observe that the correlation before fine-tuning is smaller when comparing between layers for a fixed method, than when comparing across methods (here: CP, Tucker, and Tensor Train) for a fixed layer. Integrating multiple types of decompositions within a network compression technique is therefore a potential direction for future work, although care has to be taken when the use case includes later fine-tuning, as the correlation for selecting across decomposition methods can degrade since backpropagation through certain factorized structures remains challenging.
Our experiments are limited to a set of decomposition choices and network layers commonly found in the TD literature. Future work can extend to other decompositions and other types of neural network layers, e.g. fully connected layers. While the weights are matrices, tensor decomposition has been applied to fully connected layers by reshaping the weight matrix into a higher-order tensor. The choice of reshaping then becomes an additional decomposition choice.
REPRODUCIBILITY
The authors find it important that this work is reproducible. To this end, the following efforts have been made: The datasets and test-validation splits are described in Section 4.1. The datasets are collected from PyTorch Vision. The models and hyperparameters used for training are covered in Section 4.1. The implementations of the baseline models are from the PyTorch Model Zoo. The models are factorized with Tensorly-Torch (Kossaifi et al., 2019b), using the CP initialization described in Section 4.1 and the ranks provided in Appendix A.1. The experimental setup is explained in Section 4.1. The calculation of metrics is formulated in Section 3. Finally, the code to reproduce these experiments is available at: https://github.com/JSchuurmans/tddl.
ACKNOWLEDGMENTS
Described results are made possible in part by TU Delft Cohesion subsidy, TERP Cohesion project 2020.
A APPENDIX
A.1 RANK AND COMPRESSION LEVEL
Tables 2 and 3 present the ranks that are used for GaripovNet (Garipov et al., 2016) and ResNet-18 (He et al., 2016) respectively. Note that in these tables, the Tucker rank includes the kernel ranks corresponding to the width and height, and the TT rank includes R0 = R4 = 1, which are left out of Equation 4 for conciseness.
A.2 N-MODE PRODUCT
The definition of the n-mode product ×n given by Kolda & Bader (2009) is used in this paper. The contraction of a tensor X ∈ R^{R1×R2×···×RN} with a matrix Y ∈ R^{Y×Rn} along the nth mode of the tensor is defined elementwise as:

$$(\mathcal{X} \times_n Y)_{r_1,\cdots,r_{n-1},y,r_{n+1},\cdots,r_N} = \sum_{r_n=1}^{R_n} \mathcal{X}_{r_1,\cdots,r_{n-1},r_n,r_{n+1},\cdots,r_N}\, Y_{y,r_n} \tag{5}$$
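Equation 5 is available in Tensorly as mode_dot; a small sketch with illustrative shapes:

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import mode_dot

G = tl.tensor(np.random.rand(4, 5, 6))  # tensor of shape (R1, R2, R3)
Y = tl.tensor(np.random.rand(7, 5))     # matrix of shape (Y, R2)
H = mode_dot(G, Y, mode=1)              # contracts mode 1: result shape (4, 7, 6)
print(H.shape)
```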
A.3 SELECTION OF RESNET-18 LAYERS
The following layers are considered for decomposition in ResNet-18:
• The last layer of the first two blocks.
• The first layer with stride two.
• The first layer after a 1x1 convolution.
• A 1x1 convolution with a similar number of parameters as the first choice of layer.
• The layer before and after the 1x1 convolution.
• The final two convolutional layers, as they are the largest convolutional layers.
A.4 DIFFERENCE BETWEEN LAYERS IN ABSOLUTE WEIGHTS FOR RESNET-18
Let us recall from Figure 2 that when basing the approximation error on Absolute Weights, the correlation with the performance error is close to zero for ResNet-18. This near-zero correlation is due to differences in error magnitude between layers, as detailed below.
Figure 4 plots the approximation errors of Relative and Absolute Weights versus the performance error before and after fine-tuning. The points resulting from CP, Tucker, and Tensor Train with compression of 50%, 75%, and 90% are grouped per layer.
Layers 15, 19, and 28 have a smaller absolute weight error and a large performance error relative to layers 38, 60, and 63. Compare this to the data for Relative Weights, where layers 60 and 63 have a small relative weight error and a small performance error, and layer 28 has errors comparable to layers 15 and 19. This leads to the correlation between Absolute Weights and performance errors being close to zero, while it is positive for Relative Weights. Therefore, the difference in correlations can be explained by the difference in approximation error between layers.
1. What is the main contribution of the paper regarding tensor compressive layers?
2. What are the strengths and weaknesses of the paper's approach to studying the correlation between approximation errors and classification performance?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any specific questions or concerns raised by the reviewer regarding the paper's methodology or conclusions?
Summary Of The Paper
This paper empirically evaluates the correlation between different hyperparameters of tensor-compressed layers and the classification performance. Its results show that the error is indicative of model performance, even when comparing multiple TD methods, though useful correlation only occurs at the higher compression levels.
Strengths And Weaknesses
[Strengths]
S1: This is the first paper to systematically study how well the approximation error relates to a compressed neural network’s performance across multiple choices of network layers, TD methods, and compression levels.
S2: The experimental results support the conclusion that there is a positive correlation between the relative approximation error on the weights and the resulting performance error of the model for a wide range of TD choices, including layers, methods, and compression levels.
[Weaknesses]
W1: To test Assumption 1, this work reports some experimental phenomena/results based on extensive experiments. However, it doesn't go far enough in the sense that possible reasons behind these phenomena (e.g., the impact of compression levels on the rank correlation) are not sufficiently investigated.
W2: Although showing the correlation between weight (tensor) approximation error and the classification performance is interesting, it is more significant if some practical suggestions are given for designing compressed deep models based on tensor decompositions. However, the empirical findings of this work seem of limited guiding value for a deep learning practitioner who needs to decide which layer to compress, which tensor decomposition to use, and which compression level to choose for both efficiency and effectiveness.
W3: The writing is not always satisfactory due to typos like "no works investigate if the make TD decomposition choices using specific approximation errors are well suited", "previous works often only includes", and "Shown are averages and standard deviation over runs We observe that a higher compression". There are also confusing notions used, e.g., "rank" in "the correlation is calculated based on measurements of all combinations of layer, decomposition method, and rank once".
Clarity, Quality, Novelty And Reproducibility
[Clarify score 4/10] This paper is in general moderately well written. The clarity is not always satisfactory. See Weakness W3.
[Quality score 5/10]
(1) This is the first systematic study on the influences of the approximation error to a compressed neural network’s performance across multiple choices of network layers, TD methods, and compression levels. The experimental findings are new.
(2) Although this work presents extensive experimental results, it lacks deeper explanations for the empirical findings and sufficiently practical suggestions for deep learning practitioners in designing compressed models.
[Novelty score 4/10]
(1) This paper does not propose new models, algorithms, or theory.
(2) It empirically tests the validity of Assumption 1 which is motivated by Liebenwein et al. (2021).
[Reproducibility score 4/10] The conclusion of this paper relies heavily on the empirical results. Although the authors gave some detailed descriptions of the experimental settings, there are still many details not well explained in the current version for reproduction (especially without shared code). For example, what are the detailed model & algorithmic hyperparameters in using CP, Tucker, and TT?
ICLR | Title
How Informative is the Approximation Error from Tensor Decomposition for Neural Network Compression?
Abstract
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights is a proxy for the performance of the model to compress multiple layers and fine-tune the compressed model. Surprisingly, little research has systematically evaluated which approximation errors can be used to make choices regarding the layer, tensor decomposition method, and level of compression. To close this gap, we perform an experimental study to test if this assumption holds across different layers and types of decompositions, and what the effect of fine-tuning is. We include the approximation error on the features resulting from a compressed layer in our analysis to test if this provides a better proxy, as it explicitly takes the data into account. We find the approximation error on the weights has a positive correlation with the performance error, before as well as after fine-tuning. Basing the approximation error on the features does not improve the correlation significantly. While scaling the approximation error commonly is used to account for the different sizes of layers, the average correlation across layers is smaller than across all choices (i.e. layers, decompositions, and level of compression) before fine-tuning. When calculating the correlation across the different decompositions, the average rank correlation is larger than across all choices. This means multiple decompositions can be considered for compression and the approximation error can be used to choose between them.
N/A
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights is a proxy for the performance of the model to compress multiple layers and fine-tune the compressed model. Surprisingly, little research has systematically evaluated which approximation errors can be used to make choices regarding the layer, tensor decomposition method, and level of compression. To close this gap, we perform an experimental study to test if this assumption holds across different layers and types of decompositions, and what the effect of fine-tuning is. We include the approximation error on the features resulting from a compressed layer in our analysis to test if this provides a better proxy, as it explicitly takes the data into account. We find the approximation error on the weights has a positive correlation with the performance error, before as well as after fine-tuning. Basing the approximation error on the features does not improve the correlation significantly. While scaling the approximation error commonly is used to account for the different sizes of layers, the average correlation across layers is smaller than across all choices (i.e. layers, decompositions, and level of compression) before fine-tuning. When calculating the correlation across the different decompositions, the average rank correlation is larger than across all choices. This means multiple decompositions can be considered for compression and the approximation error can be used to choose between them.
1 INTRODUCTION
Tensor Decompositions (TD) have shown potential for compressing pre-trained models, such as convolutional neural networks, by replacing the optimized weight tensor with a low-rank multi-linear approximation with fewer parameters (Jaderberg et al., 2014; Lebedev et al., 2015; Kim et al., 2016; Garipov et al., 2016; Kossaifi et al., 2019a). Common compression procedures (Lebedev et al., 2015; Garipov et al., 2016; Hawkins et al., 2021) work by iteratively applying TD on a selected weight tensor, where each time several decomposition choices have to be made regarding (i) the layer to compress, (ii) the type of decomposition, and (iii) the compression level. Selecting the best hyperparameters for these choices at a given iteration requires costly re-evaluating the full model for each option. Recently, Liebenwein et al. (2021) suggested comparing the approximation errors on the decomposed weights as a more efficient alternative, though they only considered matrix decompositions for which analytical bounds on the resulting performance exist. These bounds rely on the Eckhart-Young-Mirsky theorem. For TD, no equivalent theorem is possible (Vannieuwenhoven et al., 2014). While theoretical bounds are not available for more general TD methods, the same concept could still be practical when considering TDs too. We summarize this as the following general assumption:
Assumption 1. A lower TD approximation error on a model’s weight tensor indicates better overall model performance after compression.
While this assumption appears intuitive and reasonable, we observe several gaps in the existing literature: First, most existing TD compression literature only focuses on a few decomposition choices, e.g. fixing the TD method (Lebedev et al., 2015; Kim et al., 2016). Although various error measures and decomposition choices have been studied in separation, no prior work systematically compares different decomposition errors across multiple decomposition choices. Second, different decomposition errors with different properties have been used throughout the literature (Jaderberg et al., 2014), and it is unclear if some error measure should be preferred. Third, a benefit of TD is that no training data is needed for compression, though if labeled data is available, more recent methods combine TD with a subsequent fine-tuning step. Is the approximation error equally valid for the model performance with and without fine-tuning?
Overall, to the best of the authors’ knowledge, no prior work investigates if and which decomposition choices for TD network compression can be made using specific approximation errors. This paper studies empirically to what extent a single decomposition error correlates with the compressed model’s performance across varied decomposition choices, identifying how existing procedures could be improved, and providing support for specific practices. Our contributions are as follows:
• A first empirical study is proposed on the correlation between the approximation error on the model weights that result from compression with TD, and the performance of the compressed model1. Studied decomposition choices include the layer, multiple decomposition methods (CP, Tucker, and Tensor Train), and level of compression. Measurements are made using several models and datasets. We show that the error is indicative of model performance, even when comparing multiple TD methods, though useful correlation only occurs at the higher compression levels.
• Different formulations for the approximation error measure are compared, including measuring the error on the features as motivated by the work Jaderberg et al. (2014); Denil et al. (2014) which considers the data distribution. We further study how using training labeled data for additional fine-tuning affects the correlation.
2 RELATED WORK
There is currently no systematic study on how well the approximation error relates to a compressed neural network’s performance across multiple choices of network layers, TD methods, and compression levels. We here review the most similar and related studies where we distinguish works with theoretical versus empirical validation, different approximation error measures, and the role of fine-tuning after compression.
The relationship between the approximation error on the weights and the performance of the model was studied by theoretical analysis for matrix decompositions. Liebenwein et al. (2021) derive bounds on the model performance for SVD-based compression on the convolutional layers, and thus motivate that the SVD approximation error is a good proxy for the compressed model performance. Arora et al. (2018) derive bounds on the generalization error for convolutional layers based on a compression error from their matrix projection algorithm. Baykal et al. (2019) show how the amount of sparsity introduced in a model’s layers relates to its generalization performance. While these works show that some theoretical bounds can be found for specific compression methods, such bounds are not available for TDs in general. Other works, therefore, study the relationship for TD empirically. For instance, Lebedev et al. (2015) show how CP decomposition rank affects the approximation error, and the resulting accuracy drop as the rank is decreased. Hawkins et al. (2022) observe that, for networks with repeated layer blocks, the approximation error depends on the convolutional layers within the block.
When considering the model’s final task performance, the approximation error on the weights might not be the most relevant measure. To consider the effect on the actual data distribution, Jaderberg et al. (2014) instead propose to compute an error on the approximated output features of a layer after its weights have been compressed. They found that compressing weights by minimizing the error on features, rather than the error on the weights, results in a smaller loss in classification accuracy. However, Jaderberg et al. (2014) do not fine-tune the decomposed model, and only use a toy model with few layers. Denil et al. (2014) try to capture the information from the data via the empirical
1The code for our experiments is available at https://github.com/JSchuurmans/tddl.
covariance matrix. Although this method eliminates the need for multiple passes over the data during the compression step, it is limited to a two-dimensional case. Eo et al. (2021) forego looking at a compression error altogether, and selects the rank based on the accuracy on the validation set. This requires the labels to be present and a forward pass through the whole network, even if the compressed layer is near the input.
Several norms have been used in the literature to quantify the approximation error, which is the difference between the pretrained weights and the compressed weights. The works that decompose pretrained layers (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015; Novikov et al., 2015; Kim et al., 2016) explicitly minimize the Frobenius norm. Hawkins et al. (2021); Liebenwein et al. (2021) calculate the relative Frobenius norm, i.e., the norm of the error proportional to the norm of the pretrained weights, to compare the error for layers of different sizes. Still, it remains unclear which error measure is most informative for the compressed model’s final performance.
In practice, when training data is still available for compression, fine-tuning for the target task after compressing weights could recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). Adding fine-tuning results in a three-step process: pretrain, compress and fine-tune. Optimization thus alternates between minimizing the error of respectively the features, the weights, and finally the features again. While Denton et al. (2014); Kim et al. (2016) compare compressed model performance before and after fine-tuning, they do not investigate how the finetuned network performance relates to the weight compression error. Lebedev et al. (2015) does study the compression error for CP decomposition, but only reports performance with fine-tuning.
3 METHODOLOGY
We consider the task of compressing a pretrained neural network with TD. While TD is a general technique that could be applied to many types of layers, the focus will be on convolutional layers. Due to their ubiquity and suitability to compare different types of higher-order decompositions, as the layer weights are four-dimensional tensors.
Generally, a compression procedure will iteratively apply TD to the weights of selected layers, making several choices on how and what weight tensor to decompose, while ideally maintaining as much of the original network’s performance. In its original uncompressed form, the full-rank weight tensors W ∈ RC×H×W×T of a layer represent a local optimum in the network’s parameter space with respect to the training data and loss, where C is the number of input channels, H and W are the height and width of the convolutional kernel, and T is the number of output channels. When a TD is applied to the weights of a specific layer, this results in a factorized structure W̃ composed of multiple smaller tensor multiplications, which replaces the original weights in the network. Each time TD is applied, several decomposition choices need to be evaluated:
1. Layer: The layer l from the set of network layers L = {1, 2, · · · , L} to decompose. 2. Method: The type of TD method m ∈ M = {CP,Tucker,Tensor Train}. The decomposition
determines the factorized structure of W̃. 3. Compression: The compression level c ∈ C for the selected layer. Here C ⊂ (0, 1] is some finite
set of testable levels, and c = 0.75 means the number of parameters is reduced by 75% and the factorized layer contains only 25% of the parameters. A given compression level is achieved by decomposing the tensor to some rank R, depending on the selected TD method (see Section 3.3).
We will refer to H = L × M × C as the set with possible hyperparameter values for (l,m, c) to consider. Note that compression procedures in the literature might only consider a subset of these choices. For example, a procedure might fix the layer for a given iteration or only consider a single TD method. In practice, it is computationally infeasible to evaluate the compressed network’s performance for every possible hyperparameter choice at every compression iteration, especially when optimizing for performance after fine-tuning. Instead, automated compression procedures will efficiently compare an approximation error ai = e(W̃i,W) between the original and decomposed weights using a particular choice of hyperparameters hi ∈ H. In doing so one relies on Assumption 1 that a lower approximation error is indicative of better compressed performance pi. If annotated training data is available, additional network fine-tuning on the decomposed structure could result in improved performance p⋆i > pi.
In this work, we propose to focus on a single iteration and investigate Assumption 1 in isolation of any specific compression procedure. Our aim is thus to assess how well computing an approximation error e can predict the optimal compression choice from some hypothesis set H, i.e. the choice that results in the lowest compressed network performance error p, or even performance after fine-tuning p⋆. We will study the correlation between approximation error and model performance empirically in our experiments, using the procedure and correlation metric explained in Section 3.1. Details on the different approximation errors that we will explore are covered in Section 3.2. Finally, the considered TD methods are explained in Section 3.3.
3.1 EMPIRICAL EVALUATION OF ERROR-PERFORMANCE CORRELATION
Our proposed empirical evaluation procedure will evaluate a large set of hyperparameters H = {h1, h2, · · · } on multiple convolutional neural networks and datasets (see Section 4) for different options of approximation error metric (see Section 3.2). For a given model, dataset, and approximation error metric e, the procedure evaluates for each set of hyperparameter choices hi ∈ H the approximation error ai = e(W̃i,W), the model performance error pi on the validation split, and the model performance error p⋆i after additional fine-tuning on the training data. We thus obtain sets of measurements A = {a1, a2, · · · }, P = {p1, p2, · · · } and P⋆ = {p⋆1, p⋆2, · · · } for H. When comparing two sets of hyperparameters hi ∈ H and hj ∈ H, we want to establish if the set with the smaller approximation error results in a smaller performance error of the model. In other words, the concordance of pairs of measurements needs to be established. Concordant pairs have a larger (smaller) performance error when the approximation error is larger (smaller) between two sets of hyperparameter choices, i.e. i and j are concordant if ai > aj and pi > pj or if ai < aj and pi < pj , and discordant otherwise. Kendall’s τ is a measure for the rank correlation (Kendall, 1938) or ordinal association between two order sets, in our case between approximation errors e and a model performances P (or P⋆). To avoid confusion with the concept of tensor rank, we will refer to Kendall’s τ simply as correlation. For this correlation measure, the difference between the number of concordant pairs (k) and discordant pairs (d) is scaled with the binomial coefficient m(m− 1)/2 to account for the different ways two measurements can be sampled from a total of m measurements:
τ = 2(k − d)/(m(m− 1)). (1) Kendall’s τ can be interpreted as follows: τ = 1 indicates a perfect positive rank correlation, τ = 0 no correlation, and τ = −1 a strong negative correlation. For a set of hyperparameters H, a useful approximation error e would thus result in a τ close to ±1, indicating it is predictive of the model’s performance. Note that Kendall’s τ does not depend on assumptions about the underlying distribution, whereas Pearson correlation assumes a linear relationship between the two measurements. Kendall’s τ is used over Spearman’s ρ because the interpretation of con- and discordance pairs for Kendall’s τ is closely related to our use case of choosing between hyperparameter sets.
3.2 APPROXIMATION ERRORS
We now discuss various measures e(W̃,W) to quantify the approximation error. The basis is to compute some norm on the difference between these tensors, in this work we use the Frobenius norm as is common in the literature (Lebedev et al., 2015; Hawkins et al., 2022). We shall consider three options to scale the norm, which could help make the error more robust when comparing hypotheses with different layers. Additionally, we can consider two options to compute the error on, namely either directly the weights or on the features. In total, we shall thus explore six different approximation errors in this work. An overview is presented in Table 1.
Normalization The norm between the difference of the weights is referred to as absolute norm and is used in the objective function when decomposing the pretrained weights. The relative norm is used in TD literature to compare errors between different layers (Lebedev et al., 2015; Hawkins et al., 2022), as it is invariant to the size of the weights. Alternatively, the norm of the difference can be scaled to account for the number of parameters, while keeping the distance from the weights.
Target tensor The most common option is to compute approximation error on the decomposed layer’s weights, W. However, Jaderberg et al. (2014) achieved promising results basing the decomposition on the approximation error of the features. Errors in some elements of the weight tensor
might be more permissible if they do not affect the resulting feature space. We therefore also consider the expected error on the features F = X · W, which is the output tensor of the layer after convolving its input with weights W on input data X. Likewise, approximated weights W̃ result in an approximated F̃. In practice, computing the feature space requires input data and is computationally more demanding than computing the weight approximation error. However, it is potentially more representative of an approximation’s effect on the output, plus unlike a later fine-tuning step, it could even be used if only data without labels is available.
3.3 TENSOR DECOMPOSITION METHODS
Our experiments consider three popular decomposition methods for convolutional layers, namely CP (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015), Tucker (Kim et al., 2016), and Tensor Train (TT) (Garipov et al., 2016). During the decomposition step, the decomposed weights are found by minimizing the approximation error between the pretrained weights and their estimated decomposition: $\arg\min_{\tilde{W}} \lVert W - \tilde{W} \rVert$. This is done with ALS (Carroll & Chang, 1970; Harshman, 1972) for CP, with HOSVD (De Lathauwer et al., 2000) for Tucker, and with TT-SVD (Oseledets, 2011) for Tensor Train. The ALS algorithm requires a random initialization; we sample from a uniform distribution on [0, 1) using Tensorly (Kossaifi et al., 2019b). The desired compression level is achieved by finding the corresponding rank, using the package Tensorly-Torch (Kossaifi et al., 2019b). The ranks used for CP, Tucker, and TT are given in Appendix A.1. A minimal sketch of this decomposition step is shown below; for completeness, we then list all considered decompositions of a 4-way tensor W.
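The sketch fits all three decompositions to a stand-in weight tensor with Tensorly; the rank values here are placeholder choices of ours, whereas the experiments select ranks to meet the target compression level (Appendix A.1):

```python
import torch
import tensorly as tl
from tensorly.decomposition import parafac, tucker, tensor_train

tl.set_backend("pytorch")
W = torch.randn(128, 64, 3, 3)  # stand-in for a pretrained 4-way weight tensor

cp_factors = parafac(W, rank=16, init="random")       # ALS with random initialization
tucker_factors = tucker(W, rank=(16, 16, 3, 3))       # HOSVD-based
tt_factors = tensor_train(W, rank=(1, 16, 16, 3, 1))  # TT-SVD; boundary ranks are 1

# Reconstruct the low-rank estimates W_hat to compute approximation errors.
W_cp = tl.cp_to_tensor(cp_factors)
W_tucker = tl.tucker_to_tensor(tucker_factors)
W_tt = tl.tt_to_tensor(tt_factors)
```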
CP decomposition A rank-R CP decomposition (Hitchcock, 1927) sums R rank-one tensors:
$$\tilde{W}^{\text{CP}}_{c,y,x,t} = \sum_{r=1}^{R} C_{c,r}\, Y_{y,r}\, X_{x,r}\, T_{t,r}. \quad (2)$$
Tucker decomposition A Tucker decomposition (Tucker, 1966) is distinguished from CP by the Tucker core $G \in \mathbb{R}^{R_1 \times R_2 \times R_3 \times R_4}$. The Tucker rank is defined as the four-tuple $(R_1, R_2, R_3, R_4)$. Since the width and height dimensions of the convolutional kernel are small, it is computationally more efficient to contract the corresponding factor matrices with the Tucker core $G$ and form a new core $H = G \times_2 Y \times_3 X$, where $\times_n$ is the n-mode product (Appendix A.2):
$$\tilde{W}^{\text{Tucker}}_{c,y,x,t} = \sum_{r_1=1}^{R_1}\sum_{r_2=1}^{R_2}\sum_{r_3=1}^{R_3}\sum_{r_4=1}^{R_4} G_{r_1,r_2,r_3,r_4}\, C_{c,r_1} Y_{y,r_2} X_{x,r_3} T_{t,r_4} = \sum_{r_1=1}^{R_1}\sum_{r_4=1}^{R_4} H_{r_1,y,x,r_4}\, C_{c,r_1} T_{t,r_4}. \quad (3)$$
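A sketch of this core contraction with Tensorly’s mode_dot follows; note that Tensorly indexes modes from 0, so $\times_2$ and $\times_3$ above correspond to modes 1 and 2, and the dimensions below are arbitrary examples of ours:

```python
import torch
import tensorly as tl

tl.set_backend("pytorch")
R1, R2, R3, R4, H_k, W_k = 16, 3, 3, 16, 3, 3
G = torch.randn(R1, R2, R3, R4)                    # Tucker core
Y, X = torch.randn(H_k, R2), torch.randn(W_k, R3)  # small spatial factor matrices

# H = G x_2 Y x_3 X: fold the spatial factors into the core (0-indexed modes 1, 2).
H = tl.tenalg.mode_dot(tl.tenalg.mode_dot(G, Y, mode=1), X, mode=2)  # (R1, H_k, W_k, R4)
```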
Tensor Train decomposition Another alternative is the Tensor Train decomposition (Oseledets, 2011), which decomposes a given tensor as a linear chain of 2-way and 3-way tensors, where the first and last tensors are 2-way. The TT-rank in our four-dimensional case is the 3-tuple (R1, R2, R3):
$$\tilde{W}^{\text{TT}}_{c,y,x,t} = \sum_{r_1=1}^{R_1}\sum_{r_2=1}^{R_2}\sum_{r_3=1}^{R_3} C_{c,r_1}\, Y_{r_1,y,r_2}\, X_{r_2,x,r_3}\, T_{r_3,t}. \quad (4)$$
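To make Equations 2–4 explicit, a sketch reconstructing W̃ from each factorized structure with einsum is given below; the dimensions are arbitrary examples of ours, and the output indices i, j, k, l stand for c, y, x, t to avoid clashing with the rank letters:

```python
import torch

C_in, H_k, W_k, T_out = 64, 3, 3, 128

# CP (Equation 2): four factor matrices share a single rank index.
R = 16
C, Y, X, T = (torch.randn(d, R) for d in (C_in, H_k, W_k, T_out))
W_cp = torch.einsum("ir,jr,kr,lr->ijkl", C, Y, X, T)

# Tucker (Equation 3): a dense core couples four independent rank indices.
R1, R2, R3, R4 = 16, 3, 3, 16
G = torch.randn(R1, R2, R3, R4)
C, Y = torch.randn(C_in, R1), torch.randn(H_k, R2)
X, T = torch.randn(W_k, R3), torch.randn(T_out, R4)
W_tucker = torch.einsum("abef,ia,jb,ke,lf->ijkl", G, C, Y, X, T)

# Tensor Train (Equation 4): a linear chain of cores with shared boundary ranks.
C, Y = torch.randn(C_in, R1), torch.randn(R1, H_k, R2)
X, T = torch.randn(R2, W_k, R3), torch.randn(R3, T_out)
W_tt = torch.einsum("ia,ajb,bke,el->ijkl", C, Y, X, T)
```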
4 EXPERIMENTS
This section provides the implementation details and discusses the results of our empirical approach.
4.1 EXPERIMENTAL SETUP
Datasets The experiments are run on the datasets CIFAR-10 (Krizhevsky, 2009) and Fashion-MNIST (Xiao et al., 2017). These datasets are common classification benchmarks for testing TD in CNNs (Cheng et al., 2021; Denil et al., 2014; Wu et al., 2020; Garipov et al., 2016; Hawkins et al., 2021). For both datasets, the original training sets are split into the sets used for training and validation. The split is made such that equal class distributions are maintained. The details specific to the datasets are as follows: CIFAR-10 has 10 classes distributed equally across 60,000 images of 32 × 32 pixels with 3 color channels. After our validation split, there are 45,000 images in the training set and 5,000 in the validation set. The test set of 10,000 images remains unchanged. Fashion-MNIST has 10 classes distributed equally across 70,000 grayscale images of 28 × 28 pixels. After our validation split, there are 55,000 images in the training set and 5,000 in the validation set.
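A sketch of such a stratified split for CIFAR-10 is shown below; the random seed is our arbitrary choice:

```python
from sklearn.model_selection import train_test_split
from torchvision import datasets

full_train = datasets.CIFAR10("data", train=True, download=True)
train_idx, val_idx = train_test_split(
    list(range(len(full_train))),
    test_size=5_000,
    stratify=full_train.targets,  # keep equal class distributions in both splits
    random_state=0,
)
```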
Model architecture and training The models used are ResNet-18 (He et al., 2016) and GaripovNet (Garipov et al., 2016). ResNet is a well-performing state-of-the-art convolutional neural network. GaripovNet is a 7-layer convolutional neural network proposed in Garipov et al. (2016) and used by Hawkins et al. (2022) for image classification. These models enable comparison with other works within and beyond TD for deep learning (Gusak et al., 2019; Garipov et al., 2016; Hawkins et al., 2022; Chu & Lee, 2021; Kossaifi et al., 2020). The following hyperparameters are used: ResNet-18 is trained with batch size 128, for 300 epochs, with the Adam optimizer and a learning rate of $10^{-3}$. At epochs 100 and 150 the learning rate is multiplied by 0.1. GaripovNet is trained with the same settings as the original paper (Garipov et al., 2016): the model is trained with Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a learning rate of 0.1, multiplied by 0.1 at epochs 30, 60, and 90.
The validation set is used for early stopping and the selection of the training hyperparameters, i.e. learning rate, schedule, level of annealing, batch size, and optimizer. Training data is augmented with a random crop (padding with 4 pixels and cropping to the original size) and a random horizontal flip. All images are standardized per channel based on the mean and standard deviation over all training samples. Early stopping is applied both when training the baseline and when fine-tuning the decomposed model. The classification error on the test set is used for the performance errors P. To fine-tune after decomposition and obtain the performance errors P⋆, ResNet-18 is optimized for another 25 epochs, and GaripovNet for 10 epochs, using the last learning rate from training.
Decomposition choices We now explain the considered values for the decomposition choices H introduced in Section 3. For both neural network models, neither the first nor the last layer is decomposed, as these layers already contain a relatively small number of parameters. For GaripovNet the other five layers are part of L. For ResNet-18, L contains a selection of eight convolutional layers, details of which can be found in Appendix A.3. The set of TD methods considered is M = {CP,Tucker,Tensor Train}, discussed in Section 3.3. The set of compression levels is C = {10%, 25%, 50%, 75%, 90%}. Multiple levels of compression are considered because each neural network layer can have a different efficiency-performance trade-off (Lebedev et al., 2015). In the experiments, we evaluate ResNet-18 on CIFAR-10, GaripovNet on CIFAR-10, and GaripovNet on F-MNIST. We exclude ResNet-18 on F-MNIST as the dataset is not sufficiently challenging for this model, and compressing one layer does not have a meaningful impact on performance given the model’s size and skip-connections.
Variance The process of decomposing and fine-tuning is repeated for five independent runs for each choice of layer, decomposition method, and compression level to assess and report variance in the results. Note that due to the stochasticity of the ALS algorithm, the random initializations can result in different CP decomposition estimates. The variance in correlation shown in the plots without fine-tuning results from the randomness in the CP initialization. Fine-tuning adds additional variance through its use of batched SGD. The observed variance with fine-tuning accounts for both the randomness from CP initialization and from fine-tuning, thereby representing all sources of randomness in our methodology. All runs for a given model and dataset are based on the same pretrained weights, so these are not a source of reported variance. In total, this results in 600 measurements for ResNet-18 and 375 measurements for GaripovNet per dataset.
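These measurement counts follow directly from the grid size and the number of repetitions, as the following sketch verifies:

```python
from itertools import product

methods = ["cp", "tucker", "tt"]
compression_levels = [0.10, 0.25, 0.50, 0.75, 0.90]
runs = 5

for model, n_layers in [("ResNet-18", 8), ("GaripovNet", 5)]:
    grid = list(product(range(n_layers), methods, compression_levels))
    print(model, len(grid) * runs)  # ResNet-18: 600, GaripovNet: 375
```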
4.2 EXPERIMENTAL RESULTS
Impact of compression levels on correlation We start by calculating the correlation across layers and decomposition methods for multiple runs and averaging the results grouped by compression level. This is presented in Figure 1, where the bars show the average correlation τ between the Relative Weights error and the classification error. The correlation is based only on the Relative Weights measure, as this is the most common metric in recent literature (Lebedev et al., 2015; Hawkins et al., 2022; Liebenwein et al., 2021). The correlations are grouped by compression level, represented by different colors. The error bars are ±1 standard deviation, representing the variance over multiple runs.
Figure 1 shows that the larger the compression, the higher the correlation. This is a positive result for our use case: the more we compress, the more confident we can be that basing decomposition choices on the approximation error yields the optimal choice. Note, however, that a certain level of compression is needed before choices can be based on the approximation error. For both models and datasets, the correlation is small when the compression is only 10% or 25%. The variance in the correlation at smaller compression levels is also larger than at higher levels: when the compression is too small, its effect on model performance is too small relative to the observed variance, especially after fine-tuning. In the remainder of the experiments, we therefore focus on compression levels C = {50%, 75%, 90%}.
Comparison of approximation error measures Works such as Liebenwein et al. (2021) have used a single approximation error, e.g. Relative Weights, to identify which layer to compress next, implicitly assuming that relative errors between layers are indicative of relative differences in model performance. We here compare the various approximation error measures, testing the correlation with performance over all decomposition choices. In Figure 2, the correlation is calculated over the measurements of all combinations of layer, decomposition method, and compression level. The correlations are averaged over runs, and the ±1 standard deviation is likewise computed over the runs.
Figure 2 shows that the correlations are generally positive and significantly different from zero. This means that the decomposition choices can (to some extent) be based on the approximation error. There is one exception where the correlation is close to zero, namely the Absolute Weights measure on ResNet-18. The difference in correlations can be explained by the difference in approximation error between layers; a detailed explanation is provided in Appendix A.4. These results suggest that Absolute-based approximation errors, while they may show high correlations in some cases, are not generally a reliable indicator of future model performance, and that normalized measures should be used instead.
Comparing the different approximation error metrics, we observe the highest correlation with the performance for Relative Weights in all tested cases. Interestingly, the magnitude of the correlation for Feature-based measures is similar to or smaller than the correlation for Weight-based measures. Although the findings of Jaderberg et al. (2014) would suggest a stronger correlation for the features, at least before fine-tuning, we do not observe benefits for basing decomposition choices on the approximation error of the features rather than the weights. Possibly pretraining already ensures all the elements in the weight tensor are equally important for the target data distribution, thus a Weight-based error already reliably reflects resulting errors on the output features. Comparing the error bars with and without fine-tuning, the randomness from fine-tuning has a larger impact on the variance in correlation than the randomness from CP initialization. In summary, our results support the use of the Relative Weights approximation error to make decomposition choices.
Impact of fine-tuning Most works use fine-tuning to recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). The right subfigure of Figure 2 shows the mean and standard deviation of five correlations, per model and per dataset after fine-tuning. After fine-tuning, the correlation between the approximation error and the performance error is smaller than before fine-tuning for GaripovNet, as additional training adapts the model and reduces the performance gap between the different choices, but this effect is not observed for ResNet-18 where the correlation was already lower. However, for both models, there is still a clear positive correlation between the approximation error and the performance after fine-tuning. This means that decomposition choices can still be based on the approximation error when intending to perform fine-tuning later, even though different hyperparameters might be optimal without and with fine-tuning.
While the correlation is positive and significantly different from zero, the correlation is only around +0.5 for ResNet-18. We therefore investigate if the correlation is higher when only considering specific decomposition choices next.
Correlation across Layers vs. Methods In the previous experiments, we compared how decomposition choices over both layers and methods correlate with performance. Here we investigate whether the correlation is stronger if only one of these choices is considered. For instance, previous works often only include layers as a decomposition choice, and have not compared across decomposition methods. We compare the correlation over all choices (L × M × C) as before, to layers only (L × {m} × {c}, with reported results averaged over all m ∈ M and c ∈ C), and to methods only ({l} × M × {c}, with reported results averaged over all l ∈ L and c ∈ C). Figure 3 shows that before fine-tuning the approximation error has a lower correlation with model performance when considering layers only than when considering all decomposition choices. Not all layers of a neural network have the same efficiency-performance trade-off (Lebedev et al., 2015; Hawkins et al., 2021), so the correlation is lower when the decomposition method and compression level are fixed; it is better to combine layers with compression levels (and decomposition methods). However, fine-tuning recovers some of the correlation for layers. Across decomposition methods, the correlation before fine-tuning is comparable to the correlation over all decomposition choices. These results suggest that decomposition methods can be compared more reliably than layers alone before fine-tuning, even though the choice of method is not an optimization choice considered in previous works. Interestingly, for GaripovNet the correlation across decomposition methods drops significantly after fine-tuning. We find that this is due to difficulties in optimizing the CP-decomposed layers, since gradient flow through CP convolutions is a known problem (Silva & Lim, 2008; Lebedev et al., 2015), whereas ResNet does not suffer from this due to its skip connections. We conclude that (unlike current practice) network compression could consider multiple decomposition methods, as their approximation errors can be compared, though most reliably when aiming for compression without fine-tuning.
5 CONCLUSION
We have tested Assumption 1, and find that there is a positive correlation between the relative approximation error on the weights and the resulting performance error of the model for a wide range of TD choices, including layers, methods, and compression levels. We further find that using data to compute the approximation error on the features, rather than simply on the model weights directly, does not improve the correlation. Scaling the approximation error with the norm of the original tensor provides the highest and most stable correlation across all compared models and datasets. Our findings suggest that the Relative Weights approximation error is best suited to select among TD decomposition choices.
While these choices can be made across layers, TD methods, and compression levels, we observe that the correlation before fine-tuning is smaller when comparing between layers for a fixed method, than when comparing across methods (here: CP, Tucker, and Tensor Train) for a fixed layer. Integrating multiple types of decompositions within a network compression technique is therefore a potential direction for future work, although care has to be taken when the use case includes later fine-tuning, as the correlation for selecting across decomposition methods can degrade since backpropagation through certain factorized structures remains challenging.
Our experiments are limited to a set of decomposition choices and network layers commonly found in the TD literature. Future work could extend to other decompositions and other types of neural network layers, e.g. fully connected layers. While their weights are matrices, tensor decomposition has been applied to fully connected layers by reshaping the weight matrix into a higher-order tensor; the choice of reshaping then becomes an additional decomposition choice.
REPRODUCIBILITY
The authors find it important that this work is reproducible. To this end, the following efforts have been made: The datasets and train-validation splits are described in Section 4.1. The datasets are collected from PyTorch Vision. The models and hyperparameters used for training are covered in Section 4.1. The implementations of the baseline models are from the PyTorch Model Zoo. The models are factorized with Tensorly-Torch (Kossaifi et al., 2019b), using the CP initialization described in Section 4.1 and the ranks provided in Appendix A.1. The experimental setup is explained in Section 4.1. The calculation of metrics is formulated in Section 3. Finally, the code to reproduce these experiments is available at: https://github.com/JSchuurmans/tddl.
ACKNOWLEDGMENTS
Described results are made possible in part by TU Delft Cohesion subsidy, TERP Cohesion project 2020.
A APPENDIX
A.1 RANK AND COMPRESSION LEVEL
Tables 2 and 3 present the ranks that are used for GaripovNet (Garipov et al., 2016) and ResNet-18 (He et al., 2016) respectively. Note that in these tables, the Tucker rank includes the kernel ranks corresponding to the width and height, and the TT rank includes R0 = R4 = 1, which are left out of Equation 4 for conciseness.
A.2 N-MODE PRODUCT
The definition of the n-mode product $\times_n$ given by Kolda & Bader (2009) is used in this paper. The contraction of a tensor $X \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ with a matrix $Y \in \mathbb{R}^{Y \times R_n}$ along the n-th mode of the tensor is defined elementwise as:
$$(X \times_n Y)_{r_1,\cdots,r_{n-1},y,r_{n+1},\cdots,r_N} = \sum_{r_n=1}^{R_n} X_{r_1,\cdots,r_{n-1},r_n,r_{n+1},\cdots,r_N}\, Y_{y,r_n} \quad (5)$$
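A sketch of Equation 5 in PyTorch follows; tensorly.tenalg.mode_dot offers an equivalent, and note that both index modes from 0 while the equation above counts modes from 1:

```python
import torch

def mode_n_product(X, Y, n):
    """Contract mode n (0-indexed) of tensor X with matrix Y, as in Equation 5."""
    Xn = torch.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # unfold X along mode n
    out = Y @ Xn                                          # sum over r_n
    new_shape = (Y.shape[0],) + X.shape[:n] + X.shape[n + 1:]
    return torch.moveaxis(out.reshape(new_shape), 0, n)   # restore the mode order
```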
A.3 SELECTION OF RESNET-18 LAYERS
The following layers are considered for decomposition in ResNet-18:
• The last layer of the first two blocks.
• The first layer with stride two.
• The first layer after a 1×1 convolution.
• A 1×1 convolution with a similar number of parameters as the first choice of layer.
• The layers before and after that 1×1 convolution.
• The final two convolutional layers, as they are the largest convolutional layers.
A.4 DIFFERENCE BETWEEN LAYERS IN ABSOLUTE WEIGHTS FOR RESNET-18
Let us recall from Figure 2 that when the approximation error is based on Absolute Weights, its correlation with the performance error is close to zero for ResNet-18. This near-zero correlation is caused by differences between layers, as detailed below.
Figure 4 plots the approximation errors of Relative and Absolute Weights versus the performance error before and after fine-tuning. The points resulting from CP, Tucker, and Tensor Train with compression of 50%, 75%, and 90% are grouped per layer.
Layers 15, 19, and 28 have a smaller absolute weight error and a large performance error relative to layers 38, 60, and 63. Compare this to the data for Relative Weights, where layers 60 and 63 have a small relative weight error and a small performance error, and layer 38 has errors comparable to layers 15, 19, and 28. This leads to the correlation between Absolute Weights and performance errors being close to zero, while it is positive for Relative Weights. Therefore, the difference in correlations can be explained by the difference in approximation error between layers. | 1. What is the focus of the paper in terms of the relationship between approximation error and model performance?
2. What are the strengths of the proposed approach, particularly in terms of its ability to validate assumptions through extensive experiments?
3. What are the weaknesses of the paper regarding its contributions and significance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper performs an empirical study of the relationship between the approximation error of a model’s weight tensor and the model’s performance after compression. Multiple decomposition methods (CP, Tucker, and Tensor Train) and levels of compression are tested on several models and datasets.
Strengths And Weaknesses
Strength:
The paper conducts a large number of experiments to validate the assumption, namely that there is a positive correlation between the relative approximation error on the weights and the model performance for a wide range of TD choices, including layers, methods, and compression levels.
Weakness:
The contribution is not significant since the results are not surprising (e.g., the relative weight approximation error is better than the absolute or rescaled weight error), though I appreciate the engineering effort the authors spent on this work.
Clarity, Quality, Novelty And Reproducibility
The paper is clear and reproducible. The major concern is that the contribution is not significant.
Our proposed empirical evaluation procedure will evaluate a large set of hyperparameters H = {h1, h2, · · · } on multiple convolutional neural networks and datasets (see Section 4) for different options of approximation error metric (see Section 3.2). For a given model, dataset, and approximation error metric e, the procedure evaluates for each set of hyperparameter choices hi ∈ H the approximation error ai = e(W̃i,W), the model performance error pi on the validation split, and the model performance error p⋆i after additional fine-tuning on the training data. We thus obtain sets of measurements A = {a1, a2, · · · }, P = {p1, p2, · · · } and P⋆ = {p⋆1, p⋆2, · · · } for H. When comparing two sets of hyperparameters hi ∈ H and hj ∈ H, we want to establish if the set with the smaller approximation error results in a smaller performance error of the model. In other words, the concordance of pairs of measurements needs to be established. Concordant pairs have a larger (smaller) performance error when the approximation error is larger (smaller) between two sets of hyperparameter choices, i.e. i and j are concordant if ai > aj and pi > pj or if ai < aj and pi < pj , and discordant otherwise. Kendall’s τ is a measure for the rank correlation (Kendall, 1938) or ordinal association between two order sets, in our case between approximation errors e and a model performances P (or P⋆). To avoid confusion with the concept of tensor rank, we will refer to Kendall’s τ simply as correlation. For this correlation measure, the difference between the number of concordant pairs (k) and discordant pairs (d) is scaled with the binomial coefficient m(m− 1)/2 to account for the different ways two measurements can be sampled from a total of m measurements:
τ = 2(k − d)/(m(m− 1)). (1) Kendall’s τ can be interpreted as follows: τ = 1 indicates a perfect positive rank correlation, τ = 0 no correlation, and τ = −1 a strong negative correlation. For a set of hyperparameters H, a useful approximation error e would thus result in a τ close to ±1, indicating it is predictive of the model’s performance. Note that Kendall’s τ does not depend on assumptions about the underlying distribution, whereas Pearson correlation assumes a linear relationship between the two measurements. Kendall’s τ is used over Spearman’s ρ because the interpretation of con- and discordance pairs for Kendall’s τ is closely related to our use case of choosing between hyperparameter sets.
3.2 APPROXIMATION ERRORS
We now discuss various measures e(W̃,W) to quantify the approximation error. The basis is to compute some norm on the difference between these tensors, in this work we use the Frobenius norm as is common in the literature (Lebedev et al., 2015; Hawkins et al., 2022). We shall consider three options to scale the norm, which could help make the error more robust when comparing hypotheses with different layers. Additionally, we can consider two options to compute the error on, namely either directly the weights or on the features. In total, we shall thus explore six different approximation errors in this work. An overview is presented in Table 1.
Normalization The norm between the difference of the weights is referred to as absolute norm and is used in the objective function when decomposing the pretrained weights. The relative norm is used in TD literature to compare errors between different layers (Lebedev et al., 2015; Hawkins et al., 2022), as it is invariant to the size of the weights. Alternatively, the norm of the difference can be scaled to account for the number of parameters, while keeping the distance from the weights.
Target tensor The most common option is to compute approximation error on the decomposed layer’s weights, W. However, Jaderberg et al. (2014) achieved promising results basing the decomposition on the approximation error of the features. Errors in some elements of the weight tensor
might be more permissible if they do not affect the resulting feature space. We therefore also consider the expected error on the features F = X · W, which is the output tensor of the layer after convolving its input with weights W on input data X. Likewise, approximated weights W̃ result in an approximated F̃. In practice, computing the feature space requires input data and is computationally more demanding than computing the weight approximation error. However, it is potentially more representative of an approximation’s effect on the output, plus unlike a later fine-tuning step, it could even be used if only data without labels is available.
3.3 TENSOR DECOMPOSITION METHODS
Our experiments shall consider three popular decomposition methods for convolutional layers, namely CP (Denton et al., 2014; Jaderberg et al., 2014; Lebedev et al., 2015), Tucker (Kim et al., 2016), and Tensor Train (TT) (Garipov et al., 2016). During the decomposition step the decomposed weights are found by minimizing the approximation error between the pretrained weights and the estimated decomposition: argminW̃ ||W − W̃||. For CP this is done with ALS (Carroll & Chang, 1970; Harshman, 1972), for Tucker with HOSVD (De Lathauwer et al., 2000), and TT-SVD (Oseledets, 2011) for Tensor Train. The ALS algorithm requires a random initialization. We sample from a uniform distribution [0,1) using Tensorly (Kossaifi et al., 2019b). The desired compression level is achieved by finding the corresponding rank, using the package Tensorly-Torch (Kossaifi et al., 2019b). The ranks used for CP, Tucker, and TT are given in Appendix A.1. For completeness, we list all considered decompositions with a 4-way tensor W.
CP decomposition A rank-R CP decomposition (Hitchcock, 1927) sums R rank-one tensors:
W̃CPc,y,x,t = R∑
r=1
Cc,rYy,rXx,rTt,r. (2)
Tucker decomposition A Tucker decomposition (Tucker, 1966) is distinct from a CP by the Tucker core G ∈ RR1×R2×R3×R4 . The Tucker rank is defined as the four-tuple (R1, R2, R3, R4). Since the dimensions of the convolutional weights are small with respect to the width and height dimensions, it is computationally more efficient to contract these with the Tucker core G and form a new core H = G ×2 Y ×3 X , where ×n is the n-mode product (Appendix A.2):
W̃Tuckerc,y,x,t = R1∑
r1=1 R2∑ r2=1 R3∑ r3=1 R4∑ r4=1 Gr1,r2,r3,r4Cc,r1Yy,r2Xx,r3Tt,r4 = R1∑ r1=1 R4∑ r4=1 Hr1,y,x,r4Cc,r1Tt,r4 .
(3)
Tensor Train decomposition Another alternative is the Tensor Train decomposition (Oseledets, 2011), which decomposes a given tensor as a linear chain of 2-way and 3-way tensors, where the first and last tensors are 2-way. The TT-rank in our four-dimensional case is the 3-tuple (R1, R2, R3):
W̃TTc,y,x,t = R1∑
r1=1 R2∑ r2=1 R3∑ r3=1 Cc,r1Yr1,y,r2Xr2,x,r3Tr3,t. (4)
4 EXPERIMENTS
This section provides the implementation details and discusses the results of our empirical approach.
4.1 EXPERIMENTAL SETUP
Datasets The experiments are run on the datasets CIFAR-10 (Krizhevsky, 2009) and FashionMNIST (Xiao et al., 2017). These datasets are common classification benchmarks for testing TD in CNNs (Cheng et al., 2021; Denil et al., 2014; Wu et al., 2020; Garipov et al., 2016; Hawkins et al., 2021). For both datasets, the original training sets are split into the set used for training and validation. The split is made such that equal class distributions are maintained. The details specific to the datasets are as follows: CIFAR-10 has 10 classes distributed equally across 60,000 images of 32 × 32 pixels with 3 color channels. After our validation split, there are 45,000 images in the training set and 5,000 in the validation set. The test set of 10,000 images remains unchanged. Fashion-MNIST has 10 classes distributed equally across 70,000 grayscale images of 28×28 pixels. After our validation split, there are 55,000 images in the training set and 5,000 in the validation set.
Model architecture and training The models used are ResNet-18 (He et al., 2016) and GaripovNet (Garipov et al., 2016). ResNet is a well-performing state-of-the-art convolutional neural network. GaripovNet is a 7-layer convolutional neural network proposed in Garipov et al. (2016) and used by Hawkins et al. (2022) for image classification. These models enable comparison with other works within and beyond TD for deep learning (Gusak et al., 2019; Garipov et al., 2016; Hawkins et al., 2022; Chu & Lee, 2021; Kossaifi et al., 2020). The following hyperparameters are used: ResNet-18 is trained with batch size 128, for 300 epochs, with Adam optimizer and a learning rate of 10−3. At epochs 100 and 150 the learning rate is multiplied by 0.1. GaripovNet is trained with the same settings as the original paper Garipov et al. (2016). The model is trained with Stochastic Gradient Descent (SGD) with a momentum of 0.9 and a learning rate of 0.1, multiplied by 0.1 at epochs 30, 60, and 90.
The validation set is used for early stopping and the selection of the training hyperparameters, i.e. learning rate, schedule, level of annealing, batch size, and optimizer. Training data is augmented with a random crop (padding with 4 pixels and cropping to the original size) and a random horizontal flip. All images are standardized based on the training mean and standard deviation per channel overall training samples. Early stopping is applied for both training the baseline and fine-tuning the decomposed model. The classification error on the test set is used for performance errors P. To Fine-tune after decomposition and obtain performance errors P⋆, the ResNet-18 is optimized for another 25 epochs, and GaripovNet for 10 epochs, using the last learning rate from the training.
Decomposition choices We now explain the considered values for the decomposition choices H explained in Section 3. For both neural network models, neither the first nor the last layer will be decomposed, as these layers already contain a relatively small amount of parameters. For GaripovNet the other five layers are part of L. For ResNet-18, L contains a selection of eight convolutional layers, details of which can be found in Appendix A.3. The set of TD methods that will be considered is M = {CP,Tucker,Tensor Train}, which were discussed in Section 3.3. The set of compression levels is C = {10%, 25%, 50%, 75%, 90%}. Multiple levels of compression are considered as each neural network layer can have different efficiency-performance trade-offs (Lebedev et al., 2015). In the experiments, we evaluate ResNet-18 on CIFAR-10, GaripovNet on CIFAR-10, and GaripovNet on F-MNIST. We exclude ResNet-18 on F-MNIST as the dataset is not sufficiently challenging for this model, and compressing one layer does not lead to a viable impact on the performance due to the model’s size and skip-connections.
Variance The process of decomposing and fine-tuning is repeated for five independent runs for each choice of layer, decomposition method, and compression level to assess and report variance in the results. Note that due to the stochasticity of the ALS algorithms, the random initializations can result in different CP decomposition estimates. The variance in correlation shown in the plots without fine-tuning results from the randomness in the CP initialization. Fine-tuning adds additional variance through its use of batched SGD. The observed variance with fine-tuning accounts for both the randomness from CP initialization as well as from fine-tuning, thereby representing all sources of randomness in our methodology. All runs for a given model and dataset are based on the same pretrained weights, so this is not a source of reported variance. In total, this results in 600 measurements for ResNet-18 and 375 measurements for GaripovNet per dataset.
4.2 EXPERIMENTAL RESULTS
Impact of compression levels on correlation We start by calculating the correlation across the layers and decomposition methods for multiple runs and calculate the averages grouped by compression levels. This is presented in Figure 1, where the bars are the average correlation τ between the Relative Weights and the classification error. The correlation is only based on the Relative Weights, as this is the most common metric in recent literature (Lebedev et al., 2015; Hawkins et al., 2022; Liebenwein et al., 2021). The correlations are grouped by the different levels of compression and represented by different colors. The error bars are ±1 standard deviation, representing the variance from multiple runs.
In Figure 1, it can be seen that the larger the compression, the higher the correlation is. This is a positive result for our use case. In the end, we are interested in making decomposition choices when compressing. The more we compress, the higher the correlation and therefore the more certain we are that basing our choice on the approximation error results in the optimal choice. It can also be noted that a certain level of compression is needed to be able to make choices based on the approximation error. For both models and datasets, the correlation is small when the compression is only 10% and 25%. The variance in the correlation at smaller compression levels is larger than at higher compression levels. When the compression is too small the effect on the performance of the model is too small compared to the observed variance, especially after fine-tuning. In the remainder of the experiments, we therefore focus on compression levels of C = {50%, 75%, 90%}.
Comparison of approximation error measures Works such as Liebenwein et al. (2021) have used a single approximation error, e.g. Relative Weights, to identify which layer to compress next, implicitly assuming that relative errors between layers are indicative of the relative model performance differences. We here compare the various approximation error measures, testing the correlation with performance over all decomposition choices. In Figure 2, the correlation is calculated based on measurements of all combinations of layer, decomposition method, and compression level once. The correlations are averaged over runs, as well as the ±1 standard deviation is calculated over the runs.
Figure 2 shows that the correlations are generally positive and significantly different from zero. This means that the decomposition choices can (to some extent) be based on the approximation error. There is one exception where the correlation is close to zero, namely for the Absolute Weights measure on ResNet-18. The difference in correlations can be explained by the difference in approximation error between layers, a detailed explanation is provided in Appendix A.4. These results suggest that using Absolute-based approximation errors, while they may show high correlations in some cases, are not generally a reliable indicator for future model performance, and that normalized measures should be used instead.
Comparing the different approximation error metrics, we observe the highest correlation with the performance for Relative Weights in all tested cases. Interestingly, the magnitude of the correlation for Feature-based measures is similar to or smaller than the correlation for Weight-based measures. Although the findings of Jaderberg et al. (2014) would suggest a stronger correlation for the features, at least before fine-tuning, we do not observe benefits for basing decomposition choices on the approximation error of the features rather than the weights. Possibly pretraining already ensures all the elements in the weight tensor are equally important for the target data distribution, thus a Weight-based error already reliably reflects resulting errors on the output features. Comparing the error bars with and without fine-tuning, the randomness from fine-tuning has a larger impact on the variance in correlation than the randomness from CP initialization. In summary, our results support the use of the Relative Weights approximation error to make decomposition choices.
Impact of fine-tuning Most works use fine-tuning to recover some of the lost performance (Denton et al., 2014; Lebedev et al., 2015; Kim et al., 2016). The right subfigure of Figure 2 shows the mean and standard deviation of five correlations, per model and per dataset after fine-tuning. After fine-tuning, the correlation between the approximation error and the performance error is smaller than before fine-tuning for GaripovNet, as additional training adapts the model and reduces the performance gap between the different choices, but this effect is not observed for ResNet-18 where the correlation was already lower. However, for both models, there is still a clear positive correlation between the approximation error and the performance after fine-tuning. This means that decomposition choices can still be based on the approximation error when intending to perform fine-tuning later, even though different hyperparameters might be optimal without and with fine-tuning.
While the correlation is positive and significantly different from zero, the correlation is only around +0.5 for ResNet-18. We therefore investigate if the correlation is higher when only considering specific decomposition choices next.
Correlation across Layers vs. Methods In the previous experiments, we compared how decomposition choices on both different layers and methods correlated with performance. Here we investigate if the correlation is stronger if only one of these choices would be considered. For instance, previous works often only include layers as decomposition choice, and have not compared across decomposition methods. We compare correlation on all choices for both sets (L ×M × C) as before, to layers only (L× {m} × {c} with reported results averaged for all m ∈ M and c ∈ C), and to methods only ({l} ×M× {c} with reported results averaged for all l ∈ L and c ∈ C). Figure 3 shows that before fine-tuning the approximation error has a lower correlation with the performance of the model when considering layers only compared to all decomposition choices. Not all layers of a neural network have the same efficiency-performance trade-off (Lebedev et al., 2015; Hawkins et al., 2021). Therefore, the correlation is lower when we fix the decomposition method and compression level. It is better to combine layers with compression levels (and decomposition
methods). However, fine-tuning recovers some of the correlation for layers. Across decomposition methods, the correlation before fine-tuning is comparable to the correlation calculated across all decomposition choices. These results suggest decomposition methods can be compared better than just the layers before fine-tuning, although the former is not an optimization choice considered in previous works. Interestingly, for GaripovNet the correlation across decomposition methods drops significantly after fine-tuning. We find that this is due to difficulties in optimizing the CP decomposed layers, since the gradient flow through CP convolutions is a known problem (Silva & Lim, 2008; Lebedev et al., 2015), whereas ResNet does not suffer from this due to its skip connections. We conclude that (unlike current practice) network compression could consider multiple decomposition methods as their approximation errors can be compared, though most reliably when aiming for compression without fine-tuning.
5 CONCLUSION
We have tested Assumption 1, and find that there is a positive correlation between the relative approximation error on the weights and the resulting performance error of the model for a wide range of TD choices, including layers, methods, and compression levels. We further find that using data to compute the approximation error on the features, rather than simply on the model weights directly, does not improve the correlation. Scaling the approximation error with the norm of the original tensor provides the highest and most stable correlation across all compared models and datasets. Our findings suggest that the Relative Weights approximation error is best suited to select among TD decomposition choices.
While these choices can be made across layers, TD methods, and compression levels, we observe that the correlation before fine-tuning is smaller when comparing between layers for a fixed method, than when comparing across methods (here: CP, Tucker, and Tensor Train) for a fixed layer. Integrating multiple types of decompositions within a network compression technique is therefore a potential direction for future work, although care has to be taken when the use case includes later fine-tuning, as the correlation for selecting across decomposition methods can degrade since backpropagation through certain factorized structures remains challenging.
Our experiments are limited to a set of decomposition choices and network layers commonly found in the TD literature. Future work can extend to other decompositions and other types of neural network layers, e.g. fully connected layers. While the weights are matrices, tensor decomposition has been applied to fully connected layers by reshaping the weight matrix into a higher-order tensor. The choice of reshaping then becomes an additional decomposition choice.
REPRODUCIBILITY
The authors find it important that this work is reproducible. To this extent the following efforts have been made: The datasets and test-validation splits are described in Section 4.1. The datasets are collected from PyTorch Vision. The models and hyperparameters used for training are covered in Section 4.1. The implementations of the baseline models are from the PyTorch Model Zoo. The models are factorized with Tensorly-Torch (Kossaifi et al., 2019b), using the CP initialization described in Section 4.1 and the ranks provided in Appendix A.1. The experimental setup is explained in Section 4.1. The calculation of metrics is formulated in Section 3. Finally, the code to reproduce these experiments is available at: https://github.com/JSchuurmans/tddl.
ACKNOWLEDGMENTS
Described results are made possible in part by TU Delft Cohesion subsidy, TERP Cohesion project 2020.
A APPENDIX
A.1 RANK AND COMPRESSION LEVEL
Tables 2 and 3 present the ranks that are used for GaripovNet (Garipov et al., 2016) and ResNet-18 (He et al., 2016) respectively. Note that in these tables, the Tucker rank includes the kernel ranks corresponding to the width and height, and the TT rank includes R0 = R4 = 1, which are left out of Equation 4 for conciseness.
A.2 N-MODE PRODUCT
The definition of the n-mode product $\times_n$ given by Kolda & Bader (2009) is used in this paper. The contraction of a tensor $\mathcal{X} \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ with a matrix $Y \in \mathbb{R}^{Y \times R_n}$ along the $n$th mode of the tensor is defined elementwise as:

$$(\mathcal{X} \times_n Y)_{r_1, \cdots, r_{n-1}, y, r_{n+1}, \cdots, r_N} = \sum_{r_n=1}^{R_n} \mathcal{X}_{r_1, \cdots, r_{n-1}, r_n, r_{n+1}, \cdots, r_N} \, Y_{y, r_n} \tag{5}$$
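For illustration, here is a small NumPy sketch of the $n$-mode product in Equation 5; the function name is ours. `np.tensordot` contracts mode $n$ of the tensor with the second mode of the matrix, after which the new axis is moved back into position $n$.

```python
import numpy as np

def mode_n_product(X: np.ndarray, Y: np.ndarray, n: int) -> np.ndarray:
    """X: tensor of shape (R_1, ..., R_N); Y: matrix of shape (J, R_n)."""
    out = np.tensordot(X, Y, axes=(n, 1))  # contracted axis ends up last
    return np.moveaxis(out, -1, n)         # move it back to position n

# Contracting mode 1 of a (2, 3, 4) tensor with a (5, 3) matrix gives (2, 5, 4).
X = np.random.rand(2, 3, 4)
Y = np.random.rand(5, 3)
assert mode_n_product(X, Y, 1).shape == (2, 5, 4)
```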
A.3 SELECTION OF RESNET-18 LAYERS
The following layers are considered for decomposition in ResNet-18:

• The last layer of the first two blocks.
• The first layer with stride two.
• The first layer after a 1x1 convolution.
• A 1x1 convolution with a similar number of parameters as the first choice of layer.
• The layers before and after the 1x1 convolution.
• The final two convolutional layers, as they are the largest convolutional layers.
A.4 DIFFERENCE BETWEEN LAYERS IN ABSOLUTE WEIGHTS FOR RESNET-18
Let us recall from Figure 2 that when basing the approximation error on Absolute Weights, the correlation with the performance error is close to zero for ResNet-18. This near-zero correlation is due to differences between layers.
Figure 4 plots the approximation errors of Relative and Absolute Weights versus the performance error before and after fine-tuning. The points resulting from CP, Tucker, and Tensor Train with compression of 50%, 75%, and 90% are grouped per layer.
Layers 15, 19, and 28 have a smaller absolute weight error and a large performance error relative to layers 38, 60, and 63. Compare this to the data for Relative Weights, where layers 60 and 63 have a small relative weight error and a small performance error, and layer 28 has errors comparable to layers 15 and 19. This leads to the correlation between Absolute Weights and performance errors being close to zero, while it is positive for Relative Weights. Therefore, the difference in correlations can be explained by the difference in approximation error between layers.
Summary Of The Paper
The paper considers the use of tensor decomposition (TD) for compression of the weight tensors in CNNs. It considers the problem of choosing the best compression hyperparameters (which layer to compress, which type of TD to use) for a given level of compression. In particular, it investigates if the decomposition error for a given choice can be used as an indicator of how well that particular decomposition choice will perform both without and with fine-tuning when the CNN performs its intended task. If the decomposition error does provide such an indication, it would simplify the task of choosing compression hyperparameters since it would be enough to consider the weight tensor in isolation, rather than how it interacts with the rest of the network.
Strengths And Weaknesses
--- Strengths ---
S1. The paper provides a good overview of previous works that use TD for compression of NNs.
S2. The problem the paper tries to solve is relevant in practice if someone wants to use TD to compress a CNN, but doesn't have any prior insight into what type of TD would work well for that particular network.
S3. Although some details are hard to understand, the paper is overall well-written and the different experiments are interesting.
--- Weaknesses ---
W1. Some details are hard to understand. I list these below.
a. In Fig 2: What is the difference between Relative Weights and Scaled Weights? Similarly, what is the difference between Relative Features and Scaled Features? Based on Table 1, aren't the relative measures the same as the scaled measures but with the particular choice $n_W = \|W\|$ and $n_F = \|F\|$? If this is correct, then how are $n_W$ and $n_F$ different for the scaled measures? Also, Table 1 is confusing since $n_W$ and $n_F$ aren't introduced anywhere.
b. In the paragraph "Comparison of approximation error measures" on page 7, in the sentence "In Figure 2, the correlation is calculated ... and rank once.": By "rank", do you mean compression level (i.e., one of the three choices {50%, 75%, 90%})? Calling this rank is a bit confusing, since there's many choices of rank that potentially could yield a certain compression ratio, but only three compression ratios under consideration.
c. Related to the point b above: With 3 types of decomposition and 3 levels of compression and 5 convolutional layers for ResNet-18, shouldn't there be 5 * 3 * 3 = 45 different measurements rather than 40? Similarly, since GaripovNet has 8 layers to choose from, shouldn't there be 8 * 3 * 3 = 72 different measurements for that network rather than 50? Please clarify.
d. In Sec 3, the iterative approach is a bit hard to understand. For example, if my network has 10 layers and I want to compress 5 of them, do I run 5 iterations of the method, compressing one layer each iteration out of the remaining uncompressed layers? Does the iterative method have a target compression, and how does that impact the number of layers to compress? Perhaps you can add a more detailed algorithm for how this works in the supplement?
Clarity, Quality, Novelty And Reproducibility
--- Clarity, Quality ---
The paper is well-written overall, although there are a few places that are unclear. In addition to those points listed under weaknesses above, a few more minor comments follow below.
The sentence "Across decomposition methods, the correlation before fine-tuning is comparable for all decomposition choices." on page 9 doesn't make sense.
"Rank" is used both as in tensor decomposition rank, and when discussion the rank correlation throughout the paper. To avoid confusion, I would avoid using "rank" in the second sense, i.e., in reference to the correlation measure.
In the supplement, I think you should add a definition of the notation $\times_n$ as well as a definition of the Tucker decomposition when the first 2 modes aren't contracted with the core tensor, just so it's clear what "standard" Tucker looks like.
The colons in the subscripts in Eq (4) are confusing, since this is usually used to denote all indices along a certain mode. It would be better if you just explain that $C$ and $T$ are in fact just 2-way tensors here and remove the colons.
The figures are a bit blurry/pixelated. I would recommend using proper vector graphics.
The usage of the term "quantization" in Sec 5 is confusing since it's different from how it's typically used in ML; see e.g. https://pytorch.org/docs/stable/quantization.html. I think it would be better to just call it "reshaping" to avoid confusion.
Some minor typos:
In the 1st sentence of the 2nd paragraph on page 2: The word "make" should be removed. Also "TD decomposition" sounds strange since this reads "tensor decomposition decomposition".
Last sentence before start of Sec 3.1: TDs -> TD
There's a period missing in the Fig 1 caption.
--- Novelty ---
The paper doesn't present any novel method, but it does provide a nice empirical investigation of the problem.
--- Reproducibility ---
If the code is released as promised, the paper will be reproducible.
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Abstract
Learning representations for individual instances when only bag-level labels are available is a fundamental challenge in multiple instance learning (MIL). Recent works have shown promising results using contrastive self-supervised learning (CSSL), which learns to push apart representations corresponding to two different randomly-selected instances. Unfortunately, in real-world applications such as medical image classification, there is often class imbalance, so randomly-selected instances mostly belong to the same majority class, which precludes CSSL from learning inter-class differences. To address this issue, we propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), which improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels. The framework employs a novel self-paced sampling strategy to ensure the accuracy of pseudo labels. We evaluate ItS2CLR on three medical datasets, showing that it improves the quality of instance-level pseudo labels and representations, and outperforms existing MIL methods in terms of both bag- and instance-level accuracy.
1 INTRODUCTION
The goal of multiple instance learning (MIL) is to perform classification on data that is arranged in bags of instances. Each instance is either positive or negative, but these instance-level labels are not available during training; only bag-level labels are available. A bag is labeled as positive if any of the instances in it are positive, and negative otherwise. An important application of MIL is cancer diagnosis from histopathology slides. Each slide is divided into hundreds or thousands of tiles but typically only slide-level labels are available (Courtiol et al., 2018; Campanella et al., 2019; Li et al., 2021; Chen and Krishnan, 2022; Zhang et al., 2022; Lu et al., 2021).
Histopathology slides are typically very large, in the order of gigapixels (the resolution of a typical slide can be as high as $10^5 \times 10^5$ pixels), so end-to-end training of deep neural networks is typically infeasible due to memory limitations of GPU hardware. Consequently, state-of-the-art approaches (Campanella et al., 2019; Li et al., 2021; Zhang et al., 2022; Lu et al., 2021; Shao et al., 2021) utilize a two-stage learning pipeline: (1) a feature-extraction stage where each instance is mapped to a representation which summarizes its content, and (2) an aggregation stage where the representations extracted from all instances in a bag are combined to produce a bag-level prediction (Figure 1). Notably, our results indicate that even in rare settings where end-to-end training is possible, this pipeline is still superior (see Section 4.3).
In this work, we focus on a fundamental challenge in MIL: how to train the feature extractor. Currently, there are three main strategies to perform feature-extraction, which have significant shortcomings. (1) Pretraining on a large natural image dataset such as ImageNet (Shao et al., 2021; Lu et al., 2021) is problematic for medical applications because features learned from natural images may generalize poorly to other domains (Lu et al., 2020). (2) Supervised training using bag-level labels as instance-level labels is effective if positive bags contain mostly positive instances (Lerousseau et al., 2020; Xu et al., 2019; Chikontwe et al., 2020), but in many medical datasets this is not the case (Bejnordi et al., 2017; Li et al., 2021). (3) Contrastive self-supervised learning (CSSL) outperforms prior methods (Li et al., 2021; Ciga et al., 2022), but is not as effective in settings with heavy class imbalance, which are of crucial importance in medicine. CSSL operates by pushing apart the representations of different randomly selected instances. When positive bags contain
mostly negative instances, CSSL training ends up pushing apart negative instances from each other, which precludes it from learning features that distinguish positive samples from the negative ones (Figure 2). We discuss this finding in Section 2.
Our goal is to address the shortcomings of current feature-extraction methods. We build upon several key insights. First, it is possible to extract instance-level pseudo labels from trained MIL models, which are more accurate than assigning the bag-level labels to all instances within a positive bag. Second, we can use the pseudo labels to finetune the feature extractor, improving the instance-level representations. Third, these improved representations result in improved bag-level classification and more accurate instance-level pseudo labels. These observations are utilized in our proposed framework, Iterative Self-Paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), as illustrated in Figure 1. After initializing the features with CSSL, we iteratively improve them via supervised contrastive learning (Khosla et al., 2020) using pseudo labels inferred by the aggregator. This feature refinement utilizes pseudo labels sampled according to a novel self-paced strategy, which ensures that they are sufficiently accurate (see Section 3.2). In summary, our contributions are the following:
1. We propose ItS2CLR – a novel MIL framework where instance features are iteratively improved using pseudo labels extracted from the MIL aggregator. The framework combines supervised contrastive learning with a self-paced sampling scheme to ensure that pseudo labels are accurate.
2. We demonstrate that the proposed approach outperforms existing MIL methods in terms of bag- and instance-level accuracy on three real-world medical datasets relevant to cancer diagnosis: two histopathology datasets and a breast ultrasound dataset. It also outperforms alternative finetuning methods, such as instance-level cross-entropy minimization and end-to-end training.
3. In a series of controlled experiments, we show that ItS2CLR is effective when applied to different feature-extraction architectures and when combined with different aggregators.
2 CSSL MAY NOT LEARN DISCRIMINATIVE FEATURES IN MIL
Recent MIL approaches use contrastive self-supervised learning (CSSL) to train the feature extractor (Li et al., 2021; Saillard et al., 2021; Rymarczyk et al., 2021). In this section, we show that CSSL has a crucial limitation in realistic MIL settings, which precludes it from learning discriminative features. CSSL aims to learn a representation space where samples from the same class are close to each other, and samples from different classes are far from each other, without access to class labels. This is achieved by minimizing the InfoNCE loss (Oord et al., 2018).
$$\mathcal{L}_{\text{CSSL}} = \mathbb{E}_{x,\, x^{\text{aug}},\, \{x_i^{\text{diff}}\}_{i=1}^{n}} \left[ -\log \frac{\exp\left(f_\psi(x) \cdot f_\psi(x^{\text{aug}})/\tau\right)}{\exp\left(f_\psi(x) \cdot f_\psi(x^{\text{aug}})/\tau\right) + \sum_{i=1}^{n} \exp\left(f_\psi(x) \cdot f_\psi(x_i^{\text{diff}})/\tau\right)} \right], \tag{1}$$
where $f_\psi = \psi \circ f$, in which $f : \mathbb{R}^m \to \mathbb{R}^d$ is the feature extractor mapping the input data to a representation, $\psi : \mathbb{R}^d \to \mathbb{R}^{d'}$ is a projection head with a feed-forward network and $\ell_2$ normalization, and $\tau$ is a temperature hyperparameter. The expectation is taken over samples $x \in \mathbb{R}^m$ drawn uniformly from the training set. Minimizing the loss brings the representation of an instance $x$ closer to the representation of its random augmentation, $x^{\text{aug}}$, and pushes the representation of $x$ away from the representations of $n$ other examples $\{x_i^{\text{diff}}\}_{i=1}^{n}$ in the training set. A key assumption in CSSL is that $x$ belongs to a different class than most of the randomly-sampled examples $x_1^{\text{diff}}, \ldots, x_n^{\text{diff}}$. This usually holds in standard classification datasets with many classes such as ImageNet (Deng et al., 2009), but not in MIL tasks relevant to medical diagnosis, where a majority of instances are negative (e.g. 95% in Camelyon16). Hence, most terms in the sum $\sum_{i=1}^{n} \exp\left(f_\psi(x) \cdot f_\psi(x_i^{\text{diff}})/\tau\right)$ in the loss in Equation 1 correspond to pairs of examples $(x, x_i^{\text{diff}})$ both belonging to the negative class. Therefore, minimizing the loss mostly pushes apart the representations of negative instances, as illustrated in the top panel of Figure 2. This is an example of class collision (Arora et al., 2019; Chuang et al., 2020), a general problem in CSSL, which has been shown to impair performance on downstream tasks (Ash et al., 2021; Zheng et al., 2021).
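As an illustration, the following is a minimal PyTorch sketch of the InfoNCE loss in Equation 1. The batched layout and the names `z`, `z_aug`, and `z_diff` (the $\ell_2$-normalized projections of $x$, $x^{\text{aug}}$, and the $n$ other examples) are our assumptions, not part of any released implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z: torch.Tensor, z_aug: torch.Tensor,
                  z_diff: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z: (B, D), z_aug: (B, D), z_diff: (B, n, D); all l2-normalized."""
    pos = (z * z_aug).sum(dim=1, keepdim=True) / tau    # (B, 1)
    neg = torch.einsum("bd,bnd->bn", z, z_diff) / tau   # (B, n)
    logits = torch.cat([pos, neg], dim=1)               # (B, 1 + n)
    # The positive pair sits at index 0 of each row, so cross-entropy with
    # target 0 reproduces -log(exp(pos) / (exp(pos) + sum exp(neg))).
    labels = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)
```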
Class collision makes CSSL learn representations that are not discriminative between classes. In order to study this phenomenon, we report the average inter-class distances and intra-class deviations for representations learned by CSSL on Camelyon16 in Table 1. The inter-class distance reflects how far the instance representations from different classes are apart; the intra-class distance reflects the variation of instance representations within each class. As predicted, the intra-class deviation corresponding to the representations of negative instances learned by CSSL is large. Representations learned by ItS2CLR have larger inter-class distance (more separated classes) and smaller intra-class deviation (less variance among instances belonging to the same class) than those learned by CSSL. This suggests that the features learned by ItS2CLR are more discriminative, which is confirmed by the results in Section 4.
Note that using bag-level labels does not solve the problem of class collision. When $x$ is negative, even if we select $\{x_i^{\text{diff}}\}_{i=1}^{n}$ from the positive bags in Equation 1, most of the selected instances
will still be negative. Overcoming the class-collision problem requires explicitly detecting positive instances. This motivates our proposed framework, described in the following section.
3 MIL VIA ITERATIVE SELF-PACED SUPERVISED CONTRASTIVE LEARNING
Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR) addresses the limitation of contrastive self-supervised learning (CSSL) described in Section 2. ItS2CLR relies on latent variables indicating whether each instance is positive or negative, which we call instance-level pseudo labels. To estimate pseudo labels, we use instance-level probabilities obtained from the MIL aggregator (we use the aggregator from DS-MIL (Li et al., 2021), but our framework is compatible with any aggregator that generates instance-level probabilities). The pseudo labels are obtained by binarizing the probabilities according to a threshold $\eta \in (0, 1)$, which is a hyperparameter. ItS2CLR uses the pseudo labels to finetune the feature extractor (initialized using CSSL). In the spirit of iterative self-training techniques (Zhong et al., 2019; Wei et al., 2020; Liu et al., 2022), we alternate between refining the feature extractor, re-computing the pseudo labels, and training the aggregator, as described in Algorithm 1. A key challenge is that the pseudo labels are not completely accurate, especially at the beginning of the training process. To address the impact of incorrect pseudo labels, we apply a contrastive loss to finetune the feature extractor (see Section 3.1), where the contrastive pairs are selected according to a novel self-paced learning scheme (see Section 3.2). The right panel of Figure 1 shows that our approach iteratively improves the pseudo labels on the Camelyon16 dataset (Bejnordi et al., 2017). This finetuning only requires a modest increment in computational time (see Appendix A.3).
3.1 SUPERVISED CONTRASTIVE LEARNING WITH PSEUDO LABELS
To address the class collision problem described in Section 2, we leverage supervised contrastive learning (Pantazis et al., 2021; Dwibedi et al., 2021; Khosla et al., 2020) combined with the pseudo labels estimated by the aggregator. The goal is to learn discriminative representations by pulling together the representations corresponding to instances in the same class, and pushing apart those belong to instances of different classes. For each anchor instance x selected for contrastive learning, we collect a set Sx believed to have the same label as x, and a set Dx believed to have a different label to x. These sets are depicted in the bottom panel of Figure 2. The supervised contrastive loss corresponding to x is defined as:
$$\mathcal{L}_{\text{sup}}(x) = \frac{1}{|\mathcal{S}_x|} \sum_{x_s \in \mathcal{S}_x} -\log \frac{\exp\left(f_\psi(x) \cdot f_\psi(x_s)/\tau\right)}{\sum_{x_{s'} \in \mathcal{S}_x} \exp\left(f_\psi(x) \cdot f_\psi(x_{s'})/\tau\right) + \sum_{x_d \in \mathcal{D}_x} \exp\left(f_\psi(x) \cdot f_\psi(x_d)/\tau\right)}. \tag{2}$$

In Section 3.2, we explain how to select $x$, $\mathcal{S}_x$ and $\mathcal{D}_x$ to ensure that the selected samples have high-quality pseudo labels.
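For concreteness, below is a minimal PyTorch sketch of Equation 2 for a single anchor; `z_anchor`, `z_same`, and `z_diff` (our names) hold the $\ell_2$-normalized projections of $x$, $\mathcal{S}_x$, and $\mathcal{D}_x$.

```python
import torch

def sup_con_loss(z_anchor: torch.Tensor, z_same: torch.Tensor,
                 z_diff: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z_anchor: (D,); z_same: (|S_x|, D); z_diff: (|D_x|, D); l2-normalized."""
    sim_same = z_same @ z_anchor / tau  # similarities to the same-label set
    sim_diff = z_diff @ z_anchor / tau  # similarities to the different-label set
    log_denom = torch.logsumexp(torch.cat([sim_same, sim_diff]), dim=0)
    # Average of -log(exp(sim_s) / denom) over the same-label set S_x.
    return (log_denom - sim_same).mean()
```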
A tempting alternative to supervised contrastive learning is to train the feature extractor on pseudo labels using a standard cross-entropy loss. However, in Section 4.3 we show that this leads to substantially worse performance in the downstream MIL classification task due to memorization of incorrect pseudo labels. In contrast, self-paced sampling lets us choose contrastive pairs with the most confident (even certain) pseudo labels for supervised contrastive learning.

Algorithm 1 Iterative Self-Paced Supervised Contrastive Learning (ItS2CLR)
Require: Feature extractor $f$, projection head $\psi$, MIL aggregator $g_\phi$, where $\phi$ is an instance classifier
Require: Bags $\{X_b\}_{b=1}^{B}$, bag-level labels $\{Y_b\}_{b=1}^{B}$
1: $f^{(0)} \leftarrow f_{\text{SSL}}$ ▷ Initialize $f$ with CSSL-pretrained weights
2: for $t = 0$ to $T$ do
3:   $h_k^b \leftarrow f^{(t)}(x_k^b)$, $\forall x_k^b \in X_b$, $\forall b$ ▷ Extract instance representations
4:   $H_b \leftarrow \{h_k^b\}_{k=1}^{K_b}$, $\forall b$ ▷ Group instance embeddings into bags
5:   $g_\phi^{(t)} \leftarrow$ Train with $\{H_b\}_{b=1}^{B}$ and $\{Y_b\}_{b=1}^{B}$ ▷ Train the aggregator
6:   $\text{AUC}_{\text{val}}^{(t)} \leftarrow$ bag-level AUC on the validation set
7:   if $\text{AUC}_{\text{val}}^{(t)} \geq \max_{t' \leq t} \{\text{AUC}_{\text{val}}^{(t')}\}$ then ▷ If bag prediction improves
8:     $\hat{y}_k^b \leftarrow \mathbb{1}\{\phi^{(t)}(h_k^b) > \eta\}$, $\forall x_k^b \in X_b$, $\forall b$ ▷ Update instance pseudo labels
9:   end if
10:  $f_\psi^{(t+1)} \leftarrow \arg\min_{f_\psi} \mathcal{L}_{\text{sup}}(f_\psi^{(t)})$ ▷ Optimize feature extractor via Eq. (2)
11: end for
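The control flow of Algorithm 1 can be summarized in a few lines of Python. The sketch below is purely illustrative: all helpers (`extract_features`, `train_aggregator`, `bag_auc`, `finetune_contrastive`, and the aggregator's `instance_probs`) are hypothetical stand-ins for the steps described above, passed in as callables.

```python
def its2clr(f, bags, bag_labels, val_bags, val_labels,
            extract_features, train_aggregator, bag_auc,
            finetune_contrastive, eta: float = 0.3, T: int = 50):
    best_auc, pseudo = float("-inf"), None
    for t in range(T + 1):
        feats = [extract_features(f, bag) for bag in bags]  # steps 3-4
        agg = train_aggregator(feats, bag_labels)           # step 5
        auc = bag_auc(agg, f, val_bags, val_labels)         # step 6
        if auc >= best_auc:                                 # steps 7-9
            best_auc = auc
            # Binarize instance probabilities with threshold eta (step 8).
            pseudo = [[int(p > eta) for p in agg.instance_probs(h)]
                      for h in feats]
        f = finetune_contrastive(f, bags, pseudo, epoch=t)  # step 10, Eq. (2)
    return f, agg
```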
3.2 SAMPLING VIA SELF-PACED LEARNING
A key challenge in ItS2CLR is to improve the accuracy of instance-level pseudo labels without ground-truth labels. This is achieved by finetuning the feature extractor on a carefully-selected subset of instances. We select the anchor instance $x$ and the corresponding sets $\mathcal{S}_x$ and $\mathcal{D}_x$ (defined in Section 3.1 and Figure 2) building upon two key insights: (1) the negative bags only contain negative instances; (2) the probabilities used to build the pseudo labels are indicative of their quality: instances with higher predicted probabilities usually have more accurate pseudo labels (Zou et al., 2018a; Liu et al., 2020; 2022).
Let $\mathcal{X}_{\text{neg}}^-$ denote all instances within the negative bags. By definition of MIL, we can safely assume that all instances in $\mathcal{X}_{\text{neg}}^-$ are negative. In contrast, positive bags contain both positive and negative instances. Let $\mathcal{X}_{\text{pos}}^+$ and $\mathcal{X}_{\text{pos}}^-$ denote the sets of instances in positive bags with positive and negative pseudo labels respectively. During an initial warm-up lasting $T_{\text{warm-up}}$ epochs, we sample anchor instances $x$ only from $\mathcal{X}_{\text{neg}}^-$ to ensure that they are indeed all negative. For each such instance, $\mathcal{S}_x$ is built by sampling instances from $\mathcal{X}_{\text{neg}}^-$, and $\mathcal{D}_x$ is built by sampling from $\mathcal{X}_{\text{pos}}^+$. After the warm-up phase, we start sampling anchor instances from $\mathcal{X}_{\text{pos}}^+$ and $\mathcal{X}_{\text{pos}}^-$. To ensure that these instances have accurate pseudo labels, we only consider the top-$r$% instances with the highest predicted probabilities in each of these sets, which we call $\mathcal{X}_{\text{pos}}^+(r)$ and $\mathcal{X}_{\text{pos}}^-(r)$ respectively, as illustrated by Figure 5 (the ratio of positive-to-negative anchors is a fixed hyperparameter $p^+$). For each anchor $x$, the same-label set $\mathcal{S}_x$ is sampled from $\mathcal{X}_{\text{pos}}^+(r)$ if $x$ is positive and from $\mathcal{X}_{\text{neg}}^- \cup \mathcal{X}_{\text{pos}}^-(r)$ if $x$ is negative. The different-label set $\mathcal{D}_x$ is sampled from $\mathcal{X}_{\text{neg}}^- \cup \mathcal{X}_{\text{pos}}^-(r)$ if $x$ is positive, and from $\mathcal{X}_{\text{pos}}^+(r)$ if $x$ is negative. To exploit the improvement of the instance representations during training, we gradually increase $r$ to include more instances from positive bags, which can be interpreted as a self-paced easy-to-hard learning scheme (Kumar et al., 2010; Jiang et al., 2014; Zou et al., 2018b). Let $t$ and $T$ denote the current epoch and the total number of epochs respectively. For $T_{\text{warm-up}} < t \leq T$, we set:
$$r := r_0 + \alpha_r \left(t - T_{\text{warm-up}}\right), \quad \text{where} \quad \alpha_r = \frac{r_T - r_0}{T - T_{\text{warm-up}}}, \tag{3}$$
and $r_0$ and $r_T$ are hyperparameters. Details on tuning these hyperparameters are provided in Appendix A.4. As demonstrated in the right panel of Figure 1 (see also Appendix B.1), this scheme indeed results in an improvement of the pseudo labels (and hence of the underlying representations).
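A short Python sketch of this sampling scheme is given below; `r_schedule` implements Equation 3 and `top_r_percent` keeps the most confidently pseudo-labeled instances from positive bags. The function names and the list-based interface are illustrative assumptions.

```python
def r_schedule(t: int, t_warmup: int, t_total: int,
               r0: float = 0.2, r_T: float = 0.8) -> float:
    """Equation 3: linearly grow r from r0 to r_T after the warm-up."""
    if t <= t_warmup:
        return 0.0  # warm-up: anchors are drawn from negative bags only
    alpha = (r_T - r0) / (t_total - t_warmup)
    return r0 + alpha * (t - t_warmup)

def top_r_percent(instances: list, pos_probs: list, r: float,
                  positive: bool) -> list:
    """Keep the top-r fraction with the most confident pseudo labels.

    For X+_pos we keep the highest positive-class probabilities; for X-_pos
    we keep the lowest (i.e. the most confidently negative instances).
    """
    keep = max(1, int(r * len(instances)))
    order = sorted(range(len(instances)), key=lambda i: pos_probs[i],
                   reverse=positive)
    return [instances[i] for i in order[:keep]]
```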
4 EXPERIMENTS
We evaluate ItS2CLR on three MIL datasets described in Section 4.1. In Section 4.2 we show that ItS2CLR consistently outperforms approaches that use CSSL feature-extraction by a substantial margin on all three datasets for different choices of aggregators. In Section 4.3 we show that ItS2CLR outperforms alternative finetuning approaches based on cross-entropy loss minimization and end-to-end training across a wide range of settings where the prevalence of positive instances and bag size vary. In Section 4.4, we show that ItS2CLR is able to improve features obtained from a variety of pretraining schemes and network architectures.
4.1 DATASETS
We evaluate the proposed framework on three cancer diagnosis tasks. When training our models, we select the model with the highest bag-level performance on the validation set and report the performance on a held-out test set. More information about the datasets, experimental setup, and implementation is provided in Appendix A.
Camelyon16 (Bejnordi et al., 2017) is a popular benchmark for MIL (Li et al., 2021; Shao et al., 2021; Zhang et al., 2022) where the goal is to detect breast-cancer metastasis in lymph node sections. It consists of 400 whole-slide histopathology images. Each whole slide image (WSI) corresponds to a bag with a binary label indicating the presence of cancer. Each WSI is divided into an average of 625 tiles at 5x magnification, which correspond to individual instances. The dataset also contains pixel-wise annotations indicating the presence of cancer, which can be used to derive ground-truth instance-level labels.
TCGA-LUAD is a dataset from The Cancer Genome Atlas (TCGA), a landmark cancer genomics program, where the associated task is to detect genetic mutations in cancer cells. Detecting these mutations is important to determine treatment options for LUAD (Coudray et al., 2018; Fu et al., 2020). The data contains 800 labeled tumorous frozen WSIs from lung adenocarcinoma (LUAD). Each WSI is divided into an average of 633 tiles at 10x magnification corresponding to unlabeled instances.
The Breast Ultrasound Dataset contains 28,914 B-mode breast ultrasound exams (Shen et al., 2021a). The associated task is to detect breast cancer. Each exam contains between 4-70 images (18.8 images per exam on average) corresponding to individual instances, but only a bag-level label indicating the presence of cancer is available per exam. Additionally, a subset of images is annotated, which makes it possible to also evaluate instance-level performance. This dataset is imbalanced at the bag level: only 5,593 of 28,914 exams contain cancer.
4.2 COMPARISON WITH CONTRASTIVE SELF-SUPERVISED LEARNING
In this section, we compare the performance of ItS2CLR to a baseline that performs feature-extraction via the CSSL method SimCLR (Chen et al., 2020). This approach has achieved state-of-the-art performance on multiple WSI datasets (Li et al., 2021). To ensure a fair comparison, we initialize the feature extractor in ItS2CLR also using SimCLR. Table 2 shows that ItS2CLR clearly outperforms the SimCLR baseline on all three datasets. The performance improvement is particularly significant on Camelyon16, where it achieves a bag-level AUC of 0.943, outperforming the baseline by an absolute margin of 8.87%. ItS2CLR also outperforms an improved baseline reported by Li et al. (2021) with an AUC of 0.917, which uses higher resolution tiles than in our experiments (both 20x and 5x, as opposed to only 5x).
To perform a more exhaustive comparison of the features learned by SimCLR and ItS2CLR, we compare them in combination with several different popular MIL aggregators:¹ max pooling, top-k pooling (Shen et al., 2021b), attention-MIL pooling (Ilse et al., 2018), DS-MIL pooling (Li et al., 2021), and a transformer (Chen and Krishnan, 2022) (see Appendix C for a detailed description). Table 3 shows that the ItS2CLR features outperform the SimCLR features by a large margin for all aggregators, and are substantially more stable (the standard deviation of the AUC over multiple trials is lower).
We also evaluate instance-level accuracy, which can be used to interpret the bag-level prediction (for instance, by revealing tumor locations). In Table 4, we report the instance-level AUC, F1 score, and Dice score of both ItS2CLR and the SimCLR-based baseline on Camelyon16. ItS2CLR again exhibits stronger performance. Figure 3 shows an example of instance-level predictions in the form of a tumor localization map.
4.3 COMPARISON WITH ALTERNATIVE APPROACHES
In Tables 3, 4 and 5, we compare ItS2CLR with the approaches described below. Table 6 reports additional comparisons at different witness rates (the fraction of positive instances in positive bags), created synthetically by modifying the ratio between negative and positive instances in Camelyon16.
Finetuning with ground-truth instance labels provides an upper bound on the performance that can be achieved through feature improvement. ItS2CLR does not reach this gold standard, but substantially closes the gap.
1To be clear, the ItS2CLR features are learned using the DS-MIL aggregator, as described in Section 3, and then frozen before combining them with the different aggregators.
Cross-entropy finetuning with pseudo labels, which we refer to as CE finetuning, consistently underperforms ItS2CLR when combined with different aggregators, except at high witness rates. We conjecture that this is due to the sensitivity of the cross-entropy loss to incorrect pseudo labels.
Ablated versions of ItS2CLR where we do not apply iterative updates of the pseudo labels (w/o iter.), or our self-paced learning scheme (w/o SPL) or both (w/o both) achieve substantially worse performance than the full approach. This indicates that both of these ingredients are critical in learning discriminative features.
End-to-end training is often computationally infeasible in medical applications. We compare ItS2CLR to end-to-end models on a downsampled version of Camelyon16 (see Appendix A.5) and on the breast ultrasound dataset. For a fair comparison, all end-to-end models use the same CSSL-pretrained weights and aggregator as used in ItS2CLR. Table 5 shows that ItS2CLR achieves better instance- and bag-level performance than end-to-end training. The analysis in Appendix B.3 shows that end-to-end training overfits quickly when the bag size is large.
4.4 IMPROVING DIFFERENT PRETRAINED REPRESENTATIONS
In this section, we show that ItS2CLR is capable of improving representations learned by different pretraining methods: supervised training on ImageNet and two non-contrastive SSL methods, BYOL (Grill et al., 2020a) and DINO (Caron et al., 2021). DINO is based on the ViT-S/16 architecture (Dosovitskiy et al., 2020), whereas the other methods are based on ResNet-18. Table 7a shows the result of initializing ItS2CLR with these features (as well as with SimCLR). The different initializations achieve varying degrees of pseudo label accuracy, but ItS2CLR improves the performance of all of them, demonstrating the robustness of the proposed framework.
5 RELATED WORK
Self-supervised learning Contrastive learning methods have become popular in unsupervised representation learning, achieving state-of-the-art self-supervised learning performance for natural images (Chen et al., 2020; He et al., 2020; Grill et al., 2020a; Caron et al., 2020; Zbontar et al., 2021; Caron et al., 2021). These methods have also shown promising results in medical imaging (Li et al., 2021; Azizi et al., 2021; Kaku et al., 2021; Zhu et al., 2022; Ciga et al., 2022). Recently, Li et al. (2021) applied SimCLR (Chen et al., 2020) to extract instance-level features for WSI MIL tasks and achieved state-of-the-art performance. However, Arora et al. (2019) point out the
potential issue of class collision in contrastive learning, i.e. that some negative pairs may actually have the same class. Prior works on alleviating class collision problem include reweighting the negative and positive terms with class ratio (Chuang et al., 2020), pulling closer additional similar pairs (Dwibedi et al., 2021), and avoiding pushing apart negatives that are inferred to belong to the same class based on a similarity metric (Zheng et al., 2021). In contrast, we propose a framework that leverages information from the bag-level labels to iteratively resolve the class collision problem.
There also exist non-contrastive alternatives that avoid introducing negative pairs (e.g. BYOL (Grill et al., 2020b), DINO (Caron et al., 2021), and SimSiam (Chen and He, 2021)). However, Wang et al. (2021) report that removing the negatives can make different object categories overlap and result in under-clustering, which limits the model’s ability to learn discriminative features.
Multiple instance learning A major part of MIL works focuses on improving the MIL aggregator. Traditionally, non-learnable pooling operators such as mean-pooling and max-pooling were commonly used in MIL (Pinheiro and Collobert, 2015; Feng and Zhou, 2017). More recent methods parameterize the aggregator using neural networks that employ attention mechanisms (Ilse et al., 2018; Li et al., 2021; Shao et al., 2021; Chen and Krishnan, 2022). This research direction is complementary to our proposed approach, which focuses on obtaining better instance representations, and can be combined with different types of aggregators (see Section 4.2).
6 CONCLUSION
In this work, we investigate how to improve feature-extraction in multiple-instance learning models. We identify a limitation of contrastive self-supervised learning: class collision hinders it from learning discriminative features in class-imbalanced MIL problems. To address this, we propose a novel framework that iteratively refines the features with pseudo labels estimated by the aggregator. Our method outperforms the existing state-of-the-art MIL methods on three medical datasets, and can be combined with different aggregators and pretrained feature extractors.
The proposed method does not outperform a cross-entropy-based baseline at very high witness rates, suggesting that it is mostly suitable for low-witness-rate scenarios (however, it is worth noting that this is the regime more commonly encountered in medical applications such as cancer diagnosis). In addition, there is a performance gap between our method and finetuning using instance-level ground truth, which suggests there is further room for improvement.
A EXPERIMENTS
A.1 DATASET
Camelyon16 Camelyon16 is a public dataset for detection of metastasis in breast cancer. This dataset consists of 271 training and 129 test whole slide images (WSI), which are further divided into roughly 3.2 million patches at 20× magnification and 0.25 million patches at 5× magnification. On average, at 20× and 5× magnification each slide contains approximately 8,000 and 625 patches respectively. Each WSI is paired with pixel-level annotations indicating the position of tumors (if any are present). We ignore the pixel-level annotations during training and consider only slide-level labels (i.e. the slide is considered positive if it contains any annotated tumor regions). As a result, positive bags contain mixtures of patches with tumors and patches with healthy tissue. Negative bags contain only patches with healthy tissue. The ratio between positive and negative patches in this dataset is highly imbalanced. Only a small fraction of patches (less than 10%) in the positive slides contains tumor.
TCGA-LUAD TCGA for Lung Adenocarcinoma (LUAD) is a subset of TCGA (The Cancer Genome Atlas), a landmark cancer genomics program. It consists of 800 tumorous frozen wholeslide histopathology images and the corresponding genetic mutation status. Each WSI is paired with binary labels indicating whether each gene is mutated or wild type. In this experiment, we build MIL models to detect four mutations - EGFR, KRAS, STK11, and TP53, which are sensitizing mutations that can impact treatment options in LUAD (Coudray et al., 2018; Fu et al., 2020). We split the data randomly into training, validation and test sets so that each patient will appear in only one of the subsets. After splitting the data, 477 images are in the training set, 96 images are in the validation set, and 227 images are in the test set.
Breast Ultrasound dataset The Breast Ultrasound Dataset includes 28,914 ultrasound exams (Shen et al., 2021a). An exam is labeled as cancer-positive if there is a pathology-confirmed malignant finding associated with this exam. In this dataset, 5593 exams are cancer-positive. On average, each exam contains approximately 18 images. Patients in the dataset were randomly divided into a training set (60%), a validation set (10%), and test set (30%). Each patient was included in only one of the three sets. We show 5 example breast ultrasound images in Figure 4.
A.2 IMPLEMENTATION DETAILS
All experiments were conducted on NVIDIA RTX8000 GPUs and NVIDIA V100 GPUs. For all models, we perform model selection during training based on bag-level AUC evaluated on the validation set.
Camelyon16 We follow the same preprocessing and pretraining steps as Li et al. (2021). To preprocess the slides, we cropped the slides into tiles at 5x magnification, filtered out tiles that do not contain enough tissue (average saturation < 30), and resized the images to a resolution of 224 x 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with default settings (nearest neighbor sampling).
We pretrain the feature extractor, ResNet18 (He et al., 2016), with SimCLR (Chen et al., 2020) for a maximum of 600 epochs. Each patch is represented by a 512-dimensional vector. We set the batch size to 512 and the temperature to 0.5. We use SGD with a learning rate of 0.03, weight decay of 0.0001, and a cosine annealing scheduler. We also train a MIL aggregator using the instance features extracted by the feature extractor in order to monitor the bag-level AUC of the downstream task on the validation set. During finetuning with ItS2CLR, we finetune the feature extractor for a maximum of 50 epochs.
The batch size is set to 512, and the learning rate is set to $10^{-2}$. At the feature extractor training stage, we apply random data augmentation to each instance (a possible torchvision composition is sketched after the list), including:
• Random (p = 0.8) color jittering: brightness, contrast, and saturation factors are uniformly sampled from [0.2, 1.8], hue factor is uniformly sampled from [−0.2, 0.2];
• Random gray scale (p = 0.2); • Random Gaussian blur with kernel size of 0.06 times the size of an image; • Random horizontal/vertical flipping with 0.5 probability.
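A possible torchvision composition of these augmentations is sketched below; the exact parameter mapping (e.g. a jitter factor of 0.8 yielding the range [0.2, 1.8], and the odd kernel size 13 ≈ 0.06 × 224) is our reading of the description, not the authors' released code.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(
        brightness=0.8, contrast=0.8, saturation=0.8, hue=0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=13),  # ~0.06 * 224, rounded to odd
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
])
```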
When training the DS-MIL aggregator, we follow the settings in (Li et al., 2021). We use the Adam optimizer during training. Since each bag may contain a different number of instances, we follow (Li et al., 2021) and set the batch size to a single bag. We train each model for a maximum of 350 epochs. We use an initial learning rate of $2 \times 10^{-4}$, and use the StepLR scheduler to reduce the learning rate by 0.5 every 75 epochs. Details on the hyperparameters used for training the aggregator are in Appendix C.
TCGA-LUAD To preprocess the slides, we cropped them into tiles at 10x magnification, filtered out the background tiles that do not contain enough tissue (average saturation less than 30), and resized the images to a resolution of 224 x 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with nearest neighbor sampling. The tiles were color-normalized using the Vahadane method (Vahadane et al., 2016).
To train the feature extractor, we perform the same process as for Camelyon16.
We also use DS-MIL (Li et al., 2021) as the aggregator. When training the aggregator, we resample the ratio of positive and negative bags to keep the class ratio balanced. We train the aggregator for a maximum of 100 epochs using the Adam optimizer with the learning rate set to $2 \times 10^{-4}$, reducing the learning rate by 0.5 every 50 epochs.
Breast Ultrasound We follow the same preprocessing steps as Shen et. al. (Shen et al., 2021a). All images were resized to 224 x 224 pixels using bilinear interpolation. We used ResNet18 (He et al., 2016) as the feature extractor and pretrained it using SimCLR (Chen et al., 2020) for 100 epochs. We adopt the same pretraining approach as for Camelyon16. We used the Instance Attention-MIL
as an aggregator (Ilse et al., 2018). Given a bag of images $x_1, \ldots, x_k$ and a feature extractor $f$, the aggregator first computes instance-level predictions $\hat{y}_i$ for each image $x_i$. It then calculates an attention score $\alpha_i \in [0, 1]$ for each image $x_i$ using its feature vector $f(x_i)$. Lastly, the bag-level prediction is computed as the average instance prediction weighted by the attention scores, $\hat{y} = \sum_{i=1}^{k} \alpha_i \hat{y}_i$. To optimize the aggregator, we trained it using Adam with a learning rate of $10^{-3}$ for a maximum of 350 epochs.
A.3 COMPUTATIONAL COMPLEXITY
It takes 600 epochs (around 90 hours) to train SimCLR. Our finetuning only takes 50 extra epochs (around 10 hours), which is only 1/10 of the pretraining time. Updating pseudo labels is also efficient: updating instance features and training MIL aggregators only takes around 10 minutes, and we only do this every 5 epochs.
A.4 HYPERPARAMETERS OF TRAINING THE FEATURE EXTRACTOR IN ITS2CLR
Hyperparameter tuning The hyperparameters of the proposed method include: the threshold used for binarizing the predictions to produce pseudo labels, $\eta \in [0.1, 0.9]$; the proportion of positive queries sampled, $p^+ \in [0.05, 0.5]$; and the initial ratio $r_0 \in [0.01, 0.7]$ and final ratio $r_T \in [0.2, 0.8]$ in the self-paced sampling scheme. For Camelyon16, we obtain the highest bag-level validation AUC using the following hyperparameters: $\eta = 0.3$, $p^+ = 0.2$, $r_0 = 0.2$ and $r_T = 0.8$. We use the feature extractor trained under this setting in Tables 2, 3 and 4. The complete list of hyperparameters in different experiments is reported in Table 8.
Sensitivity analysis We conduct a sensitivity analysis for each hyperparameter on Camelyon16, and observe robust performance over a range of hyperparameter values.
• Threshold η: The choice of η influences the instance-level pseudo labels. As shown in Figure 5, the outputs of the instance-level classifier are mostly close to 0 or 1, so the pseudo labels do not dramatically vary for a wide range of η. We conducted a small ablation experiment on the importance of η. Figure 6 (left) shows that ItS2CLR is quite robust to the value of η, except for some extreme values. If η is too small (e.g. 0.1), it can introduce a significant number of false positives. If η is too large (e.g. 0.8), it can mistakenly exclude some useful positive samples, causing a drop in the performance. In the main paper, since negative instances are more prevalent than positive instances, a threshold of 0.3 (less than 0.5) can increase the recall for the positive instances.
• Sampling ratio of query instances over pseudo labels: We use $p^+$ to denote the percentage of positive query instances used during the contrastive learning stage. Figure 6 (right) shows that it is desirable to choose a relatively small $p^+$. Since there are far fewer positive instances than negative instances, keeping the ratio of positive queries low avoids repetitively sampling from a limited number of positive instances. Also, since the negative instance set $\mathcal{X}_{\text{neg}}^-$ is clean, there is more label noise among the positive pseudo labels.
• The initial rate r0 and final rate rT for the self-paced sampling scheduler: Figure 7 shows that ItS2CLR is also generally robust to the r0 and rT . However, extremely large initial rate r0 (high confidence in the pseudo labels) may introduce more noise during the training and hurt the performance. Conversely, extremely small rT (low confidence in the pseudo labels) may prohibit the model from using more data, also hurting performance.
• Sampling during warm-up: During the warm-up phase, we sample query instances from $\mathcal{X}_{\text{neg}}^-$. An alternative would be to sample the query instance from $\mathcal{X}_{\text{pos}}^+$ and the corresponding set $\mathcal{D}_x$ from $\mathcal{X}_{\text{neg}}^-$. However, our experiments show that the bag-level AUC drops to 90.91% under this setting, significantly lower than the 94.25% achieved by the proposed method. This comparison demonstrates the importance of using clean negative instances as query images during warm-up.
A.5 EXPERIMENTS ON SYNTHETIC VERSIONS OF CAMELYON16
Simulation of witness rates (WR)
Since the ground truth instance-level labels are available for Camelyon16, we can conduct experiments on synthetic versions of the dataset to study the impact of the prevalence of positive instances
on the performance of the proposed approach and the baselines. Section 2 describes how the performance of CSSL is affected by low witness rates (fraction of positive instances in each positive bag). To study the robustness of the proposed framework to the witness rate in the data, we increase or decrease the witness rate of Camelyon16 by randomly dropping negative or positive instances within each bag respectively. The percentage of retained instances and the resulting witness rates are reported in Table 6.
Downsampled version of Camelyon16 for end-to-end training
In order to enable end-to-end training we downsample each bag in Camelyon16 to around 500 instances so that it fits in the memory of a GPU. To achieve this, we divide each large bag into smaller bags while keeping the witness rate of each sub-bag at a similar level as the original bag. In more detail, if the bag size is smaller than or equal to 500, we keep the original bag. If the bag size is greater than 500, we split the bag into sub-bags of approximately 500 instances each: the negative instances of the original bag are randomly divided evenly across the sub-bags, and for positive bags the positive instances are randomly partitioned evenly across the sub-bags as well. If the number of positive instances within a positive bag is smaller than the desired number of sub-bags, we reduce the number of sub-bags to the number of positive instances. We then combine the positive and negative instances to form the sub-bags. This ensures that the bag-level label is correct and that the witness rate of each positive sub-bag remains similar to that of the original bag.
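A simplified Python sketch of this splitting procedure is shown below; the function name and the list-based interface are ours, not the authors'.

```python
import random

def split_bag(negatives: list, positives: list, target: int = 500) -> list:
    """Split one bag into sub-bags of ~target instances, spreading positives."""
    total = len(negatives) + len(positives)
    if total <= target:
        return [negatives + positives]
    n_sub = max(1, round(total / target))
    if positives:  # each positive sub-bag needs at least one positive
        n_sub = min(n_sub, len(positives))
    random.shuffle(negatives)
    random.shuffle(positives)
    # Deal instances round-robin so each sub-bag gets an even share.
    return [negatives[i::n_sub] + positives[i::n_sub] for i in range(n_sub)]
```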
A.6 DESCRIPTION OF THE ABLATION STUDY
Details for CE finetuning with/without Iterative Updating
• CE + w/o iterative updating: We use the same set of initial pseudo labels as our ItS2CLR framework. All instances within the negative bags are labeled as negative; for instances in positive bags, we assign pseudo labels according to the instance predictions of the aggregator. The pseudo labels are then kept fixed throughout the subsequent finetuning epochs.
• CE + iterative updating: Same as CE + w/o iterative updating, except that the pseudo labels are updated every few epochs and are in turn used to guide the finetuning.
Details for ItS2CLR with/without SPL
• ItS2CLR without iterative updating: We keep everything the same as the full ItS2CLR procedure (including the SPL strategy), but we do not apply steps 7, 8 and 9 in Algorithm 1. As a result, the pseudo labels always equal the initial set of pseudo labels.
• ItS2CLR without SPL: We keep everything the same as the full ItS2CLR procedure (including iterative updating), but modify step 10 in Algorithm 1: we do not use the self-paced sampling scheme of Section 3.2, and instead use all the pseudo-labeled data from the beginning of finetuning.
B ADDITIONAL RESULTS
We present here additional results to supplement those presented in Section 4.
B.1 LEARNING CURVES
F1-Score plot corresponding to Figure 1: In Figure 8, we show the max F1 score curve corresponding to the right side of Figure 1. This plot confirms the importance of self-paced learning and iterative updating in ItS2CLR.
Instance pseudo label AUC comparison with cross-entropy finetuning: Figure 9 compares ItS2CLR with an alternative approach that finetunes the feature extractor using cross-entropy (CE) loss on the Camelyon16 dataset. Without iterative updating, CE finetuning rapidly overfits to the noise in the pseudo labels. Iterative updating prevents this to some extent, but does not match the performance of ItS2CLR, which produces increasingly accurate pseudo labels as the iterations proceed.
B.2 INSTANCE-LEVEL EVALUATION
In order to evaluate instance-level performance, we report classification metrics including AUC, F1-score, AUPRC and Dice score for localization.
The Dice score is defined as follows:

$$\text{Dice} = \frac{2 \sum_i y_i p_i}{\sum_i y_i + \sum_i p_i}, \tag{4}$$

where $y_i$ and $p_i$ are the ground truth and predicted probability for the $i$th sample. It penalizes predictions with low confidence. The predicted probability is computed from the output $s_i$ of the MIL model via linear scaling:

$$p_i = \sigma\left(a s_i + b\right), \tag{5}$$

where $a \in [-5, 5]$ and $b \in [0.1, 10]$ are chosen to maximize the Dice score on the validation set.
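For reference, a small NumPy sketch of Equations 4-5 (the function names are ours):

```python
import numpy as np

def dice(y: np.ndarray, p: np.ndarray) -> float:
    """Equation 4: y holds ground-truth labels, p predicted probabilities."""
    return 2.0 * float((y * p).sum()) / float(y.sum() + p.sum())

def scaled_probs(s: np.ndarray, a: float, b: float) -> np.ndarray:
    """Equation 5: p_i = sigma(a * s_i + b) applied to raw MIL outputs s."""
    return 1.0 / (1.0 + np.exp(-(a * s + b)))
```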
Max pooling aggregator: In Table 4, we show that our model achieves better weakly supervised localization performance than other methods when DS-MIL is used as the aggregator. In Table 9, we show that the same conclusion holds for an aggregator based on max-pooling.
Linear evaluation: In Table 10, we report results obtained by training a logistic regression model on the features obtained from the same approaches as in Table 4, following a standard linear evaluation pipeline in representation learning (Chen et al., 2020). ItS2CLR again achieves the best instance-level performance. We also produce bag-level predictions using the maximum output of the linear classifier for each bag, which again shows that superior instance-level performance translates into superior bag-level classification.
B.3 COMPARISON WITH END-TO-END TRAINING
In this section we provide additional results to complement Table 5, where ItS2CLR is compared to end-to-end models. The end-to-end training is conducted with the same aggregators for each dataset as described in Section 4 and Appendix A.2.
Camelyon16 Figure 10 shows that an end-to-end model trained on the downsampled version of Camelyon16 described in Section A.5 rapidly overfits, both when trained from scratch and from SimCLR-pretrained weights. The two-stage model, on the other hand, is less prone to overfitting. Table 11 shows that the two-stage learning pipeline outperforms end-to-end training, and is in turn outperformed by ItS2CLR.
Breast Ultrasound dataset Table 12 shows that for the breast-ultrasound dataset end-to-end training outperforms the SimCLR+Aggregator baseline, but is outperformed by ItS2CLR.
B.4 TUMOR LOCALIZATION MAPS
Figure 11 provides additional tumor localization maps.
C MIL AGGREGATORS
C.1 FORMULATION OF MIL AGGREGATORS
In this section, we describe the different MIL aggregators benchmarked in Section 4.2 and Table 3.
Let $\mathcal{B}$ denote the collection of sets of feature vectors in $\mathbb{R}^d$. The bags of extracted features in the dataset are denoted by $\{H_b\}_{b=1}^{B} \subset \mathcal{B}$. An aggregator is defined as a function $g : \mathcal{B} \to [0, 1]$ mapping bags of extracted features to a score in $[0, 1]$.
There exist two main approaches in MIL:
1. The instance-level approach: using a logistic classifier on each instance, then aggregating instance predictions over a bag (e.g. max-pooling, top k-pooling).
2. The embedding-level approach: aggregating the instance embeddings, then obtaining a bag-level prediction via a bag-level classifier (e.g. attention-based aggregator, Transformer).
We denote the embeddings of the instances within a bag by $H = \{h_k\}_{k=1}^{K}$, where $K$ is the number of instances.
Max-pooling obtains bag-level predictions by taking the maximum of the instance-level predictions produced by a logistic instance classifier $\phi$, that is

$$g_\phi(H) = \max_{k=1,\ldots,K} \phi(h_k). \tag{6}$$
Top-k pooling (Shen et al., 2021b) produces bag-level predictions using the mean of the top-$M$ ranked instance-level predictions produced by a logistic instance classifier $\phi$, where $M$ is a hyperparameter.
Let $\text{top}_M(\phi, H)$ denote the indices of the elements in $H$ for which $\phi$ produces the highest $M$ scores. Then

$$g_\phi(H) = \frac{1}{M} \sum_{k \in \text{top}_M(\phi, H)} \phi(h_k). \tag{7}$$
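A one-line PyTorch sketch of each of these pooling operators (Equations 6 and 7); `scores` is assumed to be a `(K,)` tensor of instance predictions $\phi(h_k)$.

```python
import torch

def max_pool(scores: torch.Tensor) -> torch.Tensor:
    return scores.max()                                        # Eq. (6)

def top_m_pool(scores: torch.Tensor, M: int) -> torch.Tensor:
    return scores.topk(min(M, scores.numel())).values.mean()   # Eq. (7)
```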
Attention-based MIL Ilse et al. (2018) aggregates instance embeddings using a sum weighted by attention weights. Then the bag-level estimation is computed from the aggregated embeddings by a logistic bag-level classifier φ:
$$g_\varphi(H) = \varphi\left(\sum_{k=1}^{K} a_k h_k\right), \tag{8}$$

where $a_k$ is the attention weight on instance $k$, computed as

$$a_k = \frac{\exp\left(w^T \tanh(V h_k^T)\right)}{\sum_{j=1}^{K} \exp\left(w^T \tanh(V h_j^T)\right)}, \tag{9}$$

where $w \in \mathbb{R}^{p \times 1}$ and $V \in \mathbb{R}^{p \times d}$ are learnable parameters and $p$ is the dimension of the hidden layer. DS-MIL combines instance-level and embedding-level aggregation; we refer to DS-MIL (Li et al., 2021) for more details on this approach.
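A minimal PyTorch sketch of attention-based MIL pooling (Equations 8-9); the dimensions and the choice of a logistic bag-level classifier are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, d: int = 512, p: int = 128):
        super().__init__()
        self.V = nn.Linear(d, p, bias=False)       # V in Eq. (9)
        self.w = nn.Linear(p, 1, bias=False)       # w in Eq. (9)
        self.classifier = nn.Linear(d, 1)          # bag-level classifier

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        """H: (K, d) instance embeddings of one bag; returns a bag score."""
        a = torch.softmax(self.w(torch.tanh(self.V(H))), dim=0)  # (K, 1), Eq. (9)
        z = (a * H).sum(dim=0)                                   # weighted sum, Eq. (8)
        return torch.sigmoid(self.classifier(z))
```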
Transformer aggregation (Chen and Krishnan, 2022) uses an $L$-layer Transformer to process the set of instance features $H$. The initial set $H^{(0)}$ is set equal to $H$, and is then processed as follows:
$$H'^{(l)} = \text{MSA}\left(H^{(l-1)}\right) + H^{(l-1)}, \qquad H^{(l)} = \text{MLP}\left(H'^{(l)}\right) + H'^{(l)}, \tag{10}$$
for $l = 1, \ldots, L$, where MSA is multi-head self-attention and MLP is a multi-layer perceptron. The processed vectors $H^{(L)}$ are then fed to attention-based MIL (Ilse et al., 2018) to obtain bag-level predictions:
$$g_\varphi(H^{(L)}) = \varphi\left(\sum_{k=1}^{K} a_k h_k^{(L)}\right), \tag{11}$$

where $a_k$ is the attention weight on instance $k$, computed as

$$a_k = \frac{\exp\left(w^T \tanh\left(V (h_k^{(L)})^T\right)\right)}{\sum_{j=1}^{K} \exp\left(w^T \tanh\left(V (h_j^{(L)})^T\right)\right)}, \tag{12}$$

where $w \in \mathbb{R}^{p \times 1}$ and $V \in \mathbb{R}^{p \times d}$ are learnable parameters and $p$ is the dimension of the hidden layer.
C.2 IMPLEMENTATION DETAILS
Top-k pooling. We select the ratio in top-k pooling from the set {0.1%, 1%, 3%, 10%, 20%}.

DS-MIL. The weight between the two cross-entropy loss functions in DS-MIL is selected from the interval [0.1, 5] based on the best validation performance.
Attention-based MIL. The hidden dimension of the attention module to compute the attention weight is set equal to the dimension of the input feature vector (512).
Transformer. We add a lightweight two-layer Transformer block to process instance features. We did not observe an increase in performance with additional blocks.
Summary Of The Paper
The paper proposes ItS2CLR, a method to finetune the feature extractor of MIL models with supervised contrastive learning (SCL) by sampling instances based on pseudo labels. The main challenge the authors tackle is how to apply SCL in a MIL setting where instance-level labels are not available. The experiments are conducted on three medical imaging datasets, comparing a SimCLR baseline to SimCLR + ItS2CLR using several MIL aggregation methods. The results show substantial improvements over the baseline.
Strengths And Weaknesses
Strengths:
Reported instance and bag-level performances are considerably higher than baseline.
The method fixes the class collision problem for contrastive learning in medical imaging.
Results are reported and consistent for several datasets.
Weaknesses:
It is a rather complicated fix (additional representation learning phase; repeated feature extraction, which can be costly in histopathology slides; multiple additional hyperparameters; requirement for a validation set). This is not per se a problem, but one could use SSL methods such as BYOL, MoCo or Barlow Twins, which are considerably simpler in this regard and do not make use of negative instances, and thus do not require a fix for the class collision problem.
In that sense, it is questionable that in-domain BYOL pre-training in Table 7 performs worse than ImageNet pre-training. Related works report over 98% AUC with these methods [1, 2], which should be discussed in the paper. It is also unclear how ItS2CLR is applied to BYOL and DINO, as these methods are different from supervised contrastive learning, i.e. Eq. 2 referenced in Algorithm 1 does not apply to these methods.
Aggregators: The proposed method requires instance-level probability scores. It is unclear how these are derived in embedding-level approaches like Attention-MIL. Is there a reason why Attention-MIL in Table 3 performs considerably worse with SimCLR than all other aggregation functions, although in ItS2CLR (e.g. the w/o both setting) it is again on par with all other aggregation functions? The Transformer listed in Table 3 is not an aggregation function itself; it also uses Attention-MIL.
Details: It would be more rigorous to repeat the experiment from Fig. 1 and report intervals since the standard deviations in Table 3 are quite high. Furthermore, all variants used: ItS2CLR (full), w/o iterative update, w/o SPL, ground truth finetuning, CE finetuning (w/o iter./iter), should be explained in more detail in the main text as the descriptions given in Section 4.3 are in part difficult to understand.
ItS2CLR makes use of self-training, which should be addressed in more detail in the related works.
Additional comments/questions:
The function composition after Equation 1 should be changed as f is applied first.
Fig. 1: The difference between dashed and solid lines should be explained.
The text does not mention any validation set for CAMELYON16, which is an important detail as Algorithm 1 relies on it.
Is the warm-up phase beneficial? When building D_x from positive bags, one might still sample negative instances, which would be detrimental.
Why are top-r% sampled from X_pos^- ? Would it not be more beneficial to sample negative instances with the least (disease) probability?
Are the same augmentations applied to all SSL methods?
Why is ViT used as the backbone for DINO? It would be more comparable to use the same backbone for all methods.
Would the method scale to greater magnifications?
References:
[1] Dehaene, Olivier, et al. "Self-supervision closes the gap between weak and strong supervision in histology."
[2] Bergner, Benjamin, et al. "Iterative Patch Selection for High-Resolution Image Recognition."
Clarity, Quality, Novelty And Reproducibility
The goal of the paper is clear. The approach uses existing techniques (SCL, self-training) but the combination is novel to the best of my knowledge. The proposed method seems technically correct. Adding/Polishing various details as mentioned above would improve readability and reproducibility. |
ICLR | Title
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Abstract
Learning representations for individual instances when only bag-level labels are available is a fundamental challenge in multiple instance learning (MIL). Recent works have shown promising results using contrastive self-supervised learning (CSSL), which learns to push apart representations corresponding to two different randomly-selected instances. Unfortunately, in real-world applications such as medical image classification, there is often class imbalance, so randomly-selected instances mostly belong to the same majority class, which precludes CSSL from learning inter-class differences. To address this issue, we propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), which improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels. The framework employs a novel self-paced sampling strategy to ensure the accuracy of pseudo labels. We evaluate ItS2CLR on three medical datasets, showing that it improves the quality of instance-level pseudo labels and representations, and outperforms existing MIL methods in terms of both bag and instance level accuracy.
1 INTRODUCTION
The goal of multiple instance learning (MIL) is to perform classification on data that is arranged in bags of instances. Each instance is either positive or negative, but these instance-level labels are not available during training; only bag-level labels are available. A bag is labeled as positive if any of the instances in it are positive, and negative otherwise. An important application of MIL is cancer diagnosis from histopathology slides. Each slide is divided into hundreds or thousands of tiles but typically only slide-level labels are available (Courtiol et al., 2018; Campanella et al., 2019; Li et al., 2021; Chen and Krishnan, 2022; Zhang et al., 2022; Lu et al., 2021).
Histopathology slides are typically very large, in the order of gigapixels (the resolution of a typical slide can be as high as $10^5 \times 10^5$ pixels), so end-to-end training of deep neural networks is typically infeasible due to memory limitations of GPU hardware. Consequently, state-of-the-art approaches (Campanella et al., 2019; Li et al., 2021; Zhang et al., 2022; Lu et al., 2021; Shao et al., 2021) utilize a two-stage learning pipeline: (1) a feature-extraction stage where each instance is mapped to a representation which summarizes its content, and (2) an aggregation stage where the representations extracted from all instances in a bag are combined to produce a bag-level prediction (Figure 1). Notably, our results indicate that even in rare settings where end-to-end training is possible, this pipeline is still superior (see Section 4.3).
In this work, we focus on a fundamental challenge in MIL: how to train the feature extractor. Currently, there are three main strategies to perform feature-extraction, which have significant shortcomings. (1) Pretraining on a large natural image dataset such as ImageNet (Shao et al., 2021; Lu et al., 2021) is problematic for medical applications because features learned from natural images may generalize poorly to other domains (Lu et al., 2020). (2) Supervised training using bag-level labels as instance-level labels is effective if positive bags contain mostly positive instances (Lerousseau et al., 2020; Xu et al., 2019; Chikontwe et al., 2020), but in many medical datasets this is not the case (Bejnordi et al., 2017; Li et al., 2021). (3) Contrastive self-supervised learning (CSSL) outperforms prior methods (Li et al., 2021; Ciga et al., 2022), but is not as effective in settings with heavy class imbalance, which are of crucial importance in medicine. CSSL operates by pushing apart the representations of different randomly selected instances. When positive bags contain
mostly negative instances, CSSL training ends up pushing apart negative instances from each other, which precludes it from learning features that distinguish positive samples from the negative ones (Figure 2). We discuss this finding in Section 2.
Our goal is to address the shortcomings of current feature-extraction methods. We build upon several key insights. First, it is possible to extract instance-level pseudo labels from trained MIL models, which are more accurate than assigning the bag-level labels to all instances within a positive bag. Second, we can use the pseudo labels to finetune the feature extractor, improving the instance-level representations. Third, these improved representations result in improved bag-level classification and more accurate instance-level pseudo labels. These observations are utilized in our proposed framework, Iterative Self-Paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), as illustrated in Figure 1. After initializing the features with CSSL, we iteratively improve them via supervised contrastive learning (Khosla et al., 2020) using pseudo labels inferred by the aggregator. This feature refinement utilizes pseudo labels sampled according to a novel self-paced strategy, which ensures that they are sufficiently accurate (see Section 3.2). In summary, our contributions are the following:
1. We propose ItS2CLR – a novel MIL framework where instance features are iteratively improved using pseudo labels extracted from the MIL aggregator. The framework combines supervised contrastive learning with a self-paced sampling scheme to ensure that pseudo labels are accurate.
2. We demonstrate that the proposed approach outperforms existing MIL methods in terms of bag- and instance-level accuracy on three real-world medical datasets relevant to cancer diagnosis: two histopathology datasets and a breast ultrasound dataset. It also outperforms alternative finetuning methods, such as instance-level cross-entropy minimization and end-to-end training.
3. In a series of controlled experiments, we show that ItS2CLR is effective when applied to different feature-extraction architectures and when combined with different aggregators.
2 CSSL MAY NOT LEARN DISCRIMINATIVE FEATURES IN MIL
Recent MIL approaches use contrastive self-supervised learning (CSSL) to train the feature extractor (Li et al., 2021; Saillard et al., 2021; Rymarczyk et al., 2021). In this section, we show that CSSL has a crucial limitation in realistic MIL settings, which precludes it from learning discriminative features. CSSL aims to learn a representation space where samples from the same class are close to each other, and samples from different classes are far from each other, without access to class labels. This is achieved by minimizing the InfoNCE loss (Oord et al., 2018).
$$\mathcal{L}_{\mathrm{CSSL}} = \mathbb{E}_{x,\, x^{\mathrm{aug}},\, \{x_i^{\mathrm{diff}}\}_{i=1}^{n}} \left[ -\log \frac{\exp\!\left(f_\psi(x) \cdot f_\psi(x^{\mathrm{aug}})/\tau\right)}{\exp\!\left(f_\psi(x) \cdot f_\psi(x^{\mathrm{aug}})/\tau\right) + \sum_{i=1}^{n} \exp\!\left(f_\psi(x) \cdot f_\psi(x_i^{\mathrm{diff}})/\tau\right)} \right], \qquad (1)$$
where $f_\psi = \psi \circ f$, in which $f : \mathbb{R}^m \to \mathbb{R}^d$ is the feature extractor mapping the input data to a representation, $\psi : \mathbb{R}^d \to \mathbb{R}^{d'}$ is a projection head with a feed-forward network and ℓ2 normalization, and $\tau$ is a temperature hyperparameter. The expectation is taken over samples $x \in \mathbb{R}^m$ drawn uniformly from the training set. Minimizing the loss brings the representation of an instance $x$ closer to the representation of its random augmentation, $x^{\mathrm{aug}}$, and pushes the representation of $x$ away from the representations of $n$ other examples $\{x_i^{\mathrm{diff}}\}_{i=1}^n$ in the training set. A key assumption in CSSL is that $x$ belongs to a different class than most of the randomly-sampled examples $x_1^{\mathrm{diff}}, \dots, x_n^{\mathrm{diff}}$. This usually holds in standard classification datasets with many classes such as ImageNet (Deng et al., 2009), but not in MIL tasks relevant to medical diagnosis, where a majority of instances are negative (e.g. 95% in Camelyon16). Hence, most terms in the sum $\sum_{i=1}^{n} \exp\!\left(f_\psi(x) \cdot f_\psi(x_i^{\mathrm{diff}})/\tau\right)$ in the loss in Equation 1 correspond to pairs of examples $(x, x_i^{\mathrm{diff}})$ both belonging to the negative class. Therefore, minimizing the loss mostly pushes apart the representations of negative instances, as illustrated in the top panel of Figure 2. This is an example of class collision (Arora et al., 2019; Chuang et al., 2020), a general problem in CSSL, which has been shown to impair performance on downstream tasks (Ash et al., 2021; Zheng et al., 2021).
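For reference, Equation (1) admits a compact batched implementation. The sketch below assumes the representations have already been ℓ2-normalized by $\psi$, so the dot products are cosine similarities; it is an illustration, not the training code used in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z: torch.Tensor, z_aug: torch.Tensor,
                  z_diff: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE loss of Eq. (1).

    z:      (B, d') anchor representations f_psi(x), assumed l2-normalized.
    z_aug:  (B, d') representations of the augmented views f_psi(x_aug).
    z_diff: (B, n, d') representations of n other examples per anchor.
    """
    pos = (z * z_aug).sum(dim=-1) / tau                # (B,) positive-pair logits
    neg = torch.einsum("bd,bnd->bn", z, z_diff) / tau  # (B, n) negative-pair logits
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1) # (B, 1 + n)
    # -log softmax of the positive logit; equivalent to cross-entropy with label 0.
    target = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, target)
```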
Class collision makes CSSL learn representations that are not discriminative between classes. In order to study this phenomenon, we report the average inter-class distances and intra-class deviations for representations learned by CSSL on Camelyon16 in Table 1. The inter-class distance reflects how far the instance representations from different classes are apart; the intra-class distance reflects the variation of instance representations within each class. As predicted, the intra-class deviation corresponding to the representations of negative instances learned by CSSL is large. Representations learned by ItS2CLR have larger inter-class distance (more separated classes) and smaller intra-class deviation (less variance among instances belonging to the same class) than those learned by CSSL. This suggests that the features learned by ItS2CLR are more discriminative, which is confirmed by the results in Section 4.
Note that using bag-level labels does not solve the problem of class collision. When $x$ is negative, even if we select $\{x_i^{\mathrm{diff}}\}_{i=1}^n$ from the positive bags in Equation 1, most of the selected instances
will still be negative. Overcoming the class-collision problem requires explicitly detecting positive instances. This motivates our proposed framework, described in the following section.
3 MIL VIA ITERATIVE SELF-PACED SUPERVISED CONTRASTIVE LEARNING
Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR) addresses the limitation of contrastive self-supervised learning (CSSL) described in Section 2. ItS2CLR relies on latent variables indicating whether each instance is positive or negative, which we call instance-level pseudo labels. To estimate pseudo labels, we use instance-level probabilities obtained from the MIL aggregator (we use the aggregator from DS-MIL (Li et al., 2021), but our framework is compatible with any aggregator that generates instance-level probabilities). The pseudo labels are obtained by binarizing the probabilities according to a threshold $\eta \in (0, 1)$, which is a hyperparameter. ItS2CLR uses the pseudo labels to finetune the feature extractor (initialized using CSSL). In the spirit of iterative self-training techniques (Zhong et al., 2019; Wei et al., 2020; Liu et al., 2022), we alternate between refining the feature extractor, re-computing the pseudo labels, and training the aggregator, as described in Algorithm 1. A key challenge is that the pseudo labels are not completely accurate, especially at the beginning of the training process. To address the impact of incorrect pseudo labels, we apply a contrastive loss to finetune the feature extractor (see Section 3.1), where the contrastive pairs are selected according to a novel self-paced learning scheme (see Section 3.2). The right panel of Figure 1 shows that our approach iteratively improves the pseudo labels on the Camelyon16 dataset (Bejnordi et al., 2017). This finetuning only requires a modest increment in computational time (see Appendix A.3).
3.1 SUPERVISED CONTRASTIVE LEARNING WITH PSEUDO LABELS
To address the class collision problem described in Section 2, we leverage supervised contrastive learning (Pantazis et al., 2021; Dwibedi et al., 2021; Khosla et al., 2020) combined with the pseudo labels estimated by the aggregator. The goal is to learn discriminative representations by pulling together the representations corresponding to instances in the same class, and pushing apart those belonging to instances of different classes. For each anchor instance $x$ selected for contrastive learning, we collect a set $S_x$ believed to have the same label as $x$, and a set $D_x$ believed to have a different label from $x$. These sets are depicted in the bottom panel of Figure 2. The supervised contrastive loss corresponding to $x$ is defined as:
$$\mathcal{L}_{\mathrm{sup}}(x) = \frac{1}{|S_x|} \sum_{x_s \in S_x} -\log \frac{\exp\!\left(f_\psi(x) \cdot f_\psi(x_s)/\tau\right)}{\sum_{x_{s'} \in S_x} \exp\!\left(f_\psi(x) \cdot f_\psi(x_{s'})/\tau\right) + \sum_{x_d \in D_x} \exp\!\left(f_\psi(x) \cdot f_\psi(x_d)/\tau\right)}. \qquad (2)$$

In Section 3.2, we explain how to select $x$, $S_x$ and $D_x$ to ensure that the selected samples have high-quality pseudo labels.
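To make the loss concrete, here is a per-anchor PyTorch sketch of Equation (2). It assumes, as before, that embeddings are already ℓ2-normalized, and that $S_x$ and $D_x$ are passed as matrices of embeddings; this is an illustration under those assumptions.

```python
import torch

def supcon_loss(z_anchor: torch.Tensor, Z_same: torch.Tensor,
                Z_diff: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Supervised contrastive loss of Eq. (2) for a single anchor x.

    z_anchor: (d',) embedding f_psi(x), assumed l2-normalized.
    Z_same:   (|S_x|, d') embeddings of instances with the same pseudo label.
    Z_diff:   (|D_x|, d') embeddings of instances with a different pseudo label.
    """
    sim_same = Z_same @ z_anchor / tau  # (|S_x|,) logits against the same-label set
    sim_diff = Z_diff @ z_anchor / tau  # (|D_x|,) logits against the diff-label set
    # log of the denominator in Eq. (2): sum over S_x and D_x.
    log_denom = torch.logsumexp(torch.cat([sim_same, sim_diff]), dim=0)
    # Average of -log(exp(sim_s) / denom) over the same-label set S_x.
    return (log_denom - sim_same).mean()
```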
A tempting alternative to supervised contrastive learning is to train the feature extractor on pseudo labels using a standard cross-entropy loss. However, in Section 4.3 we show that this leads to substantially worse performance in the downstream MIL classification task due to memorization of incorrect pseudo labels. In contrast, self-paced sampling lets us choose contrastive pairs with the most confident, even certain, pseudo labels for supervised contrastive learning.
Algorithm 1 Iterative Self-Paced Supervised Contrastive Learning (ItS2CLR)
Require: Feature extractor $f$, projection head $\psi$, MIL aggregator $g_\phi$, where $\phi$ is an instance classifier
Require: Bags $\{X_b\}_{b=1}^B$, bag-level labels $\{Y_b\}_{b=1}^B$
1: $f^{(0)} \leftarrow f_{\mathrm{SSL}}$ ▷ Initialize $f$ with CSSL-pretrained weights
2: for $t = 0$ to $T$ do
3: $\quad h_k^b \leftarrow f^{(t)}(x_k^b)$, $\forall x_k^b \in X_b$, $\forall b$ ▷ Extract instance representations
4: $\quad H_b \leftarrow \{h_k^b\}_{k=1}^{K_b}$, $\forall b$ ▷ Group instance embeddings into bags
5: $\quad g_\phi^{(t)} \leftarrow$ Train with $\{H_b\}_{b=1}^B$ and $\{Y_b\}_{b=1}^B$ ▷ Train the aggregator
6: $\quad \mathrm{AUC}_{\mathrm{val}}^{(t)} \leftarrow$ bag-level AUC on the validation set
7: $\quad$ if $\mathrm{AUC}_{\mathrm{val}}^{(t)} \geq \max_{t' \leq t} \{\mathrm{AUC}_{\mathrm{val}}^{(t')}\}$ then ▷ If bag prediction improves
8: $\qquad \hat{y}_k^b \leftarrow \mathbb{1}\{\phi^{(t)}(h_k^b) > \eta\}$, $\forall x_k^b \in X_b$, $\forall b$ ▷ Update instance pseudo labels
9: $\quad$ end if
10: $\quad f_\psi^{(t+1)} \leftarrow \arg\min_{f_\psi} \mathcal{L}_{\mathrm{sup}}(f_\psi^{(t)})$ ▷ Optimize feature extractor via Eq. (2)
11: end for
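In code, the alternation in Algorithm 1 reduces to the following loop. The helpers `extract_features`, `train_aggregator`, `validation_auc`, and `finetune_supcon` are hypothetical placeholders for the steps described above, not functions from a released codebase.

```python
def its2clr(f, bags, bag_labels, T: int, eta: float = 0.3):
    """Skeleton of Algorithm 1; all helper functions are illustrative placeholders."""
    best_auc, pseudo_labels, aggregator = float("-inf"), None, None
    for t in range(T + 1):
        H = [extract_features(f, X_b) for X_b in bags]  # lines 3-4: embed each bag
        aggregator = train_aggregator(H, bag_labels)    # line 5: train g_phi on embeddings
        auc = validation_auc(aggregator)                # line 6: bag-level validation AUC
        if auc >= best_auc:                             # lines 7-9: refresh pseudo labels
            best_auc = auc                              #   only when bag prediction improves
            pseudo_labels = [(aggregator.instance_probs(h) > eta) for h in H]
        # line 10: supervised contrastive finetuning (Eq. 2); the epoch index drives
        # the warm-up and the self-paced schedule of Section 3.2.
        f = finetune_supcon(f, bags, pseudo_labels, epoch=t)
    return f, aggregator
```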
3.2 SAMPLING VIA SELF-PACED LEARNING
A key challenge in ItS2CLR is to improve the accuracy of instance-level pseudo labels without ground-truth labels. This is achieved by finetuning the feature extractor on a carefully-selected subset of instances. We select the anchor instance $x$ and the corresponding sets $S_x$ and $D_x$ (defined in Section 3.1 and Figure 2) building upon two key insights: (1) the negative bags only contain negative instances; (2) the probabilities used to build the pseudo labels are indicative of their quality, i.e. instances with higher predicted probabilities usually have more accurate pseudo labels (Zou et al., 2018a; Liu et al., 2020; 2022).
Let $X_{\mathrm{neg}}^-$ denote all instances within the negative bags. By definition of MIL, we can safely assume that all instances in $X_{\mathrm{neg}}^-$ are negative. In contrast, positive bags contain both positive and negative instances. Let $X_{\mathrm{pos}}^+$ and $X_{\mathrm{pos}}^-$ denote the sets of instances in positive bags with positive and negative pseudo labels respectively. During an initial warm-up lasting $T_{\mathrm{warm\text{-}up}}$ epochs, we sample anchor instances $x$ only from $X_{\mathrm{neg}}^-$ to ensure that they are indeed all negative. For each such instance, $S_x$ is built by sampling instances from $X_{\mathrm{neg}}^-$, and $D_x$ is built by sampling from $X_{\mathrm{pos}}^+$. After the warm-up phase, we start sampling anchor instances from $X_{\mathrm{pos}}^+$ and $X_{\mathrm{pos}}^-$. To ensure that these instances have accurate pseudo labels, we only consider the top-$r$% instances with the highest probabilities in each of these sets, which we call $X_{\mathrm{pos}}^+(r)$ and $X_{\mathrm{pos}}^-(r)$ respectively, as illustrated by Figure 5 (the ratio of positive-to-negative anchors is a fixed hyperparameter $p^+$). For each anchor $x$, the same-label set $S_x$ is sampled from $X_{\mathrm{pos}}^+(r)$ if $x$ is positive and from $X_{\mathrm{neg}}^- \cup X_{\mathrm{pos}}^-(r)$ if $x$ is negative. The different-label set $D_x$ is sampled from $X_{\mathrm{neg}}^- \cup X_{\mathrm{pos}}^-(r)$ if $x$ is positive, and from $X_{\mathrm{pos}}^+(r)$ if $x$ is negative. To exploit the improvement of the instance representations during training, we gradually increase $r$ to include more instances from positive bags, which can be interpreted as a self-paced easy-to-hard learning scheme (Kumar et al., 2010; Jiang et al., 2014; Zou et al., 2018b). Let $t$ and $T$ denote the current epoch and the total number of epochs respectively. For $T_{\mathrm{warm\text{-}up}} < t \leq T$, we set:
$$r := r_0 + \alpha_r \,(t - T_{\mathrm{warm\text{-}up}}), \quad \text{where } \alpha_r = (r_T - r_0)/(T - T_{\mathrm{warm\text{-}up}}), \qquad (3)$$
and r0 and rT are hyperparameters. Details on tuning these hyperparameters are provided in Appendix A.4. As demonstrated in the right panel of Figure 1 (see also Appendix B.1), this scheme indeed results in an improvement of the pseudo labels (and hence of the underlying representations).
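The schedule of Equation (3) and the top-$r$% selection are simple to implement. In the sketch below, `probs` is assumed to hold the confidence of each pseudo label within $X_{\mathrm{pos}}^+$ or $X_{\mathrm{pos}}^-$; the helper names are ours.

```python
import torch

def sampling_ratio(t: int, T: int, T_warmup: int, r0: float, rT: float) -> float:
    """Linear easy-to-hard schedule of Eq. (3); valid for T_warmup < t <= T."""
    alpha = (rT - r0) / (T - T_warmup)
    return r0 + alpha * (t - T_warmup)

def top_r_confident(indices: torch.Tensor, probs: torch.Tensor, r: float) -> torch.Tensor:
    """Keep the top-r fraction of instances with the most confident pseudo labels.

    indices: (N,) instance indices of X^+_pos (or X^-_pos);
    probs:   (N,) confidence of the corresponding pseudo label
             (p for positives, 1 - p for negatives).
    """
    k = max(1, int(r * probs.numel()))
    top = torch.topk(probs, k=k).indices
    return indices[top]
```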
4 EXPERIMENTS
We evaluate ItS2CLR on three MIL datasets described in Section 4.1. In Section 4.2 we show that ItS2CLR consistently outperforms approaches that use CSSL feature-extraction by a substantial margin on all three datasets for different choices of aggregators. In Section 4.3 we show that ItS2CLR outperforms alternative finetuning approaches based on cross-entropy loss minimization and end-to-end training across a wide range of settings where the prevalence of positive instances and bag size vary. In Section 4.4, we show that ItS2CLR is able to improve features obtained from a variety of pretraining schemes and network architectures.
4.1 DATASETS
We evaluate the proposed framework on three cancer diagnosis tasks. When training our models, we select the model with the highest bag-level performance on the validation set and report the performance on a held-out test set. More information about the datasets, experimental setup, and implementation is provided in Appendix A.
Camelyon16 (Bejnordi et al., 2017) is a popular benchmark for MIL (Li et al., 2021; Shao et al., 2021; Zhang et al., 2022) where the goal is to detect breast-cancer metastasis in lymph node sections. It consists of 400 whole-slide histopathology images. Each whole slide image (WSI) corresponds to a bag with a binary label indicating the presence of cancer. Each WSI is divided into an average of 625 tiles at 5x magnification, which correspond to individual instances. The dataset also contains pixel-wise annotations indicating the presence of cancer, which can be used to derive ground-truth instance-level labels.
TCGA-LUAD is a dataset from The Cancer Genome Atlas (TCGA) (tcg), a landmark cancer genomics program, where the associated task is to detect genetic mutations in cancer cells. Detecting these mutations is important to determine treatment options for LUAD (Coudray et al., 2018; Fu et al., 2020). The data contains 800 labeled tumorous frozen WSIs from lung adenocarcinoma (LUAD). Each WSI is divided into an average of 633 tiles at 10x magnification corresponding to unlabeled instances.
The Breast Ultrasound Dataset contains 28,914 B-mode breast ultrasound exams (Shen et al., 2021a). The associated task is to detect breast cancer. Each exam contains between 4 and 70 images (18.8 images per exam on average) corresponding to individual instances, but only a bag-level label indicating the presence of cancer is available per exam. Additionally, a subset of images is annotated, which makes it possible to also evaluate instance-level performance. This dataset is imbalanced at the bag level: only 5,593 of 28,914 exams contain cancer.
4.2 COMPARISON WITH CONTRASTIVE SELF-SUPERVISED LEARNING
In this section, we compare the performance of ItS2CLR to a baseline that performs feature extraction via the CSSL method SimCLR (Chen et al., 2020). This approach has achieved state-of-the-art performance on multiple WSI datasets (Li et al., 2021). To ensure a fair comparison, we initialize the feature extractor in ItS2CLR also using SimCLR. Table 2 shows that ItS2CLR clearly outperforms the SimCLR baseline on all three datasets. The performance improvement is particularly significant in Camelyon16, where it achieves a bag-level AUC of 0.943, outperforming the baseline by an absolute margin of 8.87%. ItS2CLR also outperforms an improved baseline reported by Li et al. (2021) with an AUC of 0.917, which uses higher resolution tiles than in our experiments (both 20x and 5x, as opposed to only 5x).
To perform a more exhaustive comparison of the features learned by SimCLR and ItS2CLR, we compare them in combination with several different popular MIL aggregators¹: max pooling, top-k pooling (Shen et al., 2021b), attention-MIL pooling (Ilse et al., 2018), DS-MIL pooling (Li et al., 2021), and transformer (Chen and Krishnan, 2022) (see Appendix C for a detailed description). Table 3 shows that the ItS2CLR features outperform the SimCLR features by a large margin for all aggregators, and are substantially more stable (the standard deviation of the AUC over multiple trials is lower).
We also evaluate instance-level accuracy, which can be used to interpret the bag-level prediction (for instance, by revealing tumor locations). In Table 4, we report the instance-level AUC, F1 score, and Dice score of both ItS2CLR and the SimCLR-based baseline on Camelyon16. ItS2CLR again exhibits stronger performance. Figure 3 shows an example of instance-level predictions in the form of a tumor localization map.
4.3 COMPARISON WITH ALTERNATIVE APPROACHES
In Tables 3, 4 and 5, we compare ItS2CLR with the approaches described below. Table 6 reports additional comparisons at different witness rates (the fraction of positive instances in positive bags), created synthetically by modifying the ratio between negative and positive instances in Camelyon16.
Finetuning with ground-truth instance labels provides an upper bound on the performance that can be achieved through feature improvement. ItS2CLR does not reach this gold standard, but substantially closes the gap.
¹To be clear, the ItS2CLR features are learned using the DS-MIL aggregator, as described in Section 3, and then frozen before combining them with the different aggregators.
Cross-entropy finetuning with pseudo labels, which we refer to as CE finetuning, consistently underperforms ItS2CLR when combined with different aggregators, except at high witness rates. We conjecture that this is due to the sensitivity of the cross-entropy loss to incorrect pseudo labels.
Ablated versions of ItS2CLR where we do not apply iterative updates of the pseudo labels (w/o iter.), or our self-paced learning scheme (w/o SPL) or both (w/o both) achieve substantially worse performance than the full approach. This indicates that both of these ingredients are critical in learning discriminative features.
End-to-end training is often computationally infeasible in medical applications. We compare ItS2CLR to end-to-end models on a downsampled version of Camelyon16 (see Appendix A.5) and on the breast ultrasound dataset. For a fair comparison, all end-to-end models use the same CSSL-pretrained weights and aggregator as used in ItS2CLR. Table 5 shows that ItS2CLR achieves better instance- and bag-level performance than end-to-end training. The analysis in Appendix B.3 shows that end-to-end training overfits quickly when the bag size is large.
4.4 IMPROVING DIFFERENT PRETRAINED REPRESENTATIONS
In this section, we show that ItS2CLR is capable of improving representations learned by different pretraining methods: supervised training on ImageNet and two non-contrastive SSL methods, BYOL (Grill et al., 2020a) and DINO (Caron et al., 2021). DINO is based on the ViT-S/16 architecture (Dosovitskiy et al., 2020), whereas the other methods are based on ResNet-18. Table 7a shows the result of initializing ItS2CLR with these features (as well as with SimCLR). The different initializations achieve varying degrees of pseudo label accuracy, but ItS2CLR improves the performance of all of them, demonstrating the robustness of the proposed framework.
5 RELATED WORK
Self-supervised learning Contrastive learning methods have become popular in unsupervised representation learning, achieving state-of-the-art self-supervised learning performance for natural images (Chen et al., 2020; He et al., 2020; Grill et al., 2020a; Caron et al., 2020; Zbontar et al., 2021; Caron et al., 2021). These methods have also shown promising results in medical imaging (Li et al., 2021; Azizi et al., 2021; Kaku et al., 2021; Zhu et al., 2022; Ciga et al., 2022). Recently, Li et al. (2021) applied SimCLR (Chen et al., 2020) to extract instance-level features for WSI MIL tasks and achieved state-of-the-art performance. However, Arora et al. (2019) point out the
potential issue of class collision in contrastive learning, i.e. that some negative pairs may actually have the same class. Prior works on alleviating class collision problem include reweighting the negative and positive terms with class ratio (Chuang et al., 2020), pulling closer additional similar pairs (Dwibedi et al., 2021), and avoiding pushing apart negatives that are inferred to belong to the same class based on a similarity metric (Zheng et al., 2021). In contrast, we propose a framework that leverages information from the bag-level labels to iteratively resolve the class collision problem.
There also exist non-contrastive alternatives that avoid introducing negative pairs (e.g. BYOL (Grill et al., 2020b), DINO (Caron et al., 2021), and SimSiam (Chen and He, 2021)). However, Wang et al. (2021) report that removing the negatives can make different object categories overlap and result in under-clustering, which limits the model’s ability to learn discriminative features.
Multiple instance learning A major part of MIL works focuses on improving the MIL aggregator. Traditionally, non-learnable pooling operators such as mean-pooling and max-pooling were commonly used in MIL (Pinheiro and Collobert, 2015; Feng and Zhou, 2017). More recent methods parameterize the aggregator using neural networks that employ attention mechanisms (Ilse et al., 2018; Li et al., 2021; Shao et al., 2021; Chen and Krishnan, 2022). This research direction is complementary to our proposed approach, which focuses on obtaining better instance representations, and can be combined with different types of aggregators (see Section 4.2).
6 CONCLUSION
In this work, we investigate how to improve feature-extraction in multiple-instance learning models. We identify a limitation of contrastive self-supervised learning: class collision hinders it from learning discriminative features in class-imbalanced MIL problems. To address this, we propose a novel framework that iteratively refines the features with pseudo labels estimated by the aggregator. Our method outperforms the existing state-of-the-art MIL methods on three medical datasets, and can be combined with different aggregators and pretrained feature extractors.
The proposed method does not outperform a cross-entropy-based baseline at very high witness rates, suggesting that it is mostly suitable for low witness rates scenarios (however, it is worth noting that this is the regime that is more commonly encountered in medical applications such as cancer diagnosis). In addition, there is a performance gap between our method and finetuning using instance-level ground truth, which suggests there is further room for improvement.
A EXPERIMENTS
A.1 DATASET
Camelyon16 Camelyon16 is a public dataset for detection of metastasis in breast cancer. This dataset consists of 271 training and 129 test whole slide images (WSI), which are further divided into roughly 3.2 million patches at 20× magnification and 0.25 million patches at 5× magnification. On average, at 20× and 5× magnification each slide contains approximately 8,000 and 625 patches respectively. Each WSI is paired with pixel-level annotations indicating the position of tumors (if any are present). We ignore the pixel-level annotations during training and consider only slide-level labels (i.e. the slide is considered positive if it contains any annotated tumor regions). As a result, positive bags contain mixtures of patches with tumors and patches with healthy tissue. Negative bags contain only patches with healthy tissue. The ratio between positive and negative patches in this dataset is highly imbalanced. Only a small fraction of patches (less than 10%) in the positive slides contains tumor.
TCGA-LUAD TCGA for Lung Adenocarcinoma (LUAD) is a subset of TCGA (The Cancer Genome Atlas), a landmark cancer genomics program. It consists of 800 tumorous frozen wholeslide histopathology images and the corresponding genetic mutation status. Each WSI is paired with binary labels indicating whether each gene is mutated or wild type. In this experiment, we build MIL models to detect four mutations - EGFR, KRAS, STK11, and TP53, which are sensitizing mutations that can impact treatment options in LUAD (Coudray et al., 2018; Fu et al., 2020). We split the data randomly into training, validation and test sets so that each patient will appear in only one of the subsets. After splitting the data, 477 images are in the training set, 96 images are in the validation set, and 227 images are in the test set.
Breast Ultrasound dataset The Breast Ultrasound Dataset includes 28,914 ultrasound exams (Shen et al., 2021a). An exam is labeled as cancer-positive if there is a pathology-confirmed malignant finding associated with this exam. In this dataset, 5593 exams are cancer-positive. On average, each exam contains approximately 18 images. Patients in the dataset were randomly divided into a training set (60%), a validation set (10%), and test set (30%). Each patient was included in only one of the three sets. We show 5 example breast ultrasound images in Figure 4.
A.2 IMPLEMENTATION DETAILS
All experiments were conducted on NVIDIA RTX8000 GPUs and NVIDIA V100 GPUs. For all models, we perform model selection during training based on bag-level AUC evaluated on the validation set.
Camelyon16 We follow the same preprocessing and pretraining steps as Li et al. (2021). To preprocess the slides, we cropped the slides into tiles at 5x magnification, filtered out tiles that do not contain enough tissue (average saturation < 30), and resized the images into a resolution of 224 x 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with default settings (nearest neighbor sampling).
We pretrain the feature extractor, ResNet18 (He et al., 2016), with SimCLR (Chen et al., 2020) for a maximum of 600 epochs. Each patch is represented by a 512-dimensional vector. We set the batch size at 512 and temperature at 0.5. We use SGD with the learning rate of 0.03, weight decay of 0.0001, and cosine annealing scheduler. We also train a MIL aggregator using the instance features extracted by the feature extractor in order to monitor the bag AUC of the downstream task on the validation set. During finetuning with Its2CLR, we fine-tune the feature extractor for a maximum of 50 epochs.
The batch size is set to 512, and the learning rate is set to $10^{-2}$. At the feature extractor training stage, we apply random data augmentation to each instance (a code sketch follows the list), including:
• Random (p = 0.8) color jittering: brightness, contrast, and saturation factors are uniformly sampled from [0.2, 1.8], hue factor is uniformly sampled from [−0.2, 0.2];
• Random grayscale (p = 0.2);
• Random Gaussian blur with kernel size of 0.06 times the size of the image;
• Random horizontal/vertical flipping with 0.5 probability.
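This list maps to the following torchvision sketch. The odd blur kernel size (13 ≈ 0.06 × 224) and applying the blur unconditionally are assumptions, since the text only states the kernel-size rule.

```python
from torchvision import transforms

# Augmentation pipeline for 224 x 224 tiles during feature-extractor training.
augment = transforms.Compose([
    transforms.RandomApply(
        [transforms.ColorJitter(brightness=0.8, contrast=0.8,
                                saturation=0.8, hue=0.2)],  # factors in [0.2, 1.8], hue in [-0.2, 0.2]
        p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=13),  # ~0.06 x 224, rounded to an odd integer
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
])
```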
When training the DS-MIL aggregator, we follow the settings in (Li et al., 2021). We use the Adam optimizer during training. Since each bag may contain a different number of instances, we follow (Li et al., 2021) and set the batch size to just one bag. We train each model for a maximum of 350 epochs. We use an initial learning rate of $2 \times 10^{-4}$, and use the StepLR scheduler to reduce the learning rate by 0.5 every 75 epochs. Details on the hyperparameters used for training the aggregator are in Appendix C.
TCGA-LUAD To preprocess the slides, we cropped them into tiles at 10x magnification, filtered out the background tiles that do not contain enough tissue (average saturation less than 30), and resized the images into the resolution of 224 x 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with nearest neighbor sampling. These tiles were color-normalized using the Vahadane method (Vahadane et al., 2016).
To train the feature extractor, we perform the same process as for Camelyon16.
We also use DS-MIL (Li et al., 2021) as the aggregator. When training the aggregator, we resample the ratio of positive and negative bags to keep the class ratio balanced. We train the aggregator for a maximum of 100 epochs using the Adam optimizer with the learning rate set to $2 \times 10^{-4}$ and reduce the learning rate by 0.5 every 50 epochs.
Breast Ultrasound We follow the same preprocessing steps as Shen et al. (2021a). All images were resized to 224 x 224 pixels using bilinear interpolation. We used ResNet18 (He et al., 2016) as the feature extractor and pretrained it using SimCLR (Chen et al., 2020) for 100 epochs. We adopt the same pretraining approach as for Camelyon16. We used the Instance Attention-MIL
as an aggregator (Ilse et al., 2018). Given a bag of images $x_1, \dots, x_k$ and a feature extractor $f$, the aggregator first computes instance-level predictions $\hat{y}_i$ for each image $x_i$. It then calculates an attention score $\alpha_i \in [0, 1]$ for each image $x_i$ using its feature vector $f(x_i)$. Lastly, the bag-level prediction is computed as the average instance prediction weighted by the attention scores: $\hat{y} = \sum_{i=1}^{k} \alpha_i \hat{y}_i$. To optimize the aggregator, we trained it using Adam with a learning rate of $10^{-3}$ for a maximum of 350 epochs.
A.3 COMPUTATIONAL COMPLEXITY
It takes 600 epochs (around 90 hours) to train SimCLR. Our finetuning only takes 50 extra epochs (around 10 hours), which is only 1/10 of the pretraining time. Updating pseudo labels is also efficient: updating instance features and training MIL aggregators only takes around 10 minutes, and we only do it every 5 epochs.
A.4 HYPERPARAMETERS OF TRAINING THE FEATURE EXTRACTOR IN ITS2CLR
Hyperparameter tuning The hyperparameters of the proposed method include: the initial threshold used for binarization of the prediction to produce pseudo labels, $\eta \in [0.1, 0.9]$; the proportion of positive queries sampled, $p^+ \in [0.05, 0.5]$; and the initial ratio $r_0 \in [0.01, 0.7]$ and the final ratio $r_T \in [0.2, 0.8]$ in the self-paced sampling scheme. For Camelyon16, we obtain the highest bag-level validation AUC using the following hyperparameters: $\eta = 0.3$, $p^+ = 0.2$, $r_0 = 0.2$ and $r_T = 0.8$. We use the feature extractor trained under this setting in Tables 2, 3 and 4. The complete list of hyperparameters in different experiments is reported in Table 8.
Sensitivity analysis We conduct the sensitivity analysis for each hyperparameter on Camelyon 16, and observe a robust performance over a range of hyperparameter values.
• Threshold η: The choice of η influences the instance-level pseudo labels. As shown in Figure 5, the outputs of the instance-level classifier are mostly close to 0 or 1, so the pseudo labels do not dramatically vary for a wide range of η. We conducted a small ablation experiment on the importance of η. Figure 6 (left) shows that ItS2CLR is quite robust to the value of η, except for some extreme values. If η is too small (e.g. 0.1), it can introduce a significant number of false positives. If η is too large (e.g. 0.8), it can mistakenly exclude some useful positive samples, causing a drop in the performance. In the main paper, since negative instances are more prevalent than positive instances, a threshold of 0.3 (less than 0.5) can increase the recall for the positive instances.
• Sampling ratio of query instances over pseudo labels: We use $p^+$ to denote the percentage of positive query instances used during the contrastive learning stage. Figure 6 (right) shows that it is desirable to choose a relatively small $p^+$. Since there are far fewer positive instances than negative instances, keeping the ratio of positive queries low can avoid repetitively sampling from a limited number of positive instances. Also, since the negative instance set $X_{\mathrm{neg}}^-$ is clean, there is more label noise among the positive pseudo labels.
• The initial rate r0 and final rate rT for the self-paced sampling scheduler: Figure 7 shows that ItS2CLR is also generally robust to the r0 and rT . However, extremely large initial rate r0 (high confidence in the pseudo labels) may introduce more noise during the training and hurt the performance. Conversely, extremely small rT (low confidence in the pseudo labels) may prohibit the model from using more data, also hurting performance.
• Sampling during warm-up: During the warm-up phase, we sample query instances from $X_{\mathrm{neg}}^-$. An alternative choice is to sample the query instances from $X_{\mathrm{pos}}^+$ and the corresponding set $D_x$ from $X_{\mathrm{neg}}^-$. However, our experiments show that the resulting bag-level AUC drops to 90.91% under this setting, which is significantly lower than the 94.25% achieved by the proposed method. This comparison demonstrates the importance of using clean negative instances as query images during warm-up.
A.5 EXPERIMENTS ON SYNTHETIC VERSIONS OF CAMELYON16
Simulation of witness rates (WR)
Since the ground truth instance-level labels are available for Camelyon16, we can conduct experiments on synthetic versions of the dataset to study the impact of the prevalence of positive instances
on the performance of the proposed approach and the baselines. Section 2 describes how the performance of CSSL is affected by low witness rates (fraction of positive instances in each positive bag). To study the robustness of the proposed framework to the witness rate in the data, we increase or decrease the witness rate of Camelyon16 by randomly dropping negative or positive instances within each bag respectively. The percentage of retained instances and the resulting witness rates are reported in Table 6.
Downsampled version of Camelyon16 for end-to-end training
In order to enable end-to-end training we downsample each bag in Camelyon16 to around 500 instances so that it fits in the memory of a GPU. To achieve this, we divide the large bag into smaller bags while keeping the witness rate of each sub-bag at a similar level as the original bag. In more detail, if the bag size is smaller or equal to 500, we keep the original bag. If the bag size is greater than 500, we divide the bag into several sub-bags such that each sub-bag contains approximately 500 instances. After that, we randomly divide the negative instances within the original bag into desired number of sub-bags evenly. For the positive bag, we randomly partition the positive instances within the original bag into desired number of sub-bags evenly as well. If the number of positive instances within that positive bag is smaller than the desired number of sub-bags, we correct the number of sub-bags to be the number of positive instances. We then combine the positive instances and the negative instances to form sub-bags. This ensures that the bag-level label is correct and the witness rate for each positive sub-bag remains similar to the original bag.
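A numpy sketch of this splitting procedure is given below; the rounding rule for the number of sub-bags and the shuffling details are assumptions consistent with the description above.

```python
import numpy as np

def split_bag(instance_ids: np.ndarray, is_positive: np.ndarray,
              max_size: int = 500, seed: int = 0) -> list:
    """Split one bag into sub-bags of roughly max_size instances, keeping
    the witness rate of each positive sub-bag close to the original bag."""
    rng = np.random.default_rng(seed)
    if len(instance_ids) <= max_size:
        return [instance_ids]
    n_sub = max(1, int(round(len(instance_ids) / max_size)))
    pos = rng.permutation(instance_ids[is_positive])
    neg = rng.permutation(instance_ids[~is_positive])
    if 0 < len(pos) < n_sub:
        n_sub = len(pos)  # correction: no positive sub-bag may end up without positives
    pos_parts = np.array_split(pos, n_sub)  # even random partition of the positives
    neg_parts = np.array_split(neg, n_sub)  # even random partition of the negatives
    return [np.concatenate([p, n]) for p, n in zip(pos_parts, neg_parts)]
```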
A.6 DESCRIPTION OF THE ABLATION STUDY
Details for CE finetuning with/without Iterative Updating
• CE + w/o iterative updating: we use the same set of the initial pseudo labels as our ItS2CLR framework. All the instances within the negative bags are labeled as negative, for instances in positive bags, we assign the pseudo label according to the instance prediction by the aggregator. In the following finetuning epochs of the aggregator, the pseudo label is kept fixed.
• CE + iterative updating: based on CE + w/o iterative updating, the pseudo label is updated every few epochs, and is in turn used to guide the finetuning.
Details for ItS2CLR with/without SPL.
• ItS2CLR without iterative update: We keep everything the same as the full Its2CLR procedure (including the SPL strategy), but we do not apply steps 7, 8 and 9 in Algorithm 1. As a result, the pseudo labels are fixed to always equal the initial set of pseudo labels.
• ItS2CLR without SPL: We keep everything the same as the full Its2CLR procedure (including iterative updating), but modify step 10 in Algorithm 1. We do not utilize the pseudo label to train the model in a self-paced learning way as in Section 3.2. We utilize all the pseudo-labeled data from the beginning of the finetuning.
B ADDITIONAL RESULTS
We present here additional results to supplement those presented in Section 4.
B.1 LEARNING CURVES
F1-Score plot corresponding to Figure 1: In Figure 8, we show the max F1 score curve corresponding to the right side of Figure 1. This plot confirms the importance of self-paced learning and iterative updating in ItS2CLR.
Instance pseudo label AUC comparison with cross-entropy finetuning: Figure 9 compares ItS2CLR with an alternative approach that finetunes the feature extractor using cross-entropy (CE) loss on the Camelyon16 dataset. Without iterative updating, CE finetuning rapidly overfits to the noise in the pseudo labels. Iterative updating prevents this to some extent, but does not match the performance of ItS2CLR, which produces increasingly accurate pseudo labels as the iterations proceed.
B.2 INSTANCE-LEVEL EVALUATION
In order to evaluate instance-level performance, we report classification metrics including AUC, F1-score, AUPRC and Dice score for localization.
The Dice score is defined as follows:

$$\mathrm{Dice} = \frac{2 \sum_i y_i p_i}{\sum_i y_i + \sum_i p_i}, \qquad (4)$$
where $y_i$ and $p_i$ are the ground truth and predicted probability for the $i$-th sample. It penalizes predictions with low confidence. The predicted probability is computed from the output $s_i$ of the MIL model via linear scaling:

$$p_i = \sigma(a s_i + b), \qquad (5)$$

where $a \in [-5, 5]$ and $b \in [0.1, 10]$ are chosen to maximize the Dice score on the validation set.

Max pooling aggregator: In Table 4, we show that our model achieves better weakly supervised localization performance compared to other methods when DS-MIL is used as the aggregator. In Table 9, we show that the same conclusion holds for an aggregator based on max-pooling.
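A sketch of Equations (4)-(5): `dice_score` computes the soft Dice on predicted probabilities, and `calibrate` grid-searches $(a, b)$ on validation data; the grid resolution is an assumption, since only the search intervals are stated.

```python
import numpy as np

def dice_score(y: np.ndarray, p: np.ndarray) -> float:
    """Soft Dice of Eq. (4): y holds ground-truth labels, p predicted probabilities."""
    return 2.0 * (y * p).sum() / (y.sum() + p.sum())

def calibrate(scores: np.ndarray, y: np.ndarray) -> tuple:
    """Grid search for a in [-5, 5], b in [0.1, 10] maximizing the Dice (Eq. 5)."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    best = (-np.inf, None, None)
    for a in np.linspace(-5, 5, 41):
        for b in np.linspace(0.1, 10, 100):
            d = dice_score(y, sigmoid(a * scores + b))
            if d > best[0]:
                best = (d, a, b)
    return best  # (best Dice on the validation set, a, b)
```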
Linear evaluation: In Table 10, we report results obtained by training a logistic regression model using the features obtained from the same approaches in Table 4, following a standard linear evaluation pipeline in representation learning (Chen et al., 2020). ItS2CLR again achieves the best instance-level performance. We also produce bag-level predictions using the maximum output of the linear classifier for each bag, which again showcases that superior instance-level performance results in superior bag-level classification.
B.3 COMPARISON WITH END-TO-END TRAINING
In this section we provide additional results to complement Table 5, where ItS2CLR is compared to end-to-end models. The end-to-end training is conducted with the same aggregators for each dataset as described in Section 4 and Appendix A.2.
Camelyon16 Figure 10 shows that an end-to-end model trained on the downsampled version of Camelyon16 described in Section A.5 rapidly overfits when trained from scratch and from SimCLR-pretrained weights. The two-stage model, on the other hand, is less prone to overfitting. Table 11 shows that the two-stage learning pipeline outperforms end-to-end training, and is in turn outperformed by ItS2CLR.
Breast Ultrasound dataset Table 12 shows that for the breast-ultrasound dataset end-to-end training outperforms the SimCLR+Aggregator baseline, but is outperformed by ItS2CLR.
B.4 TUMOR LOCALIZATION MAPS
Figure 11 provides additional tumor localization maps.
C MIL AGGREGATORS
C.1 FORMULATION OF MIL AGGREGATORS
In this section, we describe the different MIL aggregators benchmarked in Section 4.2 and Table 3.
Let $\mathcal{B}$ denote a collection of sets of feature vectors in $\mathbb{R}^d$. The bags of extracted features in the dataset are denoted by $\{H_b\}_{b=1}^B \subset \mathcal{B}$. An aggregator is defined as a function $g : \mathcal{B} \to [0, 1]$ mapping bags of extracted features to a score in $[0, 1]$.
There exist two main approaches in MIL:
1. The instance-level approach: using a logistic classifier on each instance, then aggregating instance predictions over a bag (e.g. max-pooling, top k-pooling).
2. The embedding-level approach: aggregating the instance embeddings, then obtaining a bag-level prediction via a bag-level classifier (e.g. attention-based aggregator, Transformer).
We denote the embeddings of the instances within a bag by $H = \{h_k\}_{k=1}^K$, where $K$ is the number of instances.
Max-pooling obtains bag-level predictions by taking the maximum of the instance-level predictions produced by a logistic instance classifier $\phi$, that is

$$g_\phi(H) = \max\{\phi(h_1), \dots, \phi(h_K)\}. \qquad (6)$$
Top-k pooling Shen et al. (2021b) produces bag-level predictions using the mean of the top-M ranked instance-level predictions produced by a logistic instance classifier $\phi$, where M is a hyperparameter.
Let $\mathrm{top}_M(\phi, H)$ denote the indices of the elements in $H$ for which $\phi$ produces the highest $M$ scores. Then

$$g_\phi(H) = \frac{1}{M} \sum_{k \in \mathrm{top}_M(\phi, H)} \phi(h_k). \qquad (7)$$
Attention-based MIL Ilse et al. (2018) aggregates instance embeddings using a sum weighted by attention weights. Then the bag-level estimation is computed from the aggregated embeddings by a logistic bag-level classifier φ:
$$g_\varphi(H) = \varphi\!\left( \sum_{k=1}^{K} a_k h_k \right), \qquad (8)$$
where $a_k$ is the attention weight on instance $k$, computed by:
$$a_k = \frac{\exp\!\left(w^T \tanh(V h_k^T)\right)}{\sum_{j=1}^{K} \exp\!\left(w^T \tanh(V h_j^T)\right)}, \qquad (9)$$

where $w \in \mathbb{R}^{p \times 1}$ and $V \in \mathbb{R}^{p \times d}$ are learnable parameters and $p$ is the dimension of the hidden layer.

DS-MIL combines instance-level and embedding-level aggregation; we refer to DS-MIL (Li et al., 2021) for more details on this approach.
Transformer Chen and Krishnan (2022) aggregation uses an L-layer Transformer to process the set of instance features $H$. The initial set $H^{(0)}$ is set equal to $H$, which is then processed by the Transformer as follows:
$$H'^{(l)} = \mathrm{MSA}\!\left(H^{(l-1)}\right) + H^{(l-1)},$$
$$H^{(l)} = \mathrm{MLP}\!\left(H'^{(l)}\right) + H'^{(l)}, \qquad (10)$$
for $l = 1, \dots, L$, where MSA is multi-head self-attention and MLP is a multi-layer perceptron. The processed vectors $H^{(L)}$ are then fed to Attention-based MIL (Ilse et al., 2018) to obtain bag-level predictions:
$$g_\varphi(H^{(L)}) = \varphi\!\left( \sum_{k=1}^{K} a_k h_k^{(L)} \right), \qquad (11)$$
where $a_k$ is the attention weight on instance $k$, computed by:
$$a_k = \frac{\exp\!\left(w^T \tanh\!\big(V (h_k^{(L)})^T\big)\right)}{\sum_{j=1}^{K} \exp\!\left(w^T \tanh\!\big(V (h_j^{(L)})^T\big)\right)}, \qquad (12)$$

where $w \in \mathbb{R}^{p \times 1}$ and $V \in \mathbb{R}^{p \times d}$ are learnable parameters and $p$ is the dimension of the hidden layer.
C.2 IMPLEMENTATION DETAILS
Top-k pooling. We select the ratio in Top-k pooling from the set {0.1%, 1%, 3%, 10%, 20%}.

DS-MIL. The weight between the two cross-entropy loss functions in DS-MIL is selected from the interval [0.1, 5] based on the best validation performance.
Attention-based MIL. The hidden dimension of the attention module to compute the attention weight is set equal to the dimension of the input feature vector (512).
Transformer. We add a lightweight two-layer Transformer to process instance features. We did not observe an increase in performance with additional blocks. | 1. What is the focus and contribution of the paper regarding the MIL problem?
2. What are the strengths of the proposed approach, particularly in combining supervised-contrastive learning and iterative pseudo-labeling?
3. What are the weaknesses of the paper, especially regarding the introduction of new hyperparameters?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This article tackles the MIL problem and in particular remarks that in medical data it is common to have a low ratio of positive instances in positive bags. In these cases, learning good features is hard since the positive instances form an outlier class.
A new MIL method is proposed to learn a good feature extractor in this rare-positive setting. It combines supervised contrastive learning and iterative pseudo-labelling of instances with a "self-paced" method to filter noisy pseudo-labels at the start. The aggregation block of the MIL is taken as-is from a SotA method (DS-MIL), but the method can use any aggregation method that produces instance scores.
The experiments show good results on 3 datasets and several ablations studies were realized to understand/prove the inner working of the method.
Strengths And Weaknesses
Strengths:
tackles a real problem (MIL with a high-positive ratio is rare and relatively easy)
while pseudo-labelling and supervised contrastive learning are existing methods, they do not work together out of the box in this setting (as shown in Table 3, for example). The self-paced iteration method proposed here is critical to get good results in this setting.
Good results on 3 datasets
Good ablations studies (most questions I had while reading the article were answered in one experiment)
Weaknesses:
A few too many new hyperparameters (4). Some of them seem to have a large working range, but these results are only available in the appendix. The full figures might not fit in the main article, but one or two sentences explaining the results in the main article would be nice.
Typos/small modifications:
Figure 5 is mentioned in the main article (p5) but available only in the appendix; please add an "in appendix" comment.
p15, 4th paragraph: "contain different a number"
Clarity, Quality, Novelty And Reproducibility
The article is easy to understand
The proposed method is a combination of 2 method but with specific modifications adapted to the MIL settings that makes it interesting
All the details to reproduce the results are available in the appendix |
ICLR | Title
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Abstract
Learning representations for individual instances when only bag-level labels are available is a fundamental challenge in multiple instance learning (MIL). Recent works have shown promising results using contrastive self-supervised learning (CSSL), which learns to push apart representations corresponding to two different randomly-selected instances. Unfortunately, in real-world applications such as medical image classification, there is often class imbalance, so randomly-selected instances mostly belong to the same majority class, which precludes CSSL from learning inter-class differences. To address this issue, we propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), which improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels. The framework employs a novel self-paced sampling strategy to ensure the accuracy of pseudo labels. We evaluate ItS2CLR on three medical datasets, showing that it improves the quality of instance-level pseudo labels and representations, and outperforms existing MIL methods in terms of both bag and instance level accuracy.
1 INTRODUCTION
The goal of multiple instance learning (MIL) is to perform classification on data that is arranged in bags of instances. Each instance is either positive or negative, but these instance-level labels are not available during training; only bag-level labels are available. A bag is labeled as positive if any of the instances in it are positive, and negative otherwise. An important application of MIL is cancer diagnosis from histopathology slides. Each slide is divided into hundreds or thousands of tiles but typically only slide-level labels are available (Courtiol et al., 2018; Campanella et al., 2019; Li et al., 2021; Chen and Krishnan, 2022; Zhang et al., 2022; Lu et al., 2021).
Histopathology slides are typically very large, in the order of gigapixels (the resolution of a typical slide can be as high as $10^5 \times 10^5$ pixels), so end-to-end training of deep neural networks is typically infeasible due to memory limitations of GPU hardware. Consequently, state-of-the-art approaches (Campanella et al., 2019; Li et al., 2021; Zhang et al., 2022; Lu et al., 2021; Shao et al., 2021) utilize a two-stage learning pipeline: (1) a feature-extraction stage where each instance is mapped to a representation which summarizes its content, and (2) an aggregation stage where the representations extracted from all instances in a bag are combined to produce a bag-level prediction (Figure 1). Notably, our results indicate that even in rare settings where end-to-end training is possible, this pipeline is still superior (see Section 4.3).
In this work, we focus on a fundamental challenge in MIL: how to train the feature extractor. Currently, there are three main strategies to perform feature-extraction, which have significant shortcomings. (1) Pretraining on a large natural image dataset such as ImageNet (Shao et al., 2021; Lu et al., 2021) is problematic for medical applications because features learned from natural images may generalize poorly to other domains (Lu et al., 2020). (2) Supervised training using bag-level labels as instance-level labels is effective if positive bags contain mostly positive instances (Lerousseau et al., 2020; Xu et al., 2019; Chikontwe et al., 2020), but in many medical datasets this is not the case (Bejnordi et al., 2017; Li et al., 2021). (3) Contrastive self-supervised learning (CSSL) outperforms prior methods (Li et al., 2021; Ciga et al., 2022), but is not as effective in settings with heavy class imbalance, which are of crucial importance in medicine. CSSL operates by pushing apart the representations of different randomly selected instances. When positive bags contain
mostly negative instances, CSSL training ends up pushing apart negative instances from each other, which precludes it from learning features that distinguish positive samples from the negative ones (Figure 2). We discuss this finding in Section 2.
Our goal is to address the shortcomings of current feature-extraction methods. We build upon several key insights. First, it is possible to extract instance-level pseudo labels from trained MIL models, which are more accurate than assigning the bag-level labels to all instances within a positive bag. Second, we can use the pseudo labels to finetune the feature extractor, improving the instance-level representations. Third, these improved representations result in improved bag-level classification and more accurate instance-level pseudo labels. These observations are utilized in our proposed framework, Iterative Self-Paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), as illustrated in Figure 1. After initializing the features with CSSL, we iteratively improve them via supervised contrastive learning (Khosla et al., 2020) using pseudo labels inferred by the aggregator. This feature refinement utilizes pseudo labels sampled according to a novel self-paced strategy, which ensures that they are sufficiently accurate (see Section 3.2). In summary, our contributions are the following:
1. We propose ItS2CLR – a novel MIL framework where instance features are iteratively improved using pseudo labels extracted from the MIL aggregator. The framework combines supervised contrastive learning with a self-paced sampling scheme to ensure that pseudo labels are accurate.
2. We demonstrate that the proposed approach outperforms existing MIL methods in terms of bag- and instance-level accuracy on three real-world medical datasets relevant to cancer diagnosis: two histopathology datasets and a breast ultrasound dataset. It also outperforms alternative finetuning methods, such as instance-level cross-entropy minimization and end-to-end training.
3. In a series of controlled experiments, we show that ItS2CLR is effective when applied to different feature-extraction architectures and when combined with different aggregators.
2 CSSL MAY NOT LEARN DISCRIMINATIVE FEATURES IN MIL
Recent MIL approaches use contrastive self-supervised learning (CSSL) to train the feature extractor (Li et al., 2021; Saillard et al., 2021; Rymarczyk et al., 2021). In this section, we show that CSSL has a crucial limitation in realistic MIL settings, which precludes it from learning discriminative features. CSSL aims to learn a representation space where samples from the same class are close to each other, and samples from different classes are far from each other, without access to class labels. This is achieved by minimizing the InfoNCE loss (Oord et al., 2018).
$$\mathcal{L}_{\mathrm{CSSL}} = \mathbb{E}_{x,\, x^{\mathrm{aug}},\, \{x_i^{\mathrm{diff}}\}_{i=1}^{n}} \left[ -\log \frac{\exp\left(f_\psi(x) \cdot f_\psi(x^{\mathrm{aug}})/\tau\right)}{\exp\left(f_\psi(x) \cdot f_\psi(x^{\mathrm{aug}})/\tau\right) + \sum_{i=1}^{n} \exp\left(f_\psi(x) \cdot f_\psi(x_i^{\mathrm{diff}})/\tau\right)} \right], \tag{1}$$
where $f_\psi = \psi \circ f$, in which $f : \mathbb{R}^m \to \mathbb{R}^d$ is the feature extractor mapping the input data to a representation, $\psi : \mathbb{R}^d \to \mathbb{R}^{d'}$ is a projection head with a feed-forward network and ℓ2 normalization, and τ is a temperature hyperparameter. The expectation is taken over samples $x \in \mathbb{R}^m$ drawn uniformly from the training set. Minimizing the loss brings the representation of an instance x closer to the representation of its random augmentation, $x^{\mathrm{aug}}$, and pushes the representation of x away from the representations of n other examples $\{x_i^{\mathrm{diff}}\}_{i=1}^n$ in the training set. A key assumption in CSSL is that x belongs to a different class than most of the randomly-sampled examples $x_1^{\mathrm{diff}}, \dots, x_n^{\mathrm{diff}}$. This usually holds in standard classification datasets with many classes such as ImageNet (Deng et al., 2009), but not in MIL tasks relevant to medical diagnosis, where a majority of instances are negative (e.g. 95% in Camelyon16). Hence, most terms in the sum $\sum_{i=1}^n \exp\left(f_\psi(x) \cdot f_\psi(x_i^{\mathrm{diff}})/\tau\right)$ in the loss in Equation 1 correspond to pairs of examples $(x, x_i^{\mathrm{diff}})$ that both belong to the negative class. Therefore, minimizing the loss mostly pushes apart the representations of negative instances, as illustrated in the top panel of Figure 2. This is an example of class collision (Arora et al., 2019; Chuang et al., 2020), a general problem in CSSL, which has been shown to impair performance on downstream tasks (Ash et al., 2021; Zheng et al., 2021).
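For concreteness, a minimal PyTorch sketch of the loss in Equation 1 is given below; the tensor shapes and variable names are our own, and the projections are assumed to be ℓ2-normalized upstream.

```python
# A minimal sketch of the InfoNCE loss in Equation 1, assuming pre-computed,
# L2-normalized projections; names (z, z_aug, z_diff) are ours, not the paper's.
import torch
import torch.nn.functional as F

def info_nce_loss(z: torch.Tensor, z_aug: torch.Tensor,
                  z_diff: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z, z_aug: (B, d) projections of anchors and their augmentations.
    z_diff: (B, n, d) projections of n other randomly drawn examples per anchor."""
    pos = (z * z_aug).sum(dim=-1) / tau                   # (B,) similarity to augmentation
    neg = torch.einsum('bd,bnd->bn', z, z_diff) / tau     # (B, n) similarity to other examples
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)    # positive pair sits at index 0
    labels = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)                # equals -log softmax of the positive
```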
Class collision makes CSSL learn representations that are not discriminative between classes. In order to study this phenomenon, we report the average inter-class distances and intra-class deviations for representations learned by CSSL on Camelyon16 in Table 1. The inter-class distance reflects how far the instance representations from different classes are apart; the intra-class distance reflects the variation of instance representations within each class. As predicted, the intra-class deviation corresponding to the representations of negative instances learned by CSSL is large. Representations learned by ItS2CLR have larger inter-class distance (more separated classes) and smaller intra-class deviation (less variance among instances belonging to the same class) than those learned by CSSL. This suggests that the features learned by ItS2CLR are more discriminative, which is confirmed by the results in Section 4.
Note that using bag-level labels does not solve the problem of class collision. When x is negative, even if we select $\{x_i^{\mathrm{diff}}\}_{i=1}^n$ from the positive bags in Equation 1, most of the selected instances will still be negative. Overcoming the class-collision problem requires explicitly detecting positive instances. This motivates our proposed framework, described in the following section.
3 MIL VIA ITERATIVE SELF-PACED SUPERVISED CONTRASTIVE LEARNING
Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR) addresses the limitation of contrastive self-supervised learning (CSSL) described in Section 2. ItS2CLR relies on latent variables indicating whether each instance is positive or negative, which we call instance-level pseudo labels. To estimate pseudo labels, we use instance-level probabilities obtained from the MIL aggregator (we use the aggregator from DS-MIL (Li et al., 2021), but our framework is compatible with any aggregator that generates instance-level probabilities). The pseudo labels are obtained by binarizing the probabilities according to a threshold η ∈ (0, 1), which is a hyperparameter. ItS2CLR uses the pseudo labels to finetune the feature extractor (initialized using CSSL). In the spirit of iterative self-training techniques (Zhong et al., 2019; Wei et al., 2020; Liu et al., 2022), we alternate between refining the feature extractor, re-computing the pseudo labels, and training the aggregator, as described in Algorithm 1. A key challenge is that the pseudo labels are not completely accurate, especially at the beginning of the training process. To mitigate the impact of incorrect pseudo labels, we apply a contrastive loss to finetune the feature extractor (see Section 3.1), where the contrastive pairs are selected according to a novel self-paced learning scheme (see Section 3.2). The right panel of Figure 1 shows that our approach iteratively improves the pseudo labels on the Camelyon16 dataset (Bejnordi et al., 2017). This finetuning only requires a modest increment in computational time (see Appendix A.3).
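For illustration, a minimal sketch of the pseudo-label extraction step is given below; `phi` stands for the aggregator's instance classifier and is a placeholder, not a specific library API.

```python
# A hedged sketch of pseudo-label extraction: instance probabilities from any
# aggregator exposing an instance classifier are binarized with threshold eta.
import torch

@torch.no_grad()
def compute_pseudo_labels(phi, features: torch.Tensor, eta: float = 0.3) -> torch.Tensor:
    """features: (num_instances, d) embeddings of the instances in one bag."""
    probs = torch.sigmoid(phi(features)).squeeze(-1)  # instance-level probabilities
    return (probs > eta).long()                       # binary pseudo labels
```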
3.1 SUPERVISED CONTRASTIVE LEARNING WITH PSEUDO LABELS
To address the class collision problem described in Section 2, we leverage supervised contrastive learning (Pantazis et al., 2021; Dwibedi et al., 2021; Khosla et al., 2020) combined with the pseudo labels estimated by the aggregator. The goal is to learn discriminative representations by pulling together the representations corresponding to instances of the same class, and pushing apart those belonging to instances of different classes. For each anchor instance x selected for contrastive learning, we collect a set $S_x$ of instances believed to have the same label as x, and a set $D_x$ of instances believed to have a different label from x. These sets are depicted in the bottom panel of Figure 2. The supervised contrastive loss corresponding to x is defined as:
$$\mathcal{L}_{\sup}(x) = \frac{1}{|S_x|} \sum_{x_s \in S_x} -\log \frac{\exp\left(f_\psi(x) \cdot f_\psi(x_s)/\tau\right)}{\sum_{x_s \in S_x} \exp\left(f_\psi(x) \cdot f_\psi(x_s)/\tau\right) + \sum_{x_d \in D_x} \exp\left(f_\psi(x) \cdot f_\psi(x_d)/\tau\right)}. \tag{2}$$

In Section 3.2, we explain how to select x, $S_x$ and $D_x$ to ensure that the selected samples have high-quality pseudo labels.
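For reference, a minimal PyTorch sketch of this loss for a single anchor is given below; it assumes ℓ2-normalized projection vectors, and all variable names are our own.

```python
# A minimal sketch of the supervised contrastive loss in Equation 2 for one
# anchor x, its same-label set S_x and its different-label set D_x.
import torch

def sup_con_loss(z_x: torch.Tensor, z_s: torch.Tensor,
                 z_d: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z_x: (d,) anchor projection; z_s: (|S_x|, d); z_d: (|D_x|, d)."""
    sim_s = z_s @ z_x / tau                                # (|S_x|,)
    sim_d = z_d @ z_x / tau                                # (|D_x|,)
    denom = torch.logsumexp(torch.cat([sim_s, sim_d]), dim=0)
    return (denom - sim_s).mean()                          # mean over x_s of -log softmax
```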
A tempting alternative to supervised contrastive learning is to train the feature extractor on the pseudo labels using a standard cross-entropy loss. However, in Section 4.3 we show that this leads to substantially worse performance in the downstream MIL classification task due to memorization of incorrect pseudo labels. In contrast, self-paced sampling allows us to choose contrastive pairs with the most confident, even certain, pseudo labels for supervised contrastive learning.
Algorithm 1 Iterative Self-Paced Supervised Contrastive Learning (ItS2CLR)
Require: Feature extractor f, projection head ψ, MIL aggregator $g_\phi$, where ϕ is an instance classifier
Require: Bags $\{X_b\}_{b=1}^B$, bag-level labels $\{Y_b\}_{b=1}^B$
1: $f^{(0)} \leftarrow f_{\mathrm{SSL}}$ ▷ Initialize f with CSSL-pretrained weights
2: for t = 0 to T do
3:   $h_k^b \leftarrow f^{(t)}(x_k^b), \;\forall x_k^b \in X_b, \forall b$ ▷ Extract instance representations
4:   $H_b \leftarrow \{h_k^b\}_{k=1}^{K_b}, \;\forall b$ ▷ Group instance embeddings into bags
5:   $g_\phi^{(t)} \leftarrow$ Train with $\{H_b\}_{b=1}^B$ and $\{Y_b\}_{b=1}^B$ ▷ Train the aggregator
6:   $\mathrm{AUC}_{\mathrm{val}}^{(t)} \leftarrow$ bag-level AUC on the validation set
7:   if $\mathrm{AUC}_{\mathrm{val}}^{(t)} \ge \max_{t' \le t} \{\mathrm{AUC}_{\mathrm{val}}^{(t')}\}$ then ▷ If bag prediction improves
8:     $\hat{y}_k^b \leftarrow \mathbb{1}\{\phi^{(t)}(h_k^b) > \eta\}, \;\forall x_k^b \in X_b, \forall b$ ▷ Update instance pseudo labels
9:   end if
10:  $f_\psi^{(t+1)} \leftarrow \arg\min_{f_\psi} \mathcal{L}_{\sup}(f_\psi^{(t)})$ ▷ Optimize feature extractor via Eq. (2)
11: end for
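A schematic Python rendering of Algorithm 1 follows; the helper callables are hypothetical placeholders for the steps described in the text, not functions of any specific library.

```python
# A schematic sketch of the ItS2CLR outer loop (Algorithm 1). The callables
# extract_features, train_aggregator, bag_auc, and finetune_contrastive are
# hypothetical stand-ins for the steps described in Sections 3.1 and 3.2.
def its2clr(f, bags, bag_labels, val_bags, val_labels,
            extract_features, train_aggregator, bag_auc, finetune_contrastive,
            eta=0.3, num_rounds=10):
    best_auc, pseudo, aggregator = float('-inf'), None, None
    for t in range(num_rounds):
        feats = [extract_features(f, bag) for bag in bags]       # lines 3-4
        aggregator = train_aggregator(feats, bag_labels)         # line 5
        auc = bag_auc(aggregator, f, val_bags, val_labels)       # line 6
        if auc >= best_auc:                                      # line 7: keep only
            best_auc = auc                                       # improving pseudo labels
            pseudo = [aggregator.instance_probs(h) > eta for h in feats]  # line 8
        f = finetune_contrastive(f, bags, pseudo)                # line 10, Eq. (2)
    return f, aggregator
```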
3.2 SAMPLING VIA SELF-PACED LEARNING
A key challenge in ItS2CLR is to improve the accuracy of instance-level pseudo labels without ground-truth labels. This is achieved by finetuning the feature extractor on a carefully-selected subset of instances. We select the anchor instance x and the corresponding sets $S_x$ and $D_x$ (defined in Section 3.1 and Figure 2) building upon two key insights: (1) The negative bags only contain negative instances. (2) The probabilities used to build the pseudo labels are indicative of their quality; instances with higher predicted probabilities usually have more accurate pseudo labels (Zou et al., 2018a; Liu et al., 2020; 2022).
Let $\mathcal{X}^-_{\mathrm{neg}}$ denote all instances within the negative bags. By definition of MIL, we can safely assume that all instances in $\mathcal{X}^-_{\mathrm{neg}}$ are negative. In contrast, positive bags contain both positive and negative instances. Let $\mathcal{X}^+_{\mathrm{pos}}$ and $\mathcal{X}^-_{\mathrm{pos}}$ denote the sets of instances in positive bags with positive and negative pseudo labels respectively. During an initial warm-up lasting $T_{\mathrm{warm\text{-}up}}$ epochs, we sample anchor instances x only from $\mathcal{X}^-_{\mathrm{neg}}$ to ensure that they are indeed all negative. For each such instance, $S_x$ is built by sampling instances from $\mathcal{X}^-_{\mathrm{neg}}$, and $D_x$ is built by sampling from $\mathcal{X}^+_{\mathrm{pos}}$. After the warm-up phase, we start sampling anchor instances from $\mathcal{X}^+_{\mathrm{pos}}$ and $\mathcal{X}^-_{\mathrm{pos}}$. To ensure that these instances have accurate pseudo labels, we only consider the top-r% instances with the highest probabilities in each of these sets, which we call $\mathcal{X}^+_{\mathrm{pos}}(r)$ and $\mathcal{X}^-_{\mathrm{pos}}(r)$ respectively, as illustrated by Figure 5 (the ratio of positive-to-negative anchors is a fixed hyperparameter $p_+$). For each anchor x, the same-label set $S_x$ is sampled from $\mathcal{X}^+_{\mathrm{pos}}(r)$ if x is positive, and from $\mathcal{X}^-_{\mathrm{neg}} \cup \mathcal{X}^-_{\mathrm{pos}}(r)$ if x is negative. The different-label set $D_x$ is sampled from $\mathcal{X}^-_{\mathrm{neg}} \cup \mathcal{X}^-_{\mathrm{pos}}(r)$ if x is positive, and from $\mathcal{X}^+_{\mathrm{pos}}(r)$ if x is negative. To exploit the improvement of the instance representations during training, we gradually increase r to include more instances from positive bags, which can be interpreted as a self-paced easy-to-hard learning scheme (Kumar et al., 2010; Jiang et al., 2014; Zou et al., 2018b). Let t and T denote the current epoch and the total number of epochs respectively. For $T_{\mathrm{warm\text{-}up}} < t \le T$, we set:
$$r := r_0 + \alpha_r \,(t - T_{\mathrm{warm\text{-}up}}), \quad \text{where } \alpha_r = (r_T - r_0)/(T - T_{\mathrm{warm\text{-}up}}), \tag{3}$$
and $r_0$ and $r_T$ are hyperparameters. Details on tuning these hyperparameters are provided in Appendix A.4. As demonstrated in the right panel of Figure 1 (see also Appendix B.1), this scheme indeed results in an improvement of the pseudo labels (and hence of the underlying representations).
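The schedule in Equation 3 and the top-r% pool construction can be sketched as follows; `probs` is assumed to hold the confidence of each instance's current pseudo label, and the function names are our own.

```python
# A minimal sketch of the self-paced schedule (Equation 3) and the top-r%
# pool construction of Section 3.2.
import numpy as np

def ramp_ratio(t: int, T: int, t_warmup: int, r0: float = 0.2, rT: float = 0.8) -> float:
    """Fraction of positive-bag instances considered at epoch t (Equation 3)."""
    if t <= t_warmup:
        return r0
    return r0 + (rT - r0) * (t - t_warmup) / (T - t_warmup)

def top_r_pool(indices: np.ndarray, probs: np.ndarray, r: float) -> np.ndarray:
    """Keep the top-r fraction of `indices`, ranked by pseudo-label confidence."""
    k = max(1, int(round(r * len(indices))))
    order = np.argsort(-probs[indices])   # most confident first
    return indices[order[:k]]
```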
4 EXPERIMENTS
We evaluate ItS2CLR on three MIL datasets described in Section 4.1. In Section 4.2 we show that ItS2CLR consistently outperforms approaches that use CSSL feature-extraction by a substantial margin on all three datasets for different choices of aggregators. In Section 4.3 we show that ItS2CLR outperforms alternative finetuning approaches based on cross-entropy loss minimization and end-to-end training across a wide range of settings where the prevalence of positive instances and bag size vary. In Section 4.4, we show that ItS2CLR is able to improve features obtained from a variety of pretraining schemes and network architectures.
4.1 DATASETS
We evaluate the proposed framework on three cancer diagnosis tasks. When training our models, we select the model with the highest bag-level performance on the validation set and report the performance on a held-out test set. More information about the datasets, experimental setup, and implementation is provided in Appendix A.
Camelyon16 (Bejnordi et al., 2017) is a popular benchmark for MIL (Li et al., 2021; Shao et al., 2021; Zhang et al., 2022) where the goal is to detect breast-cancer metastasis in lymph node sections. It consists of 400 whole-slide histopathology images. Each whole slide image (WSI) corresponds to a bag with a binary label indicating the presence of cancer. Each WSI is divided into an average of 625 tiles at 5x magnification, which correspond to individual instances. The dataset also contains pixel-wise annotations indicating the presence of cancer, which can be used to derive ground-truth instance-level labels.
TCGA-LUAD is a dataset from The Cancer Genome Atlas (TCGA), a landmark cancer genomics program, where the associated task is to detect genetic mutations in cancer cells. Detecting these mutations is important to determine treatment options for LUAD (Coudray et al., 2018; Fu et al., 2020). The data contains 800 labeled tumorous frozen WSIs from lung adenocarcinoma (LUAD). Each WSI is divided into an average of 633 tiles at 10x magnification corresponding to unlabeled instances.
The Breast Ultrasound Dataset contains 28,914 B-mode breast ultrasound exams (Shen et al., 2021a). The associated task is to detect breast cancer. Each exam contains between 4 and 70 images (18.8 images per exam on average) corresponding to individual instances, but only a bag-level label indicating the presence of cancer is available per exam. Additionally, a subset of images is annotated, which makes it possible to also evaluate instance-level performance. This dataset is imbalanced at the bag level: only 5,593 of 28,914 exams contain cancer.
4.2 COMPARISON WITH CONTRASTIVE SELF-SUPERVISED LEARNING
In this section, we compare the performance of ItS2CLR to a baseline that performs feature extraction via the CSSL method SimCLR (Chen et al., 2020). This approach has achieved state-of-the-art performance on multiple WSI datasets (Li et al., 2021). To ensure a fair comparison, we initialize the feature extractor in ItS2CLR also using SimCLR. Table 2 shows that ItS2CLR clearly outperforms the SimCLR baseline on all three datasets. The performance improvement is particularly significant on Camelyon16, where it achieves a bag-level AUC of 0.943, outperforming the baseline by an absolute margin of 8.87%. ItS2CLR also outperforms an improved baseline reported by Li et al. (2021) with an AUC of 0.917, which uses higher-resolution tiles than in our experiments (both 20x and 5x, as opposed to only 5x).
To perform a more exhaustive comparison of the features learned by SimCLR and ItS2CLR, we combine them with several different popular MIL aggregators¹: max pooling, top-k pooling (Shen et al., 2021b), attention-MIL pooling (Ilse et al., 2018), DS-MIL pooling (Li et al., 2021), and a transformer (Chen and Krishnan, 2022) (see Appendix C for a detailed description). Table 3 shows that the ItS2CLR features outperform the SimCLR features by a large margin for all aggregators, and are substantially more stable (the standard deviation of the AUC over multiple trials is lower).
We also evaluate instance-level accuracy, which can be used to interpret the bag-level prediction (for instance, by revealing tumor locations). In Table 4, we report the instance-level AUC, F1 score, and Dice score of both ItS2CLR and the SimCLR-based baseline on Camelyon16. ItS2CLR again exhibits stronger performance. Figure 3 shows an example of instance-level predictions in the form of a tumor localization map.
4.3 COMPARISON WITH ALTERNATIVE APPROACHES
In Tables 3, 4 and 5, we compare ItS2CLR with the approaches described below. Table 6 reports additional comparisons at different witness rates (the fraction of positive instances in positive bags), created synthetically by modifying the ratio between negative and positive instances in Camelyon16.
Finetuning with ground-truth instance labels provides an upper bound on the performance that can be achieved through feature improvement. ItS2CLR does not reach this gold standard, but substantially closes the gap.
¹To be clear, the ItS2CLR features are learned using the DS-MIL aggregator, as described in Section 3, and then frozen before combining them with the different aggregators.
Cross-entropy finetuning with pseudo labels, which we refer to as CE finetuning, consistently underperforms ItS2CLR when combined with different aggregators, except at high witness rates. We conjecture that this is due to the sensitivity of the cross-entropy loss to incorrect pseudo labels.
Ablated versions of ItS2CLR where we do not apply iterative updates of the pseudo labels (w/o iter.), or our self-paced learning scheme (w/o SPL) or both (w/o both) achieve substantially worse performance than the full approach. This indicates that both of these ingredients are critical in learning discriminative features.
End-to-end training is often computationally infeasible in medical applications. We compare ItS2CLR to end-to-end models on a downsampled version of Camelyon16 (see Appendix A.5) and on the breast ultrasound dataset. For a fair comparison, all end-to-end models use the same CSSL-pretrained weights and aggregator as used in ItS2CLR. Table 5 shows that ItS2CLR achieves better instance- and bag-level performance than end-to-end training. The analysis in Appendix B.3 shows that end-to-end training overfits quickly when the bag size is large.
4.4 IMPROVING DIFFERENT PRETRAINED REPRESENTATIONS
In this section, we show that ItS2CLR is capable of improving representations learned by different pretraining methods: supervised training on ImageNet and two non-contrastive SSL methods, BYOL (Grill et al., 2020a) and DINO (Caron et al., 2021). DINO is based on the ViT-S/16 architecture (Dosovitskiy et al., 2020), whereas the other methods are based on ResNet-18. Table 7a shows the result of initializing ItS2CLR with these features (as well as with SimCLR). The different initializations achieve varying degrees of pseudo label accuracy, but ItS2CLR improves the performance of all of them, demonstrating the robustness of the proposed framework.
5 RELATED WORK
Self-supervised learning Contrastive learning methods have become popular in unsupervised representation learning, achieving state-of-the-art self-supervised learning performance for natural images (Chen et al., 2020; He et al., 2020; Grill et al., 2020a; Caron et al., 2020; Zbontar et al., 2021; Caron et al., 2021). These methods have also shown promising results in medical imaging (Li et al., 2021; Azizi et al., 2021; Kaku et al., 2021; Zhu et al., 2022; Ciga et al., 2022). Recently, Li et al. (2021) applied SimCLR (Chen et al., 2020) to extract instance-level features for WSI MIL tasks and achieved state-of-the-art performance. However, Arora et al. (2019) point out the
potential issue of class collision in contrastive learning, i.e., that some negative pairs may actually have the same class. Prior works on alleviating the class collision problem include reweighting the negative and positive terms with the class ratio (Chuang et al., 2020), pulling closer additional similar pairs (Dwibedi et al., 2021), and avoiding pushing apart negatives that are inferred to belong to the same class based on a similarity metric (Zheng et al., 2021). In contrast, we propose a framework that leverages information from the bag-level labels to iteratively resolve the class collision problem.
There also exist non-contrastive alternatives that avoid introducing negative pairs (e.g. BYOL (Grill et al., 2020b), DINO (Caron et al., 2021), and SimSiam (Chen and He, 2021)). However, Wang et al. (2021) report that removing the negatives can make different object categories overlap and result in under-clustering, which limits the model’s ability to learn discriminative features.
Multiple instance learning A major part of MIL works focuses on improving the MIL aggregator. Traditionally, non-learnable pooling operators such as mean-pooling and max-pooling were commonly used in MIL (Pinheiro and Collobert, 2015; Feng and Zhou, 2017). More recent methods parameterize the aggregator using neural networks that employ attention mechanisms (Ilse et al., 2018; Li et al., 2021; Shao et al., 2021; Chen and Krishnan, 2022). This research direction is complementary to our proposed approach, which focuses on obtaining better instance representations, and can be combined with different types of aggregators (see Section 4.2).
6 CONCLUSION
In this work, we investigate how to improve feature-extraction in multiple-instance learning models. We identify a limitation of contrastive self-supervised learning: class collision hinders it from learning discriminative features in class-imbalanced MIL problems. To address this, we propose a novel framework that iteratively refines the features with pseudo labels estimated by the aggregator. Our method outperforms the existing state-of-the-art MIL methods on three medical datasets, and can be combined with different aggregators and pretrained feature extractors.
The proposed method does not outperform a cross-entropy-based baseline at very high witness rates, suggesting that it is mostly suitable for low-witness-rate scenarios (however, it is worth noting that this is the regime more commonly encountered in medical applications such as cancer diagnosis). In addition, there is a performance gap between our method and finetuning using instance-level ground truth, which suggests there is further room for improvement.
A EXPERIMENTS
A.1 DATASET
Camelyon16 Camelyon16 is a public dataset for detection of metastasis in breast cancer. This dataset consists of 271 training and 129 test whole slide images (WSI), which are further divided into roughly 3.2 million patches at 20× magnification and 0.25 million patches at 5× magnification. On average, at 20× and 5× magnification each slide contains approximately 8,000 and 625 patches respectively. Each WSI is paired with pixel-level annotations indicating the position of tumors (if any are present). We ignore the pixel-level annotations during training and consider only slide-level labels (i.e. the slide is considered positive if it contains any annotated tumor regions). As a result, positive bags contain mixtures of patches with tumors and patches with healthy tissue. Negative bags contain only patches with healthy tissue. The ratio between positive and negative patches in this dataset is highly imbalanced. Only a small fraction of patches (less than 10%) in the positive slides contains tumor.
TCGA-LUAD TCGA for Lung Adenocarcinoma (LUAD) is a subset of TCGA (The Cancer Genome Atlas), a landmark cancer genomics program. It consists of 800 tumorous frozen whole-slide histopathology images and the corresponding genetic mutation status. Each WSI is paired with binary labels indicating whether each gene is mutated or wild type. In this experiment, we build MIL models to detect four mutations: EGFR, KRAS, STK11, and TP53, which are sensitizing mutations that can impact treatment options in LUAD (Coudray et al., 2018; Fu et al., 2020). We split the data randomly into training, validation and test sets so that each patient appears in only one of the subsets. After splitting the data, 477 images are in the training set, 96 in the validation set, and 227 in the test set.
Breast Ultrasound dataset The Breast Ultrasound Dataset includes 28,914 ultrasound exams (Shen et al., 2021a). An exam is labeled as cancer-positive if there is a pathology-confirmed malignant finding associated with it. In this dataset, 5,593 exams are cancer-positive. On average, each exam contains approximately 18 images. Patients in the dataset were randomly divided into a training set (60%), a validation set (10%), and a test set (30%). Each patient was included in only one of the three sets. We show five example breast ultrasound images in Figure 4.
A.2 IMPLEMENTATION DETAILS
All experiments were conducted on NVIDIA RTX8000 GPUs and NVIDIA V100 GPUs. For all models, we perform model selection during training based on bag-level AUC evaluated on the validation set.
Camelyon16 We follow the same preprocessing and pretraining steps as Li et al. (2021). To preprocess the slides, we cropped them into tiles at 5x magnification, filtered out tiles that do not contain enough tissue (average saturation < 30), and resized the images to a resolution of 224 × 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with default settings (nearest neighbor sampling).
We pretrain the feature extractor, ResNet18 (He et al., 2016), with SimCLR (Chen et al., 2020) for a maximum of 600 epochs. Each patch is represented by a 512-dimensional vector. We set the batch size to 512 and the temperature to 0.5. We use SGD with a learning rate of 0.03, weight decay of 0.0001, and a cosine annealing scheduler. We also train a MIL aggregator on the instance features extracted by the feature extractor in order to monitor the bag-level AUC of the downstream task on the validation set. During finetuning with ItS2CLR, we finetune the feature extractor for a maximum of 50 epochs.
The batch size is set to 512, and the learning rate is set to $10^{-2}$. At the feature-extractor training stage, we apply random data augmentation to each instance (a sketch of the pipeline follows the list), including:
• Random (p = 0.8) color jittering: brightness, contrast, and saturation factors are uniformly sampled from [0.2, 1.8], hue factor is uniformly sampled from [−0.2, 0.2];
• Random gray scale (p = 0.2);
• Random Gaussian blur with kernel size of 0.06 times the size of an image;
• Random horizontal/vertical flipping with 0.5 probability.
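For concreteness, this pipeline can be sketched with torchvision as follows; the odd kernel-size rounding for the Gaussian blur is an implementation assumption, not a detail stated above.

```python
# A sketch of the augmentation pipeline described in the list above, using
# torchvision. ColorJitter(brightness=0.8) samples factors from [0.2, 1.8].
from torchvision import transforms

img_size = 224
k = int(0.06 * img_size) // 2 * 2 + 1   # GaussianBlur requires an odd kernel size
augment = transforms.Compose([
    transforms.RandomApply(
        [transforms.ColorJitter(brightness=0.8, contrast=0.8,
                                saturation=0.8, hue=0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=k),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
])
```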
When training the DS-MIL aggregator, we follow the settings in Li et al. (2021). We use the Adam optimizer during training. Since each bag may contain a different number of instances, we follow Li et al. (2021) and set the batch size to a single bag. We train each model for a maximum of 350 epochs. We use an initial learning rate of $2 \times 10^{-4}$, and use the StepLR scheduler to reduce the learning rate by 0.5 every 75 epochs. Details on the hyperparameters used for training the aggregator are in Appendix C.
TCGA-LUAD To preprocess the slides, we cropped them into tiles at 10x magnification, filtered out background tiles that do not contain enough tissue (average saturation below 30), and resized the images to a resolution of 224 × 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with nearest neighbor sampling. The tiles were color-normalized using the Vahadane method (Vahadane et al., 2016).
To train the feature extractor, we perform the same process as for Camelyon16.
We also use DS-MIL (Li et al., 2021) as the aggregator. When training the aggregator, we resample the positive and negative bags to keep the class ratio balanced. We train the aggregator for a maximum of 100 epochs using the Adam optimizer with a learning rate of $2 \times 10^{-4}$, reducing it by 0.5 every 50 epochs.
Breast Ultrasound We follow the same preprocessing steps as Shen et al. (2021a). All images were resized to 224 × 224 pixels using bilinear interpolation. We used ResNet18 (He et al., 2016) as the feature extractor and pretrained it using SimCLR (Chen et al., 2020) for 100 epochs, adopting the same pretraining approach as for Camelyon16. We used Instance Attention-MIL as an aggregator (Ilse et al., 2018). Given a bag of images $x_1, \dots, x_k$ and a feature extractor f, the aggregator first computes an instance-level prediction $\hat{y}_i$ for each image $x_i$. It then calculates an attention score $\alpha_i \in [0, 1]$ for each image $x_i$ from its feature vector $f(x_i)$. Finally, the bag-level prediction is computed as the average instance prediction weighted by the attention scores, $\hat{y} = \sum_{i=1}^k \alpha_i \hat{y}_i$. To optimize the aggregator, we trained it using Adam with a learning rate of $10^{-3}$ for a maximum of 350 epochs.
A.3 COMPUTATIONAL COMPLEXITY
It takes 600 epochs (around 90 hours) to train SimCLR. Our finetuning only takes 50 extra epochs (around 10 hours), which is only 1/10 of the pretraining time. Updating the pseudo labels is also efficient: updating the instance features and training the MIL aggregator only takes around 10 minutes, and we only do it every 5 epochs.
A.4 HYPERPARAMETERS OF TRAINING THE FEATURE EXTRACTOR IN ITS2CLR
Hyperparameter tuning The hyperparameters of the proposed method include: the threshold used to binarize the predictions into pseudo labels, η ∈ [0.1, 0.9]; the proportion of positive queries sampled, $p_+ \in [0.05, 0.5]$; and the initial ratio $r_0 \in [0.01, 0.7]$ and final ratio $r_T \in [0.2, 0.8]$ in the self-paced sampling scheme. For Camelyon16, we obtain the highest bag-level validation AUC using the following hyperparameters: η = 0.3, $p_+ = 0.2$, $r_0 = 0.2$ and $r_T = 0.8$. We use the feature extractor trained under this setting in Tables 2, 3 and 4. The complete list of hyperparameters used in the different experiments is reported in Table 8.
Sensitivity analysis We conduct a sensitivity analysis for each hyperparameter on Camelyon16, and observe robust performance over a range of hyperparameter values.
• Threshold η: The choice of η influences the instance-level pseudo labels. As shown in Figure 5, the outputs of the instance-level classifier are mostly close to 0 or 1, so the pseudo labels do not dramatically vary for a wide range of η. We conducted a small ablation experiment on the importance of η. Figure 6 (left) shows that ItS2CLR is quite robust to the value of η, except for some extreme values. If η is too small (e.g. 0.1), it can introduce a significant number of false positives. If η is too large (e.g. 0.8), it can mistakenly exclude some useful positive samples, causing a drop in the performance. In the main paper, since negative instances are more prevalent than positive instances, a threshold of 0.3 (less than 0.5) can increase the recall for the positive instances.
• Sampling ratio of query instances over pseudo labels: We use $p_+$ to denote the percentage of positive query instances used during the contrastive learning stage. Figure 6 (right) shows that it is desirable to choose a relatively small $p_+$. Since there are far fewer positive instances than negative instances, keeping the ratio of positive queries low avoids repeatedly sampling from a limited number of positive instances. Also, since the negative instance set $\mathcal{X}^-_{\mathrm{neg}}$ is clean, there is more label noise among the positive pseudo labels.
• The initial rate $r_0$ and final rate $r_T$ of the self-paced sampling scheduler: Figure 7 shows that ItS2CLR is also generally robust to $r_0$ and $r_T$. However, an extremely large initial rate $r_0$ (high confidence in the pseudo labels) may introduce more noise during training and hurt performance. Conversely, an extremely small $r_T$ (low confidence in the pseudo labels) may prevent the model from using more data, also hurting performance.
• Sampling during warm-up: During the warm-up phase, we sample query instances from $\mathcal{X}^-_{\mathrm{neg}}$. An alternative is to sample the query instance from $\mathcal{X}^+_{\mathrm{pos}}$ and the corresponding set $D_x$ from $\mathcal{X}^-_{\mathrm{neg}}$. However, our experiments show that the resulting bag-level AUC drops to 90.91% under this setting, significantly lower than the 94.25% achieved by the proposed method. This comparison demonstrates the importance of using clean negative instances as query images during warm-up.
A.5 EXPERIMENTS ON SYNTHETIC VERSIONS OF CAMELYON16
Simulation of witness rates (WR)
Since the ground truth instance-level labels are available for Camelyon16, we can conduct experiments on synthetic versions of the dataset to study the impact of the prevalence of positive instances
on the performance of the proposed approach and the baselines. Section 2 describes how the performance of CSSL is affected by low witness rates (fraction of positive instances in each positive bag). To study the robustness of the proposed framework to the witness rate in the data, we increase or decrease the witness rate of Camelyon16 by randomly dropping negative or positive instances within each bag respectively. The percentage of retained instances and the resulting witness rates are reported in Table 6.
Downsampled version of Camelyon16 for end-to-end training
In order to enable end-to-end training, we downsample each bag in Camelyon16 to around 500 instances so that it fits in the memory of a GPU. To achieve this, we divide each large bag into smaller bags while keeping the witness rate of each sub-bag at a level similar to the original bag. In more detail, if the bag size is smaller than or equal to 500, we keep the original bag. If the bag size is greater than 500, we divide the bag into several sub-bags such that each sub-bag contains approximately 500 instances. We randomly partition the negative instances of the original bag evenly across the desired number of sub-bags, and do the same for the positive instances. If the number of positive instances within a positive bag is smaller than the desired number of sub-bags, we reduce the number of sub-bags to the number of positive instances. We then combine the positive and negative instances to form the sub-bags. This ensures that the bag-level label is correct and that the witness rate of each positive sub-bag remains similar to the original bag.
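A sketch of this splitting procedure is given below; function and variable names are illustrative, not taken from a released codebase.

```python
# A hedged sketch of the sub-bag construction for end-to-end training: bags
# larger than max_size are split so each sub-bag keeps a similar witness rate.
import random

def split_bag(pos_instances, neg_instances, max_size=500, seed=0):
    rng = random.Random(seed)
    pos, neg = list(pos_instances), list(neg_instances)
    total = len(pos) + len(neg)
    if total <= max_size:
        return [pos + neg]
    n_sub = max(1, round(total / max_size))
    if pos:                                  # positive bag: every sub-bag needs a positive
        n_sub = min(n_sub, len(pos))
    rng.shuffle(pos); rng.shuffle(neg)
    subs = [[] for _ in range(n_sub)]
    for i, x in enumerate(pos):              # spread positives evenly (round-robin)
        subs[i % n_sub].append(x)
    for i, x in enumerate(neg):              # spread negatives evenly
        subs[i % n_sub].append(x)
    return subs
```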
A.6 DESCRIPTION OF THE ABLATION STUDY
Details for CE finetuning with/without Iterative Updating
• CE + w/o iterative updating: We use the same set of initial pseudo labels as in our ItS2CLR framework. All instances within negative bags are labeled as negative; for instances in positive bags, we assign pseudo labels according to the instance predictions of the aggregator. The pseudo labels are then kept fixed during the subsequent finetuning epochs.
• CE + iterative updating: Same as CE + w/o iterative updating, except that the pseudo labels are updated every few epochs and used to guide the finetuning.
Details for ItS2CLR with/without SPL
• ItS2CLR without iterative update: We keep everything the same as the full ItS2CLR procedure (including the SPL strategy), but we do not apply steps 7, 8 and 9 in Algorithm 1. As a result, the pseudo labels always equal the initial set of pseudo labels.
• ItS2CLR without SPL: We keep everything the same as the full ItS2CLR procedure (including iterative updating), but modify step 10 in Algorithm 1: we do not use the self-paced sampling scheme of Section 3.2, and instead use all pseudo-labeled data from the beginning of the finetuning.
B ADDITIONAL RESULTS
We present here additional results to supplement those presented in Section 4.
B.1 LEARNING CURVES
F1-Score plot corresponding to Figure 1: In Figure 8, we show the max F1 score curve corresponding to the right side of Figure 1. This plot confirms the importance of self-paced learning and iterative updating in ItS2CLR.
Instance pseudo label AUC comparison with cross-entropy finetuning: Figure 9 compares ItS2CLR with an alternative approach that finetunes the feature extractor using cross-entropy (CE) loss on the Camelyon16 dataset. Without iterative updating, CE finetuning rapidly overfits to the noise in the pseudo labels. Iterative updating prevents this to some extent, but does not match the performance of ItS2CLR, which produces increasingly accurate pseudo labels as the iterations proceed.
B.2 INSTANCE-LEVEL EVALUATION
In order to evaluate instance-level performance, we report classification metrics including AUC, F1-score, AUPRC and Dice score for localization.
The Dice score is defined as follows:

$$\mathrm{Dice} = \frac{2 \sum_i y_i p_i}{\sum_i y_i + \sum_i p_i}, \tag{4}$$

where $y_i$ and $p_i$ are the ground truth and predicted probability for the i-th sample. It penalizes predictions with low confidence. The predicted probability is computed from the output $s_i$ of the MIL model via linear scaling:

$$p_i = \sigma(a s_i + b), \tag{5}$$

where $a \in [-5, 5]$ and $b \in [0.1, 10]$ are chosen to maximize the Dice score on the validation set.

Max pooling aggregator: In Table 4, we show that our model achieves better weakly supervised localization performance than other methods when DS-MIL is used as the aggregator. In Table 9, we show that the same conclusion holds for an aggregator based on max-pooling.
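For concreteness, a sketch of the Dice computation of Equations 4 and 5, including the grid search for (a, b) on the validation set, is given below; the grid resolution is an assumption.

```python
# A sketch of the calibrated Dice score (Equations 4-5): raw MIL outputs s are
# linearly scaled, squashed by a sigmoid, and (a, b) are grid-searched on the
# validation set within the ranges stated in the text.
import numpy as np

def dice(y: np.ndarray, p: np.ndarray) -> float:
    return 2 * (y * p).sum() / (y.sum() + p.sum())

def calibrated_dice(y_val, s_val, y_test, s_test) -> float:
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    best_ab, best_d = (0.0, 0.1), -1.0
    for a in np.linspace(-5, 5, 41):           # grid resolution is an assumption
        for b in np.linspace(0.1, 10, 41):
            d = dice(y_val, sigmoid(a * s_val + b))
            if d > best_d:
                best_ab, best_d = (a, b), d
    a, b = best_ab
    return dice(y_test, sigmoid(a * s_test + b))
```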
Linear evaluation: In Table 10, we report results obtained by training a logistic regression model using the features obtained from the same approaches as in Table 4, following a standard linear evaluation pipeline in representation learning (Chen et al., 2020). ItS2CLR again achieves the best instance-level performance. We also produce bag-level predictions using the maximum output of the linear classifier for each bag, which again showcases that superior instance-level performance results in superior bag-level classification.
B.3 COMPARISON WITH END-TO-END TRAINING
In this section we provide additional results to complement Table 5, where ItS2CLR is compared to end-to-end models. The end-to-end training is conducted with the same aggregators for each dataset as described in Section 4 and Appendix A.2.
Camelyon16 Figure 10 shows that an end-to-end model trained on the downsampled version of Camelyon16 described in Section A.5 rapidly overfits, both when trained from scratch and from SimCLR-pretrained weights. The two-stage model, on the other hand, is less prone to overfitting. Table 11 shows that the two-stage learning pipeline outperforms end-to-end training, and is in turn outperformed by ItS2CLR.
Breast Ultrasound dataset Table 12 shows that for the breast-ultrasound dataset end-to-end training outperforms the SimCLR+Aggregator baseline, but is outperformed by ItS2CLR.
B.4 TUMOR LOCALIZATION MAPS
Figure 11 provides additional tumor localization maps.
C MIL AGGREGATORS
C.1 FORMULATION OF MIL AGGREGATORS
In this section, we describe the different MIL aggregators benchmarked in Section 4.2 and Table 3.
Let $\mathcal{B}$ denote a collection of sets of feature vectors in $\mathbb{R}^d$. The bags of extracted features in the dataset are denoted by $\{H_b\}_{b=1}^B \subset \mathcal{B}$. An aggregator is defined as a function $g : \mathcal{B} \to [0, 1]$ mapping bags of extracted features to a score in [0, 1].
There exist two main approaches in MIL:
1. The instance-level approach: using a logistic classifier on each instance, then aggregating instance predictions over a bag (e.g. max-pooling, top k-pooling).
2. The embedding-level approach: aggregating the instance embeddings, then obtaining a bag-level prediction via a bag-level classifier (e.g. attention-based aggregator, Transformer).
We denote the embeddings of the instances within a bag by $H = \{h_k\}_{k=1}^K$, where K is the number of instances.
Max-pooling obtains bag-level predictions by taking the maximum of the instance-level predictions produced by a logistic instance classifier ϕ, that is,

$$g_\phi(H) = \max_{k = 1, \dots, K} \phi(h_k). \tag{6}$$
Top-k pooling (Shen et al., 2021b) produces bag-level predictions using the mean of the top-M ranked instance-level predictions produced by a logistic instance classifier ϕ, where M is a hyperparameter. Let $\mathrm{top}_M(\phi, H)$ denote the indices of the M elements of H to which ϕ assigns the highest scores; then

$$g_\phi(H) = \frac{1}{M} \sum_{k \in \mathrm{top}_M(\phi, H)} \phi(h_k). \tag{7}$$
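Minimal sketches of these two instance-level aggregators (Equations 6 and 7) are given below; `phi` is assumed to return one logit per instance.

```python
# Minimal sketches of max pooling (Equation 6) and top-k pooling (Equation 7),
# assuming a logistic instance classifier phi that maps (K, d) -> (K, 1) logits.
import torch

def max_pooling(phi, H: torch.Tensor) -> torch.Tensor:
    """H: (K, d) instance embeddings of one bag."""
    return torch.sigmoid(phi(H)).max()                         # Equation 6

def top_k_pooling(phi, H: torch.Tensor, M: int) -> torch.Tensor:
    scores = torch.sigmoid(phi(H)).squeeze(-1)                 # (K,)
    return scores.topk(min(M, scores.numel())).values.mean()   # Equation 7
```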
Attention-based MIL (Ilse et al., 2018) aggregates instance embeddings using a sum weighted by attention weights. The bag-level estimate is then computed from the aggregated embedding by a logistic bag-level classifier φ:

$$g_\varphi(H) = \varphi\left(\sum_{k=1}^K a_k h_k\right), \tag{8}$$

where the attention weight $a_k$ on instance k is computed as

$$a_k = \frac{\exp\left(w^T \tanh(V h_k^T)\right)}{\sum_{j=1}^K \exp\left(w^T \tanh(V h_j^T)\right)}, \tag{9}$$

where $w \in \mathbb{R}^{p \times 1}$ and $V \in \mathbb{R}^{p \times d}$ are learnable parameters and p is the dimension of the hidden layer.

DS-MIL combines instance-level and embedding-level aggregation; we refer to Li et al. (2021) for more details on this approach.
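A minimal PyTorch sketch of the attention-based aggregator (Equations 8 and 9) is given below; the layer sizes follow Appendix C.2, and the sigmoid bag classifier is an assumption.

```python
# A sketch of attention-based MIL (Equations 8-9); hidden size p matches the
# setup in Appendix C.2 (p = d = 512).
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, d: int = 512, p: int = 512):
        super().__init__()
        self.V = nn.Linear(d, p, bias=False)   # V in Equation 9
        self.w = nn.Linear(p, 1, bias=False)   # w in Equation 9
        self.classifier = nn.Linear(d, 1)      # bag-level classifier (varphi)

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        """H: (K, d) instance embeddings of one bag."""
        a = torch.softmax(self.w(torch.tanh(self.V(H))), dim=0)  # (K, 1) attention
        z = (a * H).sum(dim=0)                                   # weighted bag embedding
        return torch.sigmoid(self.classifier(z))                 # Equation 8
```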
Transformer (Chen and Krishnan, 2022) aggregation uses an L-layer Transformer to process the set of instance features H. The initial set $H^{(0)}$ is set equal to H, and each layer applies

$$H'^{(l)} = \mathrm{MSA}\left(H^{(l-1)}\right) + H^{(l-1)}, \qquad H^{(l)} = \mathrm{MLP}\left(H'^{(l)}\right) + H'^{(l)}, \tag{10}$$

for $l = 1, \dots, L$, where MSA is multi-head self-attention and MLP is a multi-layer perceptron network. The processed vectors $H^{(L)}$ are then fed to attention-based MIL (Ilse et al., 2018) to obtain bag-level predictions:

$$g_\varphi(H^{(L)}) = \varphi\left(\sum_{k=1}^K a_k h_k^{(L)}\right), \tag{11}$$

where the attention weight $a_k$ on instance k is computed as

$$a_k = \frac{\exp\left(w^T \tanh(V (h_k^{(L)})^T)\right)}{\sum_{j=1}^K \exp\left(w^T \tanh(V (h_j^{(L)})^T)\right)}, \tag{12}$$

where $w \in \mathbb{R}^{p \times 1}$ and $V \in \mathbb{R}^{p \times d}$ are learnable parameters and p is the dimension of the hidden layer.
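A minimal sketch of this aggregator (Equations 10-12) is given below; it reuses the AttentionMIL module from the previous sketch, and the head count and MLP width are assumptions.

```python
# A sketch of the Transformer aggregator (Equations 10-12): L residual blocks
# of multi-head self-attention and an MLP, followed by attention-based MIL.
# Assumes the AttentionMIL class from the sketch above is in scope.
import torch
import torch.nn as nn

class TransformerMIL(nn.Module):
    def __init__(self, d: int = 512, n_heads: int = 8, L: int = 2):
        super().__init__()
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(d, n_heads, batch_first=True) for _ in range(L)])
        self.mlp = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d)) for _ in range(L)])
        self.pool = AttentionMIL(d)   # Equations 11-12

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        """H: (K, d) instance embeddings of one bag."""
        x = H.unsqueeze(0)                                 # add a batch dimension
        for attn, mlp in zip(self.attn, self.mlp):
            x = attn(x, x, x, need_weights=False)[0] + x   # MSA residual (Eq. 10)
            x = mlp(x) + x                                 # MLP residual (Eq. 10)
        return self.pool(x.squeeze(0))                     # attention-based MIL head
```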
C.2 IMPLEMENTATION DETAILS
Top-k pooling. We select the ratio in top-k pooling from the set {0.1%, 1%, 3%, 10%, 20%}.
DS-MIL. The weight between the two cross-entropy loss functions in DS-MIL is selected from the interval [0.1, 5] based on the best validation performance.
Attention-based MIL. The hidden dimension of the attention module to compute the attention weight is set equal to the dimension of the input feature vector (512).
Transformer. We add a lightweight two-layer Transformer to process the instance features. We did not observe an increase in performance with additional blocks.
Summary Of The Paper
The authors proposed a supervised contrastive learning framework for the instance-level representation in pathological WSIs. The instance-level representation and the pseudo labels of the instances are improved jointly in an iterative learning process. Whether the new pseudo labels are improved is validated on a small dataset. An easy-to-hard self-paced sampling strategy is also adopted for the training. Three common public pathological image datasets are employed for the experiments. Superior results of the proposed method are reported in comparison to the vanilla contrastive self-supervised learning framework.
Strengths And Weaknesses
Strength:
The proposed method is clearly introduced, and the manuscript is overall easy to follow.
The standard contrastive learning framework for image instances, combined with a semi-supervised scheme (iterative pseudo-label refinement), is extended to the instance level for large pathological WSIs.
Weakness:
I am concerned about the comparative study of the proposed method vs. the contrastive self-supervised learning framework. It is not a fair comparison since it is a supervised learning method compared with a self-supervised one.
SOTA results on each of the employed datasets should be reported.
Semi-supervised learning methods could be compared here since the pseudo-label refinement is an essential component of the proposed method.
Clarity, Quality, Novelty And Reproducibility
The clarity of the presented work is good.
The proposed method is rather a combination of contrastive self-supervised learning, the easy-to-hard sampling strategy assisted with image-level labels, and the semi-supervised pseudo-label refinement framework. Thus, technical innovation in methodology is limited.
Enough information is provided for reproducing the proposed framework. |
ICLR | Title
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Abstract
Learning representations for individual instances when only bag-level labels are available is a fundamental challenge in multiple instance learning (MIL). Recent works have shown promising results using contrastive self-supervised learning (CSSL), which learns to push apart representations corresponding to two different randomly-selected instances. Unfortunately, in real-world applications such as medical image classification, there is often class imbalance, so randomlyselected instances mostly belong to the same majority class, which precludes CSSL from learning inter-class differences. To address this issue, we propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR), which improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels. The framework employs a novel self-paced sampling strategy to ensure the accuracy of pseudo labels. We evaluate ItS2CLR on three medical datasets, showing that it improves the quality of instance-level pseudo labels and representations, and outperforms existing MIL methods in terms of both bag and instance level accuracy.
1 INTRODUCTION
The goal of multiple instance learning (MIL) is to perform classification on data that is arranged in bags of instances. Each instance is either positive or negative, but these instance-level labels are not available during training; only bag-level labels are available. A bag is labeled as positive if any of the instances in it are positive, and negative otherwise. An important application of MIL is cancer diagnosis from histopathology slides. Each slide is divided into hundreds or thousands of tiles but typically only slide-level labels are available (Courtiol et al., 2018; Campanella et al., 2019; Li et al., 2021; Chen and Krishnan, 2022; Zhang et al., 2022; Lu et al., 2021).
Histopathology slides are typically very large, in the order of gigapixels (the resolution of a typical slide can be as high as 105 × 105), so end-to-end training of deep neural networks is typically infeasible due to memory limitations of GPU hardware. Consequently, state-of-the-art approaches (Campanella et al., 2019; Li et al., 2021; Zhang et al., 2022; Lu et al., 2021; Shao et al., 2021) utilize a two-stage learning pipeline: (1) a feature-extraction stage where each instance is mapped to a representation which summarizes its content, and (2) an aggregation stage where the representations extracted from all instances in a bag are combined to produce a bag-level prediction (Figure 1). Notably, our results indicate that even in rare settings where end-to-end training is possible, this pipeline is still superior (see Section 4.3).
In this work, we focus on a fundamental challenge in MIL: how to train the feature extractor. Currently, there are three main strategies to perform feature-extraction, which have significant shortcomings. (1) Pretraining on a large natural image dataset such as ImageNet (Shao et al., 2021; Lu et al., 2021) is problematic for medical applications because features learned from natural images may generalize poorly to other domains (Lu et al., 2020). (2) Supervised training using bag-level labels as instance-level labels is effective if positive bags contain mostly positive instances (Lerousseau et al., 2020; Xu et al., 2019; Chikontwe et al., 2020), but in many medical datasets this is not the case (Bejnordi et al., 2017; Li et al., 2021). (3) Contrastive self-supervised learning (CSSL) outperforms prior methods (Li et al., 2021; Ciga et al., 2022), but is not as effective in settings with heavy class imbalance, which are of crucial importance in medicine. CSSL operates by pushing apart the representations of different randomly selected instances. When positive bags contain
mostly negative instances, CSSL training ends up pushing apart negative instances from each other, which precludes it from learning features that distinguish positive samples from the negative ones (Figure 2). We discuss this finding in Section 2.
Our goal is to address the shortcomings of current feature-extraction methods. We build upon several key insights. First, it is possible to extract instance-level pseudo labels from trained MIL models, which are more accurate than assigning the bag-level labels to all instances within a positive bag. Second, we can use the pseudo labels to finetune the feature extractor, improving the instance-level representations. Third, these improved representations result in improved bag-level classification and more accurate instance-level pseudo labels. These observations are utilized in our proposed framework, Iterative Self-Paced Supervised Contrastive Learning for MIL Representation (ItS2CLR), as illustrated in Figure 1. After initializing the features with CSSL, we iteratively improve them via supervised contrastive learning (Khosla et al., 2020) using pseudo labels inferred by the aggregator. This feature refinement utilizes pseudo labels sampled according to a novel selfpaced strategy, which ensures that they are sufficiently accurate (see Section 3.2). In summary, our contributions are the following:
1. We propose ItS2CLR – a novel MIL framework where instance features are iteratively improved using pseudo labels extracted from the MIL aggregator. The framework combines supervised contrastive learning with a self-paced sampling scheme to ensure that pseudo labels are accurate.
2. We demonstrate that the proposed approach outperforms existing MIL methods in terms of bagand instance-level accuracy on three real-world medical datasets relevant to cancer diagnosis: two histopathology datasets and a breast ultrasound dataset. It also outperforms alternative finetuning methods, such as instance-level cross-entropy minimization and end-to-end training.
3. In a series of controlled experiments, we show that ItS2CLR is effective when applied to different feature-extraction architectures and when combined with different aggregators.
2 CSSL MAY NOT LEARN DISCRIMINATIVE FEATURES IN MIL
Recent MIL approaches use contrastive self-supervised learning (CSSL) to train the feature extractor (Li et al., 2021; Saillard et al., 2021; Rymarczyk et al., 2021). In this section, we show that CSSL has a crucial limitation in realistic MIL settings, which precludes it from learning discriminative features. CSSL aims to learn a representation space where samples from the same class are close to each other, and samples from different classes are far from each other, without access to class labels. This is achieved by minimizing the InfoNCE loss (Oord et al., 2018).
LCSSL = Ex,xaug,{xdiffi }ni=1
[ − log exp (fψ(x) · fψ(x aug)/τ)
exp (fψ(x) · fψ(xaug)/τ) + ∑n i=1 exp ( fψ(x) · fψ(xdiffi )/τ )] , (1)
where fψ = f ◦ψ, in which f : Rm → Rd is the feature extractor mapping the input data to a representation, ψ : Rd → Rd′ is a projection head with a feed-forward network and ℓ2 normalization, and τ is a temperature hyperparameter. The expectation is taken over samples x ∈ Rm drawn uniformly from the training set. Minimizing the loss brings the representation of an instance x closer to the representation of its random augmentation, xaug, and pushes the representation of x away from the representations of n other examples {xdiffi }ni=1 in the training set. A key assumption in CSSL is that x belongs to a different class than most of the randomly-sampled examples xdiff1 , . . . , x diff n . This usually holds in standard classification datasets with many classes such as ImageNet (Deng et al., 2009), but not in MIL tasks relevant to medical diagnosis, where a majority of instances are negative (e.g. 95% in Camelyon16). Hence, most terms in the sum∑n i=1 exp ( fψ(x) · fψ(xdiffi )/τ ) in the loss in Equation 1 correspond to pairs of examples (x, xdiffi ) both belonging to the negative class. Therefore, minimizing the loss mostly pushes apart the representations of negative instances, as illustrated in the top panel of Figure 2. This is an example of class collision (Arora et al., 2019; Chuang et al., 2020), a general problem in CSSL, which has been shown to impair performance on downstream tasks (Ash et al., 2021; Zheng et al., 2021).
Class collision makes CSSL learn representations that are not discriminative between classes. In order to study this phenomenon, we report the average inter-class distances and intra-class deviations for representations learned by CSSL on Camelyon16 in Table 1. The inter-class distance reflects how far the instance representations from different classes are apart; the intra-class distance reflects the variation of instance representations within each class. As predicted, the intra-class deviation corresponding to the representations of negative instances learned by CSSL is large. Representations learned by ItS2CLR have larger inter-class distance (more separated classes) and smaller intra-class deviation (less variance among instances belonging to the same class) than those learned by CSSL. This suggests that the features learned by ItS2CLR are more discriminative, which is confirmed by the results in Section 4.
Note that using bag-level labels does not solve the problem of class collision. When x is negative, even if we select {xdiffi }ni=1 from the positive bags in equation 1, most of the selected instances
will still be negative. Overcoming the class-collision problem requires explicitly detecting positive instances. This motivates our proposed framework, described in the following section.
3 MIL VIA ITERATIVE SELF-PACED SUPERVISED CONTRASTIVE LEARNING
Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR) addresses the limitation of contrastive self-supervised learning (CSSL) described in Section 2. ItS2CLR relies on latent variables indicating whether each instance is positive or negative, which we call instancelevel pseudo labels. To estimate pseudo labels, we use instance-level probabilities obtained from the MIL aggregator (we use the aggregator from DS-MIL (Li et al., 2021) but our framework is compatible with any aggregator that generates instance-level probabilities). The pseudo labels are obtained by binarizing the probabilities according to a threshold η ∈ (0, 1), which is a hyperparameter. ItS2CLR uses the pseudo labels to finetune the feature extractor (initialized using CSSL). In the spirit of iterative self-training techniques (Zhong et al., 2019; Wei et al., 2020; Liu et al., 2022), we alternate between refining the feature extractor, re-computing the pseudo labels, and training the aggregator, as described in Algorithm 1. A key challenge is that the pseudo labels are not completely accurate, especially at the beginning of the training process. To address the impact of incorrect pseudo labels, we apply a contrastive loss to finetune the feature extractor (see Section 3.1), where the contrastive pairs are selected according to a novel self-paced learning scheme (see Section 3.2). The right panel of Figure 1 shows that our approach iteratively improves the pseudo labels on the Camelyon16 dataset (Bejnordi et al., 2017). This finetuning only requires a modest increment in computational time (see Appendix A.3).
3.1 SUPERVISED CONTRASTIVE LEARNING WITH PSEUDO LABELS
To address the class collision problem described in Section 2, we leverage supervised contrastive learning (Pantazis et al., 2021; Dwibedi et al., 2021; Khosla et al., 2020) combined with the pseudo labels estimated by the aggregator. The goal is to learn discriminative representations by pulling together the representations corresponding to instances in the same class, and pushing apart those belong to instances of different classes. For each anchor instance x selected for contrastive learning, we collect a set Sx believed to have the same label as x, and a set Dx believed to have a different label to x. These sets are depicted in the bottom panel of Figure 2. The supervised contrastive loss corresponding to x is defined as:
L_sup(x) = (1/|S_x|) ∑_{x_s∈S_x} −log [ exp(f_ψ(x)·f_ψ(x_s)/τ) / ( ∑_{x_s′∈S_x} exp(f_ψ(x)·f_ψ(x_s′)/τ) + ∑_{x_d∈D_x} exp(f_ψ(x)·f_ψ(x_d)/τ) ) ].  (2)

In Section 3.2, we explain how to select x, S_x and D_x to ensure that the selected samples have high-quality pseudo labels.
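To make equation 2 concrete, here is a minimal PyTorch-style sketch of the loss. The function name, tensor shapes, and the embedding-normalization step are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sup_contrastive_loss(z_anchor, z_same, z_diff, tau=0.5):
    """Sketch of equation 2 (shapes are assumptions).

    z_anchor: (d,)   embedding f_psi(x) of the anchor instance.
    z_same:   (S, d) embeddings of instances in S_x (same pseudo label).
    z_diff:   (D, d) embeddings of instances in D_x (different pseudo label).
    """
    # Normalize so dot products are cosine similarities (common in SimCLR-style losses).
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_same = F.normalize(z_same, dim=-1)
    z_diff = F.normalize(z_diff, dim=-1)

    pos_logits = z_same @ z_anchor / tau   # (S,)
    neg_logits = z_diff @ z_anchor / tau   # (D,)
    # Shared denominator: all same-label and different-label candidates.
    log_denom = torch.logsumexp(torch.cat([pos_logits, neg_logits]), dim=0)
    # Average of the per-positive -log softmax terms over S_x.
    return (log_denom - pos_logits).mean()
```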
A tempting alternative to supervised contrastive learning is to train the feature extractor on pseudo labels using a standard cross-entropy loss. However, in Section 4.3 we show that this leads to substantially worse performance in the downstream MIL classification task due to memorization of incorrect pseudo labels. On the contrary, we can leverage self-paced sampling to choose contrastive pairs with the most confident, even certain, pseudo labels for supervised contrastive learning.

Algorithm 1 Iterative Self-Paced Supervised Contrastive Learning (ItS2CLR)
Require: Feature extractor f_ψ, projection head ψ, MIL aggregator g_ϕ, where ϕ is an instance classifier
Require: Bags {X_b}_{b=1}^B, bag-level labels {Y_b}_{b=1}^B
 1: f^(0) ← f_SSL                                        ▷ Initialize f with CSSL-pretrained weights
 2: for t = 0 to T do
 3:    h_k^b ← f^(t)(x_k^b), ∀x_k^b ∈ X_b, ∀b            ▷ Extract instance representations
 4:    H_b ← {h_k^b}_{k=1}^{K_b}, ∀b                     ▷ Group instance embeddings into bags
 5:    g_ϕ^(t) ← Train with {H_b}_{b=1}^B and {Y_b}_{b=1}^B   ▷ Train the aggregator
 6:    AUC_val^(t) ← bag-level AUC on the validation set
 7:    if AUC_val^(t) ≥ max_{t′≤t} AUC_val^(t′) then     ▷ If bag prediction improves
 8:       ŷ_k^b ← 1{ϕ^(t)(h_k^b) > η}, ∀x_k^b ∈ X_b, ∀b  ▷ Update instance pseudo labels
 9:    end if
10:    f^(t+1) ← argmin_{f_ψ} L_sup(f_ψ^(t))             ▷ Optimize feature extractor via Eq. (2)
11: end for
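To make the structure of Algorithm 1 concrete, here is a Python-style sketch of its outer loop. All helper functions (extract_features, train_aggregator, validation_bag_auc, finetune_contrastive) are hypothetical placeholders, not the authors' code.

```python
def its2clr(bags, bag_labels, f_ssl, T, eta=0.3):
    """Sketch of Algorithm 1 (our own structuring; helpers are hypothetical)."""
    f = f_ssl                       # feature extractor initialized with CSSL weights
    best_auc, pseudo = -1.0, None
    for t in range(T + 1):
        H = [extract_features(f, bag) for bag in bags]       # steps 3-4
        aggregator = train_aggregator(H, bag_labels)         # step 5
        auc = validation_bag_auc(aggregator)                 # step 6
        if auc >= best_auc:                                  # steps 7-9: accept new
            best_auc = auc                                   # pseudo labels only when
            pseudo = [(aggregator.instance_probs(h) > eta)   # the bag-level AUC improves
                      for h in H]
        f = finetune_contrastive(f, bags, pseudo, epoch=t)   # step 10, Eq. (2)
    return f, aggregator
```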
3.2 SAMPLING VIA SELF-PACED LEARNING
A key challenge in ItS2CLR is to improve the accuracy of instance-level pseudo labels without ground-truth labels. This is achieved by finetuning the feature extractor on a carefully-selected subset of instances. We select the anchor instance x and the corresponding sets S_x and D_x (defined in Section 3.1 and Figure 2) building upon two key insights: (1) The negative bags only contain negative instances. (2) The probabilities used to build the pseudo labels are indicative of their quality; instances with higher predicted probabilities usually have more accurate pseudo labels (Zou et al., 2018a; Liu et al., 2020; 2022).
Let X^-_neg denote all instances within the negative bags. By definition of MIL, we can safely assume that all instances in X^-_neg are negative. In contrast, positive bags contain both positive and negative instances. Let X^+_pos and X^-_pos denote the sets of instances in positive bags with positive and negative pseudo labels respectively. During an initial warm-up lasting T_warm-up epochs, we sample anchor instances x only from X^-_neg to ensure that they are indeed all negative. For each such instance, S_x is built by sampling instances from X^-_neg, and D_x is built by sampling from X^+_pos. After the warm-up phase, we start sampling anchor instances from X^+_pos and X^-_pos. To ensure that these instances have accurate pseudo labels, we only consider the top-r% instances with the highest probabilities in each of these sets, which we call X^+_pos(r) and X^-_pos(r) respectively, as illustrated by Figure 5 (the ratio of positive-to-negative anchors is a fixed hyperparameter p^+). For each anchor x, the same-label set S_x is sampled from X^+_pos(r) if x is positive, and from X^-_neg ∪ X^-_pos(r) if x is negative. The different-label set D_x is sampled from X^-_neg ∪ X^-_pos(r) if x is positive, and from X^+_pos(r) if x is negative. To exploit the improvement of the instance representations during training, we gradually increase r to include more instances from positive bags, which can be interpreted as a self-paced easy-to-hard learning scheme (Kumar et al., 2010; Jiang et al., 2014; Zou et al., 2018b). Let t and T denote the current epoch and the total number of epochs respectively. For T_warm-up < t ≤ T, we set:
r := r_0 + α_r (t − T_warm-up), where α_r = (r_T − r_0)/(T − T_warm-up),  (3)
and r_0 and r_T are hyperparameters. Details on tuning these hyperparameters are provided in Appendix A.4. As demonstrated in the right panel of Figure 1 (see also Appendix B.1), this scheme indeed results in an improvement of the pseudo labels (and hence of the underlying representations).
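As an illustration, the schedule of equation 3 and the top-r% selection could be implemented as follows. The function names are our own, and the default values are the Camelyon16 settings reported in Appendix A.4.

```python
def selection_ratio(t, t_warmup, t_total, r0=0.2, r_T=0.8):
    """Equation 3: linearly grow the fraction r of most-confident
    positive-bag instances that is eligible for sampling."""
    if t <= t_warmup:
        return 0.0  # warm-up: anchors come only from negative bags
    alpha = (r_T - r0) / (t_total - t_warmup)
    return r0 + alpha * (t - t_warmup)

def top_r_fraction(instances, probs, r):
    """Keep the top-r fraction of instances by predicted probability,
    a sketch of how X+pos(r) and X-pos(r) could be built."""
    k = max(1, int(r * len(instances)))
    order = sorted(range(len(instances)), key=lambda i: -probs[i])
    return [instances[i] for i in order[:k]]
```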
4 EXPERIMENTS
We evaluate ItS2CLR on three MIL datasets described in Section 4.1. In Section 4.2 we show that ItS2CLR consistently outperforms approaches that use CSSL feature-extraction by a substantial margin on all three datasets for different choices of aggregators. In Section 4.3 we show that ItS2CLR outperforms alternative finetuning approaches based on cross-entropy loss minimization and end-to-end training across a wide range of settings where the prevalence of positive instances and bag size vary. In Section 4.4, we show that ItS2CLR is able to improve features obtained from a variety of pretraining schemes and network architectures.
4.1 DATASETS
We evaluate the proposed framework on three cancer diagnosis tasks. When training our models, we select the model with the highest bag-level performance on the validation set and report the performance on a held-out test set. More information about the datasets, experimental setup, and implementation is provided in Appendix A.
Camelyon16 (Bejnordi et al., 2017) is a popular benchmark for MIL (Li et al., 2021; Shao et al., 2021; Zhang et al., 2022) where the goal is to detect breast-cancer metastasis in lymph node sections. It consists of 400 whole-slide histopathology images. Each whole slide image (WSI) corresponds to a bag with a binary label indicating the presence of cancer. Each WSI is divided into an average of 625 tiles at 5x magnification, which correspond to individual instances. The dataset also contains pixel-wise annotations indicating the presence of cancer, which can be used to derive ground-truth instance-level labels.
TCGA-LUAD is a dataset from The Cancer Genome Atlas (TCGA) (tcg), a landmark cancer genomics program, where the associated task is to detect genetic mutations in cancer cells. Detecting these mutations is important to determine treatment options for LUAD (Coudray et al., 2018; Fu et al., 2020). The data contains 800 labeled tumorous frozen WSIs from lung adenocarcinoma (LUAD). Each WSI is divided into an average of 633 tiles at 10x magnification corresponding to unlabeled instances.
The Breast Ultrasound Dataset contains 28,914 B-mode breast ultrasound exams (Shen et al., 2021a). The associated task is to detect breast cancer. Each exam contains between 4 and 70 images (18.8 images per exam on average) corresponding to individual instances, but only a bag-level label indicating the presence of cancer is available per exam. Additionally, a subset of images is annotated, which makes it possible to also evaluate instance-level performance. This dataset is imbalanced at the bag level: only 5,593 of 28,914 exams contain cancer.
4.2 COMPARISON WITH CONTRASTIVE SELF-SUPERVISED LEARNING
In this section, we compare the performance of ItS2CLR to a baseline that performs feature extraction via the CSSL method SimCLR (Chen et al., 2020). This approach has achieved state-of-the-art performance on multiple WSI datasets (Li et al., 2021). To ensure a fair comparison, we also initialize the feature extractor in ItS2CLR using SimCLR. Table 2 shows that ItS2CLR clearly outperforms the SimCLR baseline on all three datasets. The performance improvement is particularly significant in Camelyon16, where it achieves a bag-level AUC of 0.943, outperforming the baseline by an absolute margin of 8.87%. ItS2CLR also outperforms an improved baseline reported by Li et al. (2021) with an AUC of 0.917, which uses higher resolution tiles than in our experiments (both 20x and 5x, as opposed to only 5x).
To perform a more exhaustive comparison of the features learned by SimCLR and ItS2CLR, we compare them in combination with several different popular MIL aggregators¹: max pooling, top-k pooling (Shen et al., 2021b), attention-MIL pooling (Ilse et al., 2018), DS-MIL pooling (Li et al., 2021), and a transformer (Chen and Krishnan, 2022) (see Appendix C for a detailed description). Table 3 shows that the ItS2CLR features outperform the SimCLR features by a large margin for all aggregators, and are substantially more stable (the standard deviation of the AUC over multiple trials is lower).
We also evaluate instance-level accuracy, which can be used to interpret the bag-level prediction (for instance, by revealing tumor locations). In Table 4, we report the instance-level AUC, F1 score, and Dice score of both ItS2CLR and the SimCLR-based baseline on Camelyon16. ItS2CLR again exhibits stronger performance. Figure 3 shows an example of instance-level predictions in the form of a tumor localization map.
4.3 COMPARISON WITH ALTERNATIVE APPROACHES
In Tables 3, 4 and 5, we compare ItS2CLR with the approaches described below. Table 6 reports additional comparisons at different witness rates (the fraction of positive instances in positive bags), created synthetically by modifying the ratio between negative and positive instances in Camelyon16.
Finetuning with ground-truth instance labels provides an upper bound on the performance that can be achieved through feature improvement. ItS2CLR does not reach this gold standard, but substantially closes the gap.
1To be clear, the ItS2CLR features are learned using the DS-MIL aggregator, as described in Section 3, and then frozen before combining them with the different aggregators.
Cross-entropy finetuning with pseudo labels, which we refer to as CE finetuning, consistently underperforms ItS2CLR when combined with different aggregators, except at high witness rates. We conjecture that this is due to the sensitivity of the cross-entropy loss to incorrect pseudo labels.
Ablated versions of ItS2CLR where we do not apply iterative updates of the pseudo labels (w/o iter.), or our self-paced learning scheme (w/o SPL) or both (w/o both) achieve substantially worse performance than the full approach. This indicates that both of these ingredients are critical in learning discriminative features.
End-to-end training is often computationally infeasible in medical applications. We compare ItS2CLR to end-to-end models on a downsampled version of Camelyon16 (see Appendix A.5) and on the breast ultrasound dataset. For a fair comparison, all end-to-end models use the same CSSL-pretrained weights and aggregator as used in ItS2CLR. Table 5 shows that ItS2CLR achieves better instance- and bag-level performance than end-to-end training. The analysis in Appendix B.3 shows that end-to-end training overfits quickly when the bag size is large.
4.4 IMPROVING DIFFERENT PRETRAINED REPRESENTATIONS
In this section, we show that ItS2CLR is capable of improving representations learned by different pretraining methods: supervised training on ImageNet and two non-contrastive SSL methods, BYOL (Grill et al., 2020a) and DINO (Caron et al., 2021). DINO is based on the ViT-S/16 architecture (Dosovitskiy et al., 2020), whereas the other methods are based on ResNet-18. Table 7a shows the result of initializing ItS2CLR with these features (as well as with SimCLR). The different initializations achieve varying degrees of pseudo label accuracy, but ItS2CLR improves the performance of all of them, demonstrating the robustness of the proposed framework.
5 RELATED WORK
Self-supervised learning Contrastive learning methods have become popular in unsupervised representation learning, achieving state-of-the-art self-supervised learning performance for natural images (Chen et al., 2020; He et al., 2020; Grill et al., 2020a; Caron et al., 2020; Zbontar et al., 2021; Caron et al., 2021). These methods have also shown promising results in medical imaging (Li et al., 2021; Azizi et al., 2021; Kaku et al., 2021; Zhu et al., 2022; Ciga et al., 2022). Recently, Li et al. (2021) applied SimCLR (Chen et al., 2020) to extract instance-level features for WSI MIL tasks and achieved state-of-the-art performance. However, Arora et al. (2019) point out the
potential issue of class collision in contrastive learning, i.e. that some negative pairs may actually have the same class. Prior works on alleviating class collision problem include reweighting the negative and positive terms with class ratio (Chuang et al., 2020), pulling closer additional similar pairs (Dwibedi et al., 2021), and avoiding pushing apart negatives that are inferred to belong to the same class based on a similarity metric (Zheng et al., 2021). In contrast, we propose a framework that leverages information from the bag-level labels to iteratively resolve the class collision problem.
There also exist non-contrastive alternatives that avoid introducing negative pairs (e.g. BYOL (Grill et al., 2020b), DINO (Caron et al., 2021), and SimSiam (Chen and He, 2021)). However, Wang et al. (2021) report that removing the negatives can make different object categories overlap and result in under-clustering, which limits the model’s ability to learn discriminative features.
Multiple instance learning A major part of MIL works focuses on improving the MIL aggregator. Traditionally, non-learnable pooling operators such as mean-pooling and max-pooling were commonly used in MIL (Pinheiro and Collobert, 2015; Feng and Zhou, 2017). More recent methods parameterize the aggregator using neural networks that employ attention mechanisms (Ilse et al., 2018; Li et al., 2021; Shao et al., 2021; Chen and Krishnan, 2022). This research direction is complementary to our proposed approach, which focuses on obtaining better instance representations, and can be combined with different types of aggregators (see Section 4.2).
6 CONCLUSION
In this work, we investigate how to improve feature-extraction in multiple-instance learning models. We identify a limitation of contrastive self-supervised learning: class collision hinders it from learning discriminative features in class-imbalanced MIL problems. To address this, we propose a novel framework that iteratively refines the features with pseudo labels estimated by the aggregator. Our method outperforms the existing state-of-the-art MIL methods on three medical datasets, and can be combined with different aggregators and pretrained feature extractors.
The proposed method does not outperform a cross-entropy-based baseline at very high witness rates, suggesting that it is mostly suitable for low-witness-rate scenarios (however, it is worth noting that this is the regime more commonly encountered in medical applications such as cancer diagnosis). In addition, there is a performance gap between our method and finetuning using instance-level ground truth, which suggests there is further room for improvement.
A EXPERIMENTS
A.1 DATASET
Camelyon16 Camelyon16 is a public dataset for detection of metastasis in breast cancer. This dataset consists of 271 training and 129 test whole slide images (WSI), which are further divided into roughly 3.2 million patches at 20× magnification and 0.25 million patches at 5× magnification. On average, at 20× and 5× magnification each slide contains approximately 8,000 and 625 patches respectively. Each WSI is paired with pixel-level annotations indicating the position of tumors (if any are present). We ignore the pixel-level annotations during training and consider only slide-level labels (i.e. the slide is considered positive if it contains any annotated tumor regions). As a result, positive bags contain mixtures of patches with tumors and patches with healthy tissue. Negative bags contain only patches with healthy tissue. The ratio between positive and negative patches in this dataset is highly imbalanced: only a small fraction of patches (less than 10%) in the positive slides contain tumor.
TCGA-LUAD TCGA for Lung Adenocarcinoma (LUAD) is a subset of TCGA (The Cancer Genome Atlas), a landmark cancer genomics program. It consists of 800 tumorous frozen whole-slide histopathology images and the corresponding genetic mutation status. Each WSI is paired with binary labels indicating whether each gene is mutated or wild type. In this experiment, we build MIL models to detect four mutations - EGFR, KRAS, STK11, and TP53, which are sensitizing mutations that can impact treatment options in LUAD (Coudray et al., 2018; Fu et al., 2020). We split the data randomly into training, validation and test sets so that each patient appears in only one of the subsets. After splitting the data, 477 images are in the training set, 96 images are in the validation set, and 227 images are in the test set.
Breast Ultrasound dataset The Breast Ultrasound Dataset includes 28,914 ultrasound exams (Shen et al., 2021a). An exam is labeled as cancer-positive if there is a pathology-confirmed malignant finding associated with this exam. In this dataset, 5593 exams are cancer-positive. On average, each exam contains approximately 18 images. Patients in the dataset were randomly divided into a training set (60%), a validation set (10%), and test set (30%). Each patient was included in only one of the three sets. We show 5 example breast ultrasound images in Figure 4.
A.2 IMPLEMENTATION DETAILS
All experiments were conducted on NVIDIA RTX8000 GPUs and NVIDIA V100 GPUs. For all models, we perform model selection during training based on bag-level AUC evaluated on the validation set.
Camelyon16 We follow the same preprocessing and pretraining steps as Li et al. (2021). To preprocess the slides, we cropped the slides into tiles at 5x magnification, filtered out tiles that do not contain enough tissue (average saturation < 30), and resized the images to a resolution of 224 x 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with default settings (nearest neighbor sampling).
We pretrain the feature extractor, ResNet18 (He et al., 2016), with SimCLR (Chen et al., 2020) for a maximum of 600 epochs. Each patch is represented by a 512-dimensional vector. We set the batch size to 512 and the temperature to 0.5. We use SGD with a learning rate of 0.03, weight decay of 0.0001, and a cosine annealing scheduler. We also train a MIL aggregator using the instance features extracted by the feature extractor in order to monitor the bag AUC of the downstream task on the validation set. During finetuning with ItS2CLR, we finetune the feature extractor for a maximum of 50 epochs.
The batch size is set to 512, and the learning rate is set to 10−2. At the feature extractor training stage, we apply random data augmentation to each instance, including:
• Random (p = 0.8) color jittering: brightness, contrast, and saturation factors are uniformly sampled from [0.2, 1.8], hue factor is uniformly sampled from [−0.2, 0.2];
• Random gray scale (p = 0.2); • Random Gaussian blur with kernel size of 0.06 times the size of an image; • Random horizontal/vertical flipping with 0.5 probability.
When training the DS-MIL aggregator, we follow the settings in (Li et al., 2021). We use the Adam optimizer during training. Since each bag may contain a different number of instances, we follow (Li et al., 2021) and set the batch size to just one bag. We train each model for a maximum of 350 epochs. We use an initial learning rate of 2× 10−4, and use the StepLR scheduler to reduce the learning rate by 0.5 every 75 epochs. Details on the hyperparameters used for training the aggregator are in Appendix C.
TCGA-LUAD To preprocess the slides, we cropped them into tiles at 10x magnification, filtered out the background tiles that do not contain enough tissue (average saturation less than 30), and resized the images to a resolution of 224 x 224 pixels. Resizing was performed using the Pillow package (Clark, 2015) with nearest neighbor sampling. These tiles were color-normalized using the Vahadane method (Vahadane et al., 2016).
To train the feature extractor, we perform the same process as for Camelyon16.
We also use DS-MIL (Li et al., 2021) as the aggregator. When training the aggregator, we resample the ratio of positive and negative bags to keep the class ratio balanced. We train the aggregator for a maximum of 100 epochs using the Adam optimizer with the learning rate set to 2×10−4 and reduce the learning rate by 0.5 every 50 epochs.
Breast Ultrasound We follow the same preprocessing steps as Shen et al. (2021a). All images were resized to 224 x 224 pixels using bilinear interpolation. We used ResNet18 (He et al., 2016) as the feature extractor and pretrained it using SimCLR (Chen et al., 2020) for 100 epochs. We adopt the same pretraining approach as for Camelyon16. We used Instance Attention-MIL as an aggregator (Ilse et al., 2018). Given a bag of images x_1, ..., x_k and a feature extractor f, the aggregator first computes instance-level predictions ŷ_i for each image x_i. It then calculates an attention score α_i ∈ [0, 1] for each image x_i using its feature vector f(x_i). Lastly, the bag-level prediction is computed as the average instance prediction weighted by the attention scores, ŷ = ∑_{i=1}^k α_i ŷ_i. To optimize the aggregator, we trained it using Adam with a learning rate of 10^−3 for a maximum of 350 epochs.
A.3 COMPUTATIONAL COMPLEXITY
It takes 600 epochs (around 90 hours) to train SimCLR. Our finetuning only takes 50 extra epochs (around 10 hours), which is only 1/10 of the pretraining time. Updating the pseudo labels is also efficient: updating instance features and training the MIL aggregator only takes around 10 minutes, and we only do it every 5 epochs.
A.4 HYPERPARAMETERS OF TRAINING THE FEATURE EXTRACTOR IN ITS2CLR
Hyperparameter tuning The hyperparameters of the proposed method include: the initial threshold used for binarization of the prediction to produce pseudo labels, η ∈ [0.1, 0.9]; the proportion of positive queries sampled, p^+ ∈ [0.05, 0.5]; and the initial ratio r_0 ∈ [0.01, 0.7] and the final ratio r_T ∈ [0.2, 0.8] in the self-paced sampling scheme. For Camelyon16, we obtain the highest bag-level validation AUC using the following hyperparameters: η = 0.3, p^+ = 0.2, r_0 = 0.2 and r_T = 0.8. We use the feature extractor trained under this setting in Tables 2, 3 and 4. The complete list of hyperparameters in different experiments is reported in Table 8.
Sensitivity analysis We conduct a sensitivity analysis for each hyperparameter on Camelyon16, and observe robust performance over a range of hyperparameter values.
• Threshold η: The choice of η influences the instance-level pseudo labels. As shown in Figure 5, the outputs of the instance-level classifier are mostly close to 0 or 1, so the pseudo labels do not dramatically vary for a wide range of η. We conducted a small ablation experiment on the importance of η. Figure 6 (left) shows that ItS2CLR is quite robust to the value of η, except for some extreme values. If η is too small (e.g. 0.1), it can introduce a significant number of false positives. If η is too large (e.g. 0.8), it can mistakenly exclude some useful positive samples, causing a drop in the performance. In the main paper, since negative instances are more prevalent than positive instances, a threshold of 0.3 (less than 0.5) can increase the recall for the positive instances.
• Sampling ratio of query instances over pseudo labels: We use p^+ to denote the percentage of positive query instances used during the contrastive learning stage. Figure 6 (right) shows that it is desirable to choose a relatively small p^+. Since there are far fewer positive instances than negative instances, keeping the ratio of positive queries low avoids repeatedly sampling from a limited number of positive instances. Also, since the negative instance set X^-_neg is clean, there is more label noise among the positive pseudo labels.
• The initial rate r_0 and final rate r_T for the self-paced sampling scheduler: Figure 7 shows that ItS2CLR is also generally robust to r_0 and r_T. However, an extremely large initial rate r_0 (high confidence in the pseudo labels) may introduce more noise during training and hurt performance. Conversely, an extremely small r_T (low confidence in the pseudo labels) may prevent the model from using more data, also hurting performance.
• Sampling during warm-up: During the warm-up phase, we sample query instances from X^-_neg. An alternative choice is to sample the query instance from X^+_pos and the corresponding set D_x from X^-_neg. However, our experiments show that the resulting bag-level AUC drops to 90.91% under this setting, which is significantly lower than the 94.25% achieved by the proposed method. This comparison demonstrates the importance of using clean negative instances as query images during warm-up.
A.5 EXPERIMENTS ON SYNTHETIC VERSIONS OF CAMELYON16
Simulation of witness rates (WR)
Since the ground truth instance-level labels are available for Camelyon16, we can conduct experiments on synthetic versions of the dataset to study the impact of the prevalence of positive instances
on the performance of the proposed approach and the baselines. Section 2 describes how the performance of CSSL is affected by low witness rates (fraction of positive instances in each positive bag). To study the robustness of the proposed framework to the witness rate in the data, we increase or decrease the witness rate of Camelyon16 by randomly dropping negative or positive instances within each bag respectively. The percentage of retained instances and the resulting witness rates are reported in Table 6.
Downsampled version of Camelyon16 for end-to-end training
In order to enable end-to-end training, we downsample each bag in Camelyon16 to around 500 instances so that it fits in the memory of a GPU. To achieve this, we divide each large bag into smaller bags while keeping the witness rate of each sub-bag at a similar level to the original bag. In more detail, if the bag size is smaller than or equal to 500, we keep the original bag. If the bag size is greater than 500, we divide the bag into several sub-bags such that each sub-bag contains approximately 500 instances. We then randomly partition the negative instances within the original bag evenly across the desired number of sub-bags, and do the same for the positive instances. If the number of positive instances within a positive bag is smaller than the desired number of sub-bags, we reduce the number of sub-bags to the number of positive instances. We then combine the positive and negative instances to form the sub-bags. This ensures that the bag-level labels are correct and that the witness rate of each positive sub-bag remains similar to the original bag.
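A minimal sketch of this splitting procedure, under our own reading of the description above (the function name and round-robin assignment are illustrative assumptions):

```python
import random

def split_bag(instances, labels, max_size=500):
    """Split a large bag into sub-bags of roughly max_size instances,
    spreading positives and negatives evenly so each positive sub-bag
    keeps a witness rate close to the original bag."""
    if len(instances) <= max_size:
        return [instances]
    n_sub = -(-len(instances) // max_size)  # ceil division
    pos = [x for x, y in zip(instances, labels) if y == 1]
    neg = [x for x, y in zip(instances, labels) if y == 0]
    # A positive bag cannot yield more positive sub-bags than it has positives.
    if pos:
        n_sub = min(n_sub, len(pos))
    random.shuffle(pos)
    random.shuffle(neg)
    sub_bags = [[] for _ in range(n_sub)]
    for i, x in enumerate(pos + neg):  # round-robin: positives first
        sub_bags[i % n_sub].append(x)
    return sub_bags
```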
A.6 DESCRIPTION OF THE ABLATION STUDY
Details for CE finetuning with/without Iterative Updating
• CE + w/o iterative updating: we use the same set of initial pseudo labels as in our ItS2CLR framework. All instances within negative bags are labeled as negative; for instances in positive bags, we assign pseudo labels according to the instance predictions of the aggregator. During the subsequent finetuning epochs of the aggregator, the pseudo labels are kept fixed.
• CE + iterative updating: starting from CE + w/o iterative updating, the pseudo labels are updated every few epochs and in turn used to guide the finetuning.
Details for ItS2CLR with/without SPL
• ItS2CLR without iterative updating: We keep everything the same as in the full ItS2CLR procedure (including the SPL strategy), but we do not apply steps 7, 8 and 9 in Algorithm 1. As a result, the pseudo labels always remain equal to the initial set of pseudo labels.
• ItS2CLR without SPL: We keep everything the same as in the full ItS2CLR procedure (including iterative updating), but modify step 10 in Algorithm 1: instead of training with the self-paced sampling scheme of Section 3.2, we use all pseudo-labeled data from the beginning of finetuning.
B ADDITIONAL RESULTS
We present here additional results to supplement those presented in Section 4.
B.1 LEARNING CURVES
F1-Score plot corresponding to Figure 1: In Figure 8, we show the max F1 score curve corresponding to the right side of Figure 1. This plot confirms the importance of self-paced learning and iterative updating in ItS2CLR.
Instance pseudo label AUC comparison with cross-entropy finetuning: Figure 9 compares ItS2CLR with an alternative approach that finetunes the feature extractor using cross-entropy (CE) loss on the Camelyon16 dataset. Without iterative updating, CE finetuning rapidly overfits to the noise in the pseudo labels. Iterative updating prevents this to some extent, but does not match the performance of ItS2CLR, which produces increasingly accurate pseudo labels as the iterations proceed.
B.2 INSTANCE-LEVEL EVALUATION
In order to evaluate instance-level performance, we report classification metrics including AUC, F1-score, AUPRC and Dice score for localization.
The Dice score is defined as follows:

Dice = 2 ∑_i y_i p_i / ( ∑_i y_i + ∑_i p_i ),  (4)

where y_i and p_i are the ground truth and predicted probability for the i-th sample. It penalizes predictions with low confidence. The predicted probability is computed from the output of the MIL model, s_i, via linear scaling:

p_i = σ(a s_i + b),  (5)

where a ∈ [−5, 5] and b ∈ [0.1, 10] are chosen to maximize the Dice score on the validation set.

Max pooling aggregator: In Table 4, we show that our model achieves better weakly supervised localization performance than other methods when DS-MIL is used as the aggregator. In Table 9, we show that the same conclusion holds for an aggregator based on max-pooling.
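As a concrete illustration of equations (4) and (5) above, here is a minimal NumPy sketch of the Dice computation and a grid search over (a, b); the grids and function names are our own assumptions.

```python
import numpy as np

def dice_score(y, p):
    """Equation 4: soft Dice between ground-truth labels y and probabilities p."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return 2.0 * (y * p).sum() / (y.sum() + p.sum())

def calibrate(s_val, y_val,
              a_grid=np.linspace(-5, 5, 41),
              b_grid=np.linspace(0.1, 10, 40)):
    """Equation 5: choose (a, b) maximizing validation Dice, then map
    MIL outputs s to probabilities via a sigmoid."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    s_val = np.asarray(s_val, float)
    _, a, b = max((dice_score(y_val, sigmoid(a * s_val + b)), a, b)
                  for a in a_grid for b in b_grid)
    return lambda s_new: sigmoid(a * np.asarray(s_new, float) + b)
```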
Linear evaluation: In Table 10, we report results obtained by training a logistic regression model using the features obtained from the same approaches as in Table 4, following a standard linear evaluation pipeline in representation learning (Chen et al., 2020). ItS2CLR again achieves the best instance-level performance. We also produce bag-level predictions using the maximum output of the linear classifier for each bag, which again showcases that superior instance-level performance translates into superior bag-level classification.
B.3 COMPARISON WITH END-TO-END TRAINING
In this section we provide additional results to complement Table 5, where ItS2CLR is compared to end-to-end models. The end-to-end training is conducted with the same aggregators for each dataset as described in Section 4 and Appendix A.2.
Camelyon16 Figure 10 shows that an end-to-end model trained on the downsampled version of Camelyon16 described in Section A.5 rapidly overfits, both when trained from scratch and from SimCLR-pretrained weights. The two-stage model, on the other hand, is less prone to overfitting. Table 11 shows that the two-stage learning pipeline outperforms end-to-end training, and is in turn outperformed by ItS2CLR.
Breast Ultrasound dataset Table 12 shows that for the breast-ultrasound dataset end-to-end training outperforms the SimCLR+Aggregator baseline, but is outperformed by ItS2CLR.
B.4 TUMOR LOCALIZATION MAPS
Figure 11 provides additional tumor localization maps.
C MIL AGGREGATORS
C.1 FORMULATION OF MIL AGGREGATORS
In this section, we describe the different MIL aggregators benchmarked in Section 4.2 and Table 3.
Let B denote a collection of sets of feature vectors in R^d. The bags of extracted features in the dataset are denoted by {H_b}_{b=1}^B ⊂ B. An aggregator is defined as a function g : B → [0, 1] mapping bags of extracted features to a score in [0, 1].
There exist two main approaches in MIL:
1. The instance-level approach: using a logistic classifier on each instance, then aggregating instance predictions over a bag (e.g. max-pooling, top k-pooling).
2. The embedding-level approach: aggregating the instance embeddings, then obtaining a bag-level prediction via a bag-level classifier (e.g. attention-based aggregator, Transformer).
We denote the embeddings of the instances within a bag by H = {h_k}_{k=1}^K, where K is the number of instances.
Max-pooling obtains bag-level predictions by taking the maximum of the instance-level predictions produced by a logistic instance classifier ϕ, that is

g_ϕ(H) = max_{k=1,...,K} ϕ(h_k).  (6)
Top-k pooling (Shen et al., 2021b) produces bag-level predictions using the mean of the top-M ranked instance-level predictions produced by a logistic instance classifier ϕ, where M is a hyperparameter.
Let top_M(ϕ, H) denote the indices of the elements in H for which ϕ produces the highest M scores. Then

g_ϕ(H) = (1/M) ∑_{k∈top_M(ϕ,H)} ϕ(h_k).  (7)
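For illustration, equations (6) and (7) admit a direct PyTorch implementation; the signature of the instance classifier ϕ (a callable mapping (K, d) embeddings to K scores) is our own assumption.

```python
import torch

def max_pooling(phi, H):
    """Equation 6: the bag score is the maximum instance score. H: (K, d)."""
    return phi(H).max()

def top_k_pooling(phi, H, M):
    """Equation 7: the bag score is the mean of the M highest instance scores."""
    scores = phi(H).flatten()  # (K,)
    top_m = torch.topk(scores, k=min(M, scores.numel())).values
    return top_m.mean()
```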
Attention-based MIL (Ilse et al., 2018) aggregates instance embeddings using a sum weighted by attention weights. The bag-level estimate is then computed from the aggregated embedding by a logistic bag-level classifier φ:

g_φ(H) = φ( ∑_{k=1}^K a_k h_k ),  (8)
where a_k is the attention weight on instance k, computed as

a_k = exp( w^T tanh(V h_k^T) ) / ∑_{j=1}^K exp( w^T tanh(V h_j^T) ),  (9)

where w ∈ R^{p×1} and V ∈ R^{p×d} are learnable parameters and p is the dimension of the hidden layer.

DS-MIL combines instance-level and embedding-level aggregation; we refer to Li et al. (2021) for more details on this approach.
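A minimal PyTorch sketch of equations (8) and (9) follows; the module structure and the use of a sigmoid output for φ are our own assumptions.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Sketch of attention-based MIL (Ilse et al., 2018); shapes are assumptions."""
    def __init__(self, d, p):
        super().__init__()
        self.V = nn.Linear(d, p, bias=False)   # V in R^{p x d}
        self.w = nn.Linear(p, 1, bias=False)   # w in R^{p x 1}
        self.classifier = nn.Linear(d, 1)      # bag-level classifier phi

    def forward(self, H):                       # H: (K, d)
        logits = self.w(torch.tanh(self.V(H)))  # (K, 1), i.e. w^T tanh(V h_k)
        a = torch.softmax(logits, dim=0)        # attention weights, eq. (9)
        z = (a * H).sum(dim=0)                  # weighted sum of embeddings
        return torch.sigmoid(self.classifier(z))  # eq. (8)
```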
Transformer (Chen and Krishnan, 2022) aggregation uses an L-layer Transformer to process the set of instance features H. The initial set H^(0) is set equal to H, and is then processed by the Transformer as follows:

H′^(l) = MSA( H^(l−1) ) + H^(l−1),
H^(l) = MLP( H′^(l) ) + H′^(l),  (10)

for l = 1, ..., L, where MSA is multi-head self-attention and MLP is a multi-layer perceptron network. The processed vectors H^(L) are then fed to attention-based MIL (Ilse et al., 2018) to obtain bag-level predictions:
g_φ(H^(L)) = φ( ∑_{k=1}^K a_k h_k^(L) ),  (11)

where a_k is the attention weight on instance k, computed as

a_k = exp( w^T tanh(V (h_k^(L))^T) ) / ∑_{j=1}^K exp( w^T tanh(V (h_j^(L))^T) ),  (12)

where w ∈ R^{p×1} and V ∈ R^{p×d} are learnable parameters and p is the dimension of the hidden layer.
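For illustration, equations (10)-(12) can be sketched by composing PyTorch's Transformer encoder with the AttentionMIL module from the previous sketch. Note that PyTorch's encoder layer adds layer normalization, which the equations above omit, and the hyperparameter values below are assumptions.

```python
import torch.nn as nn

class TransformerMIL(nn.Module):
    """Sketch of eq. (10) followed by attention pooling (eq. (11)-(12))."""
    def __init__(self, d=512, n_heads=8, n_layers=2, p=512):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.pool = AttentionMIL(d, p)  # from the sketch above

    def forward(self, H):                               # H: (K, d)
        H_L = self.encoder(H.unsqueeze(0)).squeeze(0)   # MSA + MLP residual blocks
        return self.pool(H_L)                           # attention-weighted bag score
```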
C.2 IMPLEMENTATION DETAILS
Top-k pooling. We select the ratio in Top-k pooling from the set {0.1%, 1%, 3%, 10%, 20%}.

DS-MIL. The weight between the two cross-entropy loss functions in DS-MIL is selected from the interval [0.1, 5] based on the best validation performance.
Attention-based MIL. The hidden dimension of the attention module to compute the attention weight is set equal to the dimension of the input feature vector (512).
Transformer. We add a lightweight two-layer Transformer block to process instance features. We did not observe an increase in performance with additional blocks. | 1. What is the focus of the paper in the context of multiple-instance learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of representation learning and class separation?
3. Do you have any concerns or suggestions regarding the methodology and architecture used in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any comparisons or discussions missing in the paper that would be valuable for understanding the contribution and effectiveness of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper addresses multiple-instance-learning (MIL) in the context of class imbalance. In the MIL setting, instances are first mapped to feature vectors using an appropriate embedding. In a second step, feature vectors are aggregated to produce a bag-level prediction. The authors point out that SOTA approaches leverage self-supervised contrastive loss terms to learn better representations as part of the first step, but these mechanisms sample instances at random, which can lead to class-specialized learnt representations when datasets are class-imbalanced. They explain that, for the dominant class label, negative pairs will contain two examples of the same class with high probability.
To improve representation learning in this case, the authors introduce an iterative process that leverages pseudo-labels obtained from a previous training run. These pseudo-labels are at the instance level and can thus be used during the next representation learning step as part of a modified InfoNCE loss.
Strengths And Weaknesses
Pros :
the proposed approach succeeds at learning a representation where classes are better separated (as shown from Table 1)
the methodological contributions (modified loss and self-pacing) are quite well presented.
Cons :
the fact that the method requires an aggregator that learns an instance-level classifier in addition to the bag-level classifier is not sufficiently discussed or justified. Essentially, the instance-level classifier requires that the aggregator comprises a branch in which each tile is mapped to a scalar. I believe that many architectures other than DS-MIL could thus have been tested.
the authors do not compare themselves to SOTA performances. For instance, on Camelyon16, SSL (MoCo v2) + Attention MIL reaches an AUC of 98.7 [Dehaene et al 2020], which is far better than [Li et al. 2021]. Without this comparison, the reader cannot know whether it is worth using/reproducing the method.
Some aspects of the paper lack clarity (Alg. 1 and the experimental part) because the authors rely too much on the appendices.
Clarity, Quality, Novelty And Reproducibility
Clarity: Algorithm 1 and the general organization deserve more explanation. However, the motivation, loss modification, and self-pacing parts are clear. The experimental section is dense and some information is hard to find.
Quality: the method is technically sound but it is not validated w.r.t. SOTA approaches.
Originality: the approach is fairly original and well positioned in the literature.
Reproducibility: the method is simple and does not seem difficult to reproduce. This said, I haven't seen in the text any comments on reproducibility or on the possibility of making the code publicly available.
ICLR | Title
Causal Reasoning from Meta-reinforcement learning
Abstract
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions. Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
1 INTRODUCTION
Many machine learning algorithms are rooted in discovering patterns of correlation in data. While this has been sufficient to excel in several areas (Krizhevsky et al., 2012; Cho et al., 2014), sometimes the problems we are interested in are fundamentally causal. Answering questions such as “Does smoking cause cancer?” or “Was this person denied a job due to racial discrimination?” or “Did this marketing campaign cause sales to go up?” all require an ability to reason about causes and effects and cannot be achieved by purely associative inference. Even for problems that are not obviously causal, like image classification, it has been suggested that some failure modes emerge from lack of causal understanding. Causal reasoning may be an essential component of natural intelligence and is present in human babies, rats and even birds (Leslie, 1982; Gopnik et al., 2001; 2004; Blaisdell et al., 2006; Lagnado et al., 2013). There is a rich literature on formal approaches for defining and performing causal reasoning (Pearl, 2000; Spirtes et al., 2000; Dawid, 2007; Pearl et al., 2016).
Here we investigate whether procedures for learning and using causal structure can be produced by meta-learning. The approach of meta-learning is to learn the learning (or inference) procedure itself, directly from data. We adopt the specific method of Duan et al. (2016) and Wang et al. (2016), training a recurrent neural network (RNN) through model-free reinforcement learning. We train on a large family of tasks, each underpinned by a different causal structure.
The use of meta-learning avoids the need to manually implement explicit causal reasoning methods in an algorithm, offers advantages of scalability by amortizing computations, and allows automatic incorporation of complex prior knowledge (Andrychowicz et al., 2016; Wang et al., 2016; Finn et al., 2017). Additionally, by learning end-to-end, the algorithm has the potential to find the internal representations of causal structure best suited for the types of causal inference required.
2 PROBLEM SPECIFICATION AND APPROACH
This work probed how an agent could learn to perform causal reasoning in three distinct settings – observational, interventional, and counterfactual – corresponding to different types of data available to the agent during the first phase of an episode.
In the observational setting (Experiment 1), the agent could only obtain passive observations from the environment. This type of data allows an agent to infer associations (associative reasoning) and, when the structure of the underlying causal model permits it, to estimate the effect that changing a variable in the environment has on another variable, namely to estimate causal effects (cause-effect reasoning).
In the interventional setting (Experiment 2), the agent could directly set the values of some variables in the environment. This type of data in principle allows an agent to estimate causal effects for any underlying causal model.
In the counterfactual setting (Experiment 3), the agent first had an opportunity to learn about the causal graph through interventions. At the last step of the episode, it was asked a counterfactual question of the form “What would have happened if a different intervention had been made in the previous time-step?”.
Next we will formalize these three settings and patterns of reasoning possible in each, using the graphical model framework (Pearl, 2000; Spirtes et al., 2000; Dawid, 2007)1, and introduce the meta-learning methods that we will use to train agents that are capable of such reasoning.
2.1 CAUSALITY
Causal relationships among random variables can be expressed using causal directed acyclic graphs (DAGs) (see Appendix). A causal DAG is a graphical model that captures both independence and causal relations. Each node X_i corresponds to a random variable, and the joint distribution p(X_1, ..., X_N) is given by the product of the conditional distributions of each node X_i given its parent nodes pa(X_i), i.e. p(X_{1:N} ≡ X_1, ..., X_N) = ∏_{i=1}^N p(X_i | pa(X_i)).
Edges carry causal semantics: if there exists a directed path from X_i to X_j, then X_i is a potential cause of X_j. Directed paths are also called causal paths. The causal effect of X_i on X_j is the conditional distribution of X_j given X_i restricted to only causal paths.
[Figure: left, an example causal DAG G with nodes A, E, H and conditionals p(A), p(E|A), p(H|A,E); right, the intervened graph G→E=e in which p(E|A) is replaced by δ_{E=e} while p(A) and p(H|A,E) are unchanged.]
An example causal DAG G is given in the figure on the left, where E represents hours of exercise in a week, H cardiac health, andA age. The causal effect ofE onH is the conditional distribution restricted to the path E→H, i.e. excluding the path E←A→H. The variable A is called a confounder, as it confounds the causal effect with non-causal statistical influence.
Simply observing cardiac health conditioning on exercise level from p(H|E) (associative reasoning) cannot answer if change in exercise levels cause changes in cardiac health (cause-effect reasoning), since there is always the possibility that correlation between the two is because of the common confounder of age.
Cause-effect Reasoning. The causal effect can be seen as the conditional distribution p_{→E=e}(H|E=e)² on the graph G_{→E=e} above (right), resulting from intervening on E by replacing p(E|A) with a delta distribution δ_{E=e} (thereby removing the link from A to E) and leaving the remaining conditional distributions p(H|E,A) and p(A) unaltered. The rules of do-calculus (Pearl, 2000; Pearl et al., 2016) tell us how to compute p_{→E=e}(H|E=e) using observations from G. In this case p_{→E=e}(H|E=e) = ∑_A p(H|E=e,A) p(A)³. Therefore, do-calculus enables us to reason about the intervened graph G_{→E=e} even if our observations are from G. This is the scenario captured by our observational setting outlined above. Such inferences are always possible if the confounders are observed, but in the presence of unobserved confounders, for many DAG structures the only way to compute causal effects is by collecting observations directly from G_{→E}, i.e. by actively intervening on the world to fix the value of the variable E=e and observing the remaining variables. In our interventional setting, outlined above, the agent has access to such interventions.
1This approach typically decouples the challenges of causal induction, i.e. of inferring the structure of the causal graph from data, and that of causal reasoning on the induced graph. The formalism we describe here assumes that the structure of the causal graph is known. In our experiments however, our agents concurrently carry out causal induction.
2In the causality literature, this distribution would most often be indicated with p(H|do(E=e)). We prefer to use p→E=e(H|E=e) to highlight that intervening on E results in changing the original distribution p, by structurally altering the causal DAG.
3Notice that conditioning on E=e would instead give p(H|E=e) = ∑_A p(H|E=e,A) p(A|E=e).
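A small numeric illustration of the difference between the adjustment formula p_{→E=e}(H|E=e) = ∑_A p(H|E=e,A) p(A) and plain conditioning p(H|E=e) = ∑_A p(H|E=e,A) p(A|E=e); all probability tables below are made up purely for illustration.

```python
import numpy as np

p_A = np.array([0.3, 0.7])                 # p(A): young / old
p_H_given_EA = np.array([[[0.9, 0.1],      # p(H | E, A): H indexed last,
                          [0.6, 0.4]],     # shape (E, A, H)
                         [[0.7, 0.3],
                          [0.3, 0.7]]])
p_A_given_E = np.array([[0.6, 0.4],        # p(A | E): exercise correlates
                        [0.1, 0.9]])       # with being young

e = 0                                      # intervene: high exercise
causal = (p_H_given_EA[e] * p_A[:, None]).sum(axis=0)                # do-calculus
observational = (p_H_given_EA[e] * p_A_given_E[e][:, None]).sum(axis=0)
print(causal, observational)  # [0.69 0.31] vs [0.78 0.22]: the two differ
```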
Counterfactual Reasoning. Cause-effect reasoning can be used to correctly answer predictive questions of the type “Does exercising improve cardiac health?” by accounting for causal structure and confounding. However, it cannot answer retrospective questions about what would have happened. For example, given an individual i who has died of a heart attack, this method would not be able to answer questions of the type “What would the cardiac health of this individual have been had they done more exercise?”. This type of question requires estimating unobserved sources of noise and then reasoning about the effects of this noise under a graph conditioned on a different intervention.
2.2 META-LEARNING
Meta-learning refers to a broad range of approaches in which aspects of the learning algorithm itself are learned from the data. Many individual components of deep learning algorithms have been successfully meta-learned, including the optimizer (Andrychowicz et al., 2016), initial parameter settings (Finn et al., 2017), a metric space (Vinyals et al., 2016), and use of external memory (Santoro et al., 2016).
Following the approach of (Duan et al., 2016; Wang et al., 2016), we parameterize the entire learning algorithm as a recurrent neural network (RNN), and we train the weights of the RNN with model-free reinforcement learning (RL). The RNN is trained on a broad distribution of problems which each require learning. When trained in this way, the RNN is able to implement a learning algorithm capable of efficiently solving novel learning problems in or near the training distribution.
Learning the weights of the RNN by model-free RL can be thought of as the “outer loop” of learning. The outer loop shapes the weights of the RNN into an “inner loop” learning algorithm. This inner loop algorithm plays out in the activation dynamics of the RNN and can continue learning even when the weights of the network are frozen. The inner loop algorithm can also have very different properties from the outer loop algorithm used to train it. For example, in previous work this approach was used to negotiate the exploration-exploitation tradeoff in multi-armed bandits (Duan et al., 2016) and learn algorithms which dynamically adjust their own learning rates (Wang et al., 2016; 2018). In the present work we explore the possibility of obtaining a causally-aware inner-loop learning algorithm. See the Appendix for a more formal approach to meta-learning.
3 TASK SETUP AND AGENT ARCHITECTURE
In the experiments, in each episode the agent interacted with a different causal DAG G. G was drawn randomly from the space of possible DAGs under the constraints given in the next paragraph. Each episode consisted of T steps, and was divided into two phases: information and quiz. The information phase, corresponding to the first T−1 steps, allowed the agent to collect information by interacting with or passively observing samples from G. The agent could potentially use this information to infer the connectivity and weights of G. The quiz phase, corresponding to the final step T , required the agent to exploit the causal knowledge it collected in the information phase, to select the node with the highest value under a random external intervention.
Causal graphs, observations, and actions. We generated all graphs on N=5 nodes, with edges only in the upper triangular part of the adjacency matrix (this guarantees that all the graphs obtained are DAGs), with edge weights w_ji ∈ {−1, 0, 1} (uniformly sampled), and removed 300 for held-out testing. The remaining 58749 (i.e. 3^{N(N−1)/2} − 300) were used as the training set. Each node's value, X_i ∈ R, was Gaussian-distributed. The values of parentless nodes were drawn from N(µ=0.0, σ=0.1). The conditional probability of a node with parents was p(X_i | pa(X_i)) = N(µ = ∑_j w_ji X_j, σ = 0.1), where pa(X_i) represents the parents of node X_i in G. The values of the 4 observable nodes (the root node was always hidden) were concatenated to create v_t and provided to the agent in its observation vector, o_t = [v_t, m_t], where m_t is a one-hot vector indicating an external intervention during the quiz phase (explained below).4
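For illustration, the graph family and the ancestral sampling described above can be sketched in a few lines of NumPy; this is our own reconstruction, not the authors' code.

```python
import numpy as np

def random_dag(n=5, rng=np.random):
    """Random upper-triangular weight matrix with weights in {-1, 0, 1},
    matching the graph family described above."""
    W = rng.choice([-1, 0, 1], size=(n, n))
    return np.triu(W, k=1)  # edges only above the diagonal, hence a DAG

def sample_nodes(W, sigma=0.1, rng=np.random):
    """Ancestral sampling: X_i ~ N(sum_j W[j, i] * X_j, sigma)."""
    n = W.shape[0]
    X = np.zeros(n)
    for i in range(n):  # upper-triangular W: parents have index < i
        X[i] = rng.normal(W[:, i] @ X, sigma)
    return X
```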
In both phases, on each step t, the agent's action a_t was a discrete choice from the range {1, ..., 2(N−1)}. Action choices in {1, ..., N−1} corresponded to information actions, and choices in {N, ..., 2(N−1)} corresponded to quiz actions.
4While a simple domain provides the most unencumbered test for causal reasoning, we also carried out simulations with more complex causal graphs (graphs with non-linear connections, and larger graphs of size N = 6) and stronger requirements for generalization (holding-out entire equivalence classes of causal graphs from training) to demonstrate the robustness of our approach (see Appendix).
Information phase. In the information phase, an information action a_t caused an intervention on the a_t-th node, setting its value to X_{a_t} = 5. We chose an intervention value outside the likely range of sampled observations to facilitate learning of the causal graph. The observation from the intervened graph, G_{→X_{a_t}=5}, was sampled similarly to G, except that the incoming edges to X_{a_t} were severed and its intervened value was used for conditioning its children's values. The node values in G_{→X_{a_t}=5} were distributed as p_{→X_i=5}(X_{1:N\i} | X_i=5), where i = a_t. If a quiz action was chosen during the information phase, it was ignored, the G values were sampled as if no intervention had been made, and the agent was given a penalty of r_t = −5 in order to encourage it to take quiz actions only during the quiz phase. After the action was selected, an observation was provided to the agent. The default length of this phase was fixed to T = N = 5 since in the noise-free limit a minimum of T−1 = 4 interventions are required in general to resolve the causal structure and score perfectly in the quiz phase.
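Sampling from the intervened graph G_{→X_i=5} can be sketched by extending the sampler above: the intervened node is clamped rather than sampled, which implicitly severs its incoming edges while its children still condition on the clamped value.

```python
import numpy as np

def sample_with_intervention(W, node, value=5.0, sigma=0.1, rng=np.random):
    """Sample from G_{->X_node=value} for an upper-triangular weight matrix W."""
    n = W.shape[0]
    X = np.zeros(n)
    for i in range(n):
        if i == node:
            X[i] = value  # delta distribution: incoming edges are ignored
        else:
            X[i] = rng.normal(W[:, i] @ X, sigma)
    return X
```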
Quiz phase. In the quiz phase, one non-hidden node was selected at random to be intervened on externally, X_j, and its value was set to −5. We chose an intervention value of −5, never previously observed by the agent in that episode, thus disallowing the agent from memorizing the results of interventions in the information phase to perform well in the quiz phase. The agent was informed of this by the observed m_{T−1} (a one-hot vector indicating which node would be intervened on) from the final pre-quiz time-step, T−1. Note that m_t was set to a zero vector for steps t < T−1. A quiz action, a_T, chosen by the agent indicated the node whose value would be given to the agent as a reward. In other words, the agent would receive reward r_T = X_{a_T−(N−1)}. Conversely, if an information action was chosen during the quiz phase, the node values were not sampled and the agent was simply given a penalty of r_T = −5.

Active vs passive agents. Our agents had to perform two distinct tasks during the information phase: (a) actively choose which nodes to set values on, and (b) infer the causal DAG from the resulting observations. We refer to this setup as the “active” condition. To control for (a), we created the “passive” condition, where the agent's information-phase actions are not learned. To provide a benchmark for how well the active agent can perform task (a), we fixed the passive agent's intervention policy to be an exhaustive sweep through all observable nodes. This is close to optimal for this domain; in fact, it is the optimal policy for noise-free conditional node values. In the Appendix, we also compare the active agent's performance to a baseline agent whose policy is to intervene randomly on the observable nodes in the information phase.
Two kinds of learning The “inner loop” of learning (see Section 2.2) occurs within each episode where the agent is learning from the evidence it gathers during the information phase in order to perform well in the quiz phase. The same agent then enters a new episode, where it has to repeat the task on a different DAG. Test performance is reported on DAGs that the agent has never previously seen, after all the weights of the RNN have been fixed. Hence, the only transfer from training to test (or the “outer loop” of learning) is the ability to discover causal dependencies based on observations in the information phase, and to perform causal inference in the quiz phase.
Agent Architecture and Training
We used a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) (with 96 hidden units) that, at each time-step t, receives a concatenated vector containing [ot,at−1,rt−1] as input, where ot is the observation5, at−1 is the previous action (as a one-hot vector) and rt−1 the reward (as a single real-value)6. The outputs, calculated as linear projections of the LSTM’s hidden state, are a set of policy logits (with dimensionality equal to the number of available actions), plus a scalar baseline. The policy logits are transformed by a softmax function, and then sampled to give a selected action.
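A minimal PyTorch sketch of such an agent follows; the interface and the single-step unroll are our own simplifications of the architecture described above.

```python
import torch
import torch.nn as nn

class A2CAgent(nn.Module):
    """Sketch: an LSTM over [o_t, a_{t-1}, r_{t-1}] with linear heads
    for policy logits and a scalar baseline."""
    def __init__(self, obs_dim, n_actions, hidden=96):
        super().__init__()
        self.core = nn.LSTM(obs_dim + n_actions + 1, hidden)
        self.policy = nn.Linear(hidden, n_actions)
        self.baseline = nn.Linear(hidden, 1)

    def step(self, obs, prev_action_onehot, prev_reward, state):
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        h, state = self.core(x.view(1, 1, -1), state)  # (seq=1, batch=1, hidden)
        logits = self.policy(h.squeeze())              # policy logits
        action = torch.distributions.Categorical(logits=logits).sample()
        return action, self.baseline(h.squeeze()), state
```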
Learning was by asynchronous advantage actor-critic (Mnih et al., 2016). In this framework, the loss function consists of three terms: the policy gradient, the baseline cost and an entropy cost. The baseline cost was weighted by 0.05 relative to the policy gradient cost. The weighting of the entropy cost was annealed over the course of training from 0.05 to 0. Optimization was done by RMSProp with ε = 10^−5, momentum = 0.9 and decay = 0.95. The learning rate was annealed from 3 × 10^−6 to 0. For all experiments, after training, the agent was tested with the learning rate set to zero on a held-out test set.
5’Observation’ ot refers to the reinforcement learning term, i.e. the input from the environment to the agent. This is distinct from observations in the causal sense (referred to as observational data) i.e. samples from a casual structure where there is no information about interventions that have been carried out.
6These are both set to zero for the first step in an episode.
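A sketch of the per-step loss terms described above (the asynchronous, multi-worker machinery of A3C is omitted); the 0.05 baseline weight and the RMSProp hyperparameters are taken from the text, while mapping "decay" to RMSProp's smoothing constant alpha is our assumption.

```python
import torch

def a2c_loss(logits, baseline, actions, returns, entropy_weight):
    # Loss = policy gradient + 0.05 * baseline cost - entropy bonus
    # (entropy_weight is annealed from 0.05 to 0 over training).
    dist = torch.distributions.Categorical(logits=logits)
    advantage = (returns - baseline).detach()
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    baseline_loss = 0.05 * (returns - baseline).pow(2).mean()
    entropy_loss = -entropy_weight * dist.entropy().mean()
    return policy_loss + baseline_loss + entropy_loss

# optimizer = torch.optim.RMSprop(agent.parameters(), lr=3e-6,
#                                 alpha=0.95, eps=1e-5, momentum=0.9)
```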
4 EXPERIMENTS
Our three experiments (observational, interventional, and counterfactual) differed in the properties of the vt that was observed by the agent during the information phase, and thereby limited the extent of causal reasoning possible within each data setting. Our measure of performance is the reward earned in the quiz phase for held-out DAGs. Choosing a random node in the quiz phase results in a reward of −5/4 = −1.25, since one node (the externally intervened node) always has value −5 and the others have on average 0 value. By learning to simply avoid the externally intervened node, the agent can earn on average 0 reward. Consistently picking the node with the highest value in the quiz phase requires the agent to perform causal reasoning. For each agent, we take the average reward earned across 1200 episodes (300 held-out test DAGs, with 4 possible external interventions). We train 12 copies of each agent and report the average reward earned by these, with error bars showing 95% confidence intervals.
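Spelled out, the random-choice benchmark averages the reward over the N−1 = 4 selectable nodes, one of which is the externally intervened node:

```latex
\mathbb{E}[r_{\mathrm{random}}]
  = \tfrac{1}{4}(-5) + \tfrac{3}{4}\cdot 0
  = -\tfrac{5}{4}
  = -1.25
```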
4.1 EXPERIMENT 1: OBSERVATIONAL SETTING
In Experiment 1, the agent could neither intervene to set the value of variables in the environment, nor observe any external interventions. In other words, it only received observations from G, not G→Xj (where Xj is a node that has been intervened on). This limits the extent of causal inference possible. In this experiment, we tested six agents, four of which were learned: “Observational”, “Long Observational”, “Active Conditional”, and “Passive Conditional”; and two of which were not: the “Observational MAP Baseline” and the “Optimal Associative Baseline”. We also ran two other standard RL baselines—see the Appendix for details.
Observational Agents: In the information phase, the actions of the agent were ignored7, and the observational agent always received the values of the observable nodes as sampled from the joint distribution associated with G. In addition to the default T=5 episode length, we also trained this agent with 4× longer episode length (Long Observational Agent), to measure performance increase with more observational data.
Conditional Agents: The information phase actions corresponded to observing a world in which the selected node Xj is equal to Xj = 5, and the remaining nodes are sampled from the conditional distribution p(X1:N\j|Xj = 5), where X1:N\j indicates the set of all nodes except Xj. This differs from intervening on the variable Xj by setting it to the value Xj = 5, since here we take a conditional sample from G rather than from G→Xj=5 (i.e. from p→Xj=5(X1:N\j|Xj = 5)), and inference about the corresponding node’s parents is possible. Therefore, this agent still has access to only observational data, as with the observational agents. However, on average it receives more diagnostic information about the relation between the random variables in G, since it can observe samples where a node takes a value far outside the likely range of sampled observations. We run active and passive versions of this agent as described in Section 3.
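The distinction between conditioning and intervening can be made concrete in the linear-Gaussian case: conditioning uses the joint covariance of G, while intervening propagates means through the mutilated graph G→Xj=v. A sketch, assuming an upper-triangular weight matrix W in topological order (the helper names are ours):

```python
import numpy as np

def joint_cov(W, sigma=0.1):
    # X = W^T X + eps  =>  X = (I - W^T)^{-1} eps, so the joint is a
    # zero-mean Gaussian with the covariance below.
    n = W.shape[0]
    A = np.linalg.inv(np.eye(n) - W.T)
    return sigma**2 * A @ A.T

def conditional_mean(W, j, value, sigma=0.1):
    # Observational: E[X_rest | X_j = value] via Gaussian conditioning.
    S = joint_cov(W, sigma)
    rest = [i for i in range(W.shape[0]) if i != j]
    return S[rest, j] / S[j, j] * value

def interventional_mean(W, j, value):
    # Interventional: clamp X_j, sever its incoming edges, propagate means.
    n = W.shape[0]
    mu = np.zeros(n)
    for i in range(n):
        mu[i] = value if i == j else W[:, i] @ mu
    return np.delete(mu, j)
```

For a root node j the two means agree; when j has parents, conditioning also propagates information backwards to them, which is exactly the extra (non-causal) signal discussed above.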
Optimal Associative Baseline: This baseline receives the true joint distribution p(X1:N) implied by the DAG in that episode, therefore it has full knowledge of the correlation structure of the environment8. It can therefore do exact associative reasoning of the form p(Xj|Xi = x), but cannot do any cause-effect reasoning of the form p→Xi=x(Xj|Xi = x). In the quiz phase, this baseline chooses the node that has the maximum value according to the true p(Xj|Xi = x) in that episode, where Xi is the node externally intervened upon, and x = −5.

Observational MAP Baseline: This baseline follows the traditional method of separating causal induction and causal inference. We first carry out exact maximum a posteriori (MAP) inference over the space of DAGs in each episode (i.e. causal induction) by selecting the DAG (GMAP), of the 59049 unique possibilities, that maximizes the likelihood of the data v1:T observed by the Observational Agent in that episode. This is equivalent to maximizing the posterior probability since the prior over graphs is uniform.
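A brute-force sketch of this MAP computation, enumerating all 3^10 = 59049 weight assignments; we score the marginal likelihood of an arbitrary observed subset of nodes, since in the paper's setup the root node is hidden. The helper names are ours.

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(W, samples, observed, sigma=0.1):
    # Marginal log-likelihood of the observed nodes under the linear-Gaussian
    # DAG W; marginalizing a Gaussian just drops rows/columns of the covariance.
    n = W.shape[0]
    A = np.linalg.inv(np.eye(n) - W.T)
    cov = sigma**2 * A @ A.T
    cov_obs = cov[np.ix_(observed, observed)]
    mvn = multivariate_normal(np.zeros(len(observed)), cov_obs)
    return mvn.logpdf(samples).sum()

def map_dag(samples, observed, n=5):
    # Uniform prior over graphs, so MAP inference = maximum likelihood.
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    best_W, best_ll = None, -np.inf
    for ws in itertools.product([-1, 0, 1], repeat=len(edges)):
        W = np.zeros((n, n))
        for (i, j), w in zip(edges, ws):
            W[i, j] = w
        ll = log_likelihood(W, samples, observed)
        if ll > best_ll:
            best_W, best_ll = W, ll
    return best_W
```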
RESULTS
We focus on three key questions in this experiment: (i) Can our agents learn to do associative reasoning with observational data?, (ii) Can they learn to do cause-effect reasoning from observational data?, and (iii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
7These agents also did not receive the out-of-phase action penalties during the information phase since their actions are totally ignored.
8Notice that the agent does not know the graphical structure, i.e. it does not know which nodes are parents of which other nodes.
For (i), we see that the Observational Agents achieve reward above the random baseline (see the Appendix), and that more observations (Long Observational Agent) lead to better performance (Fig. 2a), indicating that the agent is indeed learning the statistical dependencies between the nodes. We see that the performance of the Passive-Conditional Agent is better than either of the Observational Agents, since the data it observes is very informative about the statistical dependencies in the environment. Finally, we see that the Passive-Conditional Agent’s performance is comparable to (in fact surpasses, as discussed below) the performance of the Optimal Associative Baseline, indicating that it is able to do perfect associative inference.
[Figure 1: Active and Passive Conditional Agents]
For (ii), we see the crucial result that the Passive-Conditional Agent’s performance is significantly above the Optimal Associative Baseline, i.e. it performs better than what is possible using only correlations. We compare their performances, split by whether or not the node that was intervened on in the quiz phase of the episode has a parent (Fig. 2b). If the intervened node Xj has no parents, then G = G→Xj, and there is no advantage to being able to do cause-effect reasoning. We see indeed that the Passive-Conditional agent performs better than the Optimal Associative Baseline only when the intervened node has parents (denoted by hatched bars in Fig. 2b), indicating that this agent is able to carry out some cause-effect reasoning, despite access to only observational data – i.e. it learns some form of do-calculus. We show the quiz phase for an example test DAG in Fig. 2c, seeing that the Optimal Associative Baseline chooses according to the node values predicted by G whereas the Passive-Conditional Agent chooses according to the node values predicted by G→Xj. For (iii), we see (Fig. 2) that the Active-Conditional Agent’s performance is only marginally below the performance of the Passive-Conditional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.2 EXPERIMENT 2: INTERVENTIONAL SETTING
In Experiment 2, the agent receives interventional data in the information phase – it can choose to intervene on any observable node, Xj, and observe a sample from the resulting graph G→Xj. As discussed in Section 2.1, access to intervention data permits cause-effect reasoning even in the presence of unobserved confounders, a feat which is in general impossible with access only to observational data. In this experiment, we test four new agents, two of which were learned: “Active Interventional” and “Passive Interventional”; and two of which were not: the “Interventional MAP Baseline” and the “Optimal Cause-Effect Baseline”.
Interventional Agents: The information phase actions correspond to performing an intervention on the selected node Xj and sampling from G→Xj (see Section 3 for details). We run active and passive versions of this agent as described in Section 3.
Interventional MAP Baseline: This baseline infers a DAG by maximizing the likelihood of the data observed by the Passive Interventional Agent in that episode. In the quiz phase, we predict the values of each node according to GMAP→Xj, where Xj is the node externally intervened upon (i.e. causal inference), and choose the node with the highest value.
Optimal Cause-Effect Baseline: This baseline receives the true DAG, G. In the quiz phase, it chooses the node that has the maximum value according to G→Xj, where Xj is the node externally intervened upon.
RESULTS
[Figure 3: Active and Passive Interventional Agents]
We focus on three key questions in this experiment: (i) Can our agents learn to do cause-effect reasoning from interventional data?, (ii) Can they use interventions to go beyond what is possible with observational data alone?, and (iii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes? For (i), we see in Fig. 4a that the Passive-Interventional Agent’s performance is comparable to the Optimal Cause-Effect Baseline, indicating that it is able to do close to perfect cause-effect reasoning in this domain.
For (ii) we see in Fig. 4a the crucial result that the Passive-Interventional Agent’s performance is significantly better than the Passive-Conditional Agent. We compare the performances of these two agents, split by whether the node that was intervened on in the quiz phase of the episode had unobserved confounders with other variables in the graph (Fig. 4b). In confounded cases, as described in Section 2.1, cause-effect reasoning is impossible with only observational data. We see that the performance of the Passive-Interventional Agent does not vary significantly with confoundedness, whereas the performance of the Passive-Conditional Agent is significantly lower in the confounded cases. This indicates that the improvement in the performance of the agent that has access to interventional data (as compared to the agents that had access to only observational data) is largely driven by its ability to also do cause-effect reasoning in the presence of confounders. This is highlighted by Fig. 4c, which shows the quiz phase for an example DAG, where the Passive-Conditional agent is unable to resolve the confounder, but the Passive-Interventional agent can.
For (iii), we see in Fig. 3 that the Active-Interventional Agent’s performance is only marginally below the performance of the near optimal Passive-Interventional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.3 EXPERIMENT 3: COUNTERFACTUAL SETTING
In Experiment 3, the agent was again allowed to make interventions as in Experiment 2, but in this case the quiz phase task entailed answering a counterfactual question. We explain here what a counterfactual question in this domain looks like. Consider the conditional distribution p(Xi|pa(Xi)) = N(Σj wji Xj, 0.1) described in Section 3, rewritten as Xi = Σj wji Xj + ε, where ε is distributed as N(0.0, 0.1) and represents the specific randomness introduced when taking one sample from the DAG. After observing the nodes X1:N in the DAG in one sample, we can infer this specific randomness εi for each node Xi (i.e. abduction as described in the Appendix) and answer counterfactual questions like “What would the values of the nodes be, had Xj in that particular sample taken on a different value than what we observed?”, for any of the nodes Xj. We test 2 new learned agents: “Active Counterfactual” and “Passive Counterfactual”.
Counterfactual Agents: This agent is exactly analogous to the Interventional agent, with the addition that the exogenous noise in the last information phase step t = T−1 (where, say, Xp = +5) is stored and the same noise is used in the quiz phase step t = T (where, say, Xf = −5). While the question our agents have had to answer correctly so far in order to maximize their reward in the quiz phase was “Which of the nodes X1:N\j will have the highest value when Xf is set to −5?”, in this setting, we ask “Which of the nodes X1:N\j would have had the highest value in the last step of the information phase, if instead of having Xp = +5, we had Xf = −5?”. We run active and passive versions of this agent as described in Section 3.

Optimal Counterfactual Baseline: This baseline receives the true DAG and does exact abduction based on the exogenous noise observed in the penultimate step of the information phase, and combines this correctly with the appropriate interventional inference on the true DAG in the quiz phase.
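In the linear-Gaussian setting, reusing the noise amounts to abduction followed by re-simulation under the new intervention; a sketch (the names are ours; in the actual environment the noise of a node that was itself intervened on cannot be abduced from data and is simply stored by the simulator):

```python
import numpy as np

def abduce_noise(W, x):
    # eps_i = x_i - sum_j w_ji x_j for every non-intervened node.
    return x - W.T @ x

def counterfactual(W, x_observed, j, new_value):
    # Replay the same exogenous noise under a different intervention on X_j.
    eps = abduce_noise(W, x_observed)
    n = W.shape[0]
    x = np.zeros(n)
    for i in range(n):
        x[i] = new_value if i == j else W[:, i] @ x + eps[i]
    return x
```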
RESULTS
We focus on two key questions in this experiment: (i) Can our agents learn to do counterfactual reasoning?, (ii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
For (i), we see that the Passive-Counterfactual Agent achieves higher reward than the Passive-Interventional Agent and the Optimal Cause-Effect Baseline. To evaluate whether this difference results from the agent’s use of abduction (see the Appendix for details), we split the test set into two groups, depending on whether or not the decision for which node will have the highest value in the quiz phase is affected by exogenous noise, i.e. whether or not the node with the maximum value in the quiz phase changes if the noise is resampled. We find no significant difference between the Passive-Counterfactual and Passive-Interventional Agents in the cases where the maximum values are distinct; however, the Passive-Counterfactual Agent significantly outperforms the Passive-Interventional Agent in cases where there are degenerate maximum values.

[Figure 6: Active and Passive Counterfactual Agents]
For (ii), we see in Fig. 6 that the Active-Counterfactual Agent’s performance is only marginally below the performance of the Passive-Counterfactual agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
5 SUMMARY OF RESULTS
We introduced and tested a framework for learning causal reasoning in various data settings—observational, interventional, and counterfactual—using deep meta-RL. Crucially, our approach did not require explicit encoding of formal principles of causal inference. Rather, by optimizing an agent to perform a task that depended on causal structure, the agent learned implicit strategies to use the available data for causal reasoning, including drawing inferences from passive observation, actively intervening, and making counterfactual predictions. Below, we summarize the key results from each of the three experiments.
In Section 4.1 and Fig. 2, we show that the agent learns to perform do-calculus. In Fig. 2(a) we see that, compared to the highest possible reward achievable without causal knowledge, the trained agent received more reward. This observation is corroborated by Fig. 2(b) which shows that performance increased selectively in cases where do-calculus made a prediction distinguishable from the predictions based on correlations. These are situations where the externally intervened node had a parent – meaning that the intervention resulted in a different graph.
In Section 4.2 and Fig. 4, we show that the agent learns to resolve unobserved confounders using interventions (a feat impossible with only observational data). In Fig. 4(a) we see that the agent with access to interventional data performs better than an agent with access to only observational data. Fig. 4(b) shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph. In this section we also compare the agent’s performance to a MAP estimate of the causal structure and find that the agent’s performance matches it, indicating that the agent is indeed doing close to optimal causal inference.
In Section 4.3 and Fig. 5, we show that the agent learns to use counterfactuals. In Fig. 5(a) we see that the agent with additional access to the specific randomness in the test phase performs better than an agent with access to only interventional data. In Fig. 5(b), we find that the increased performance is observed only in cases where the maximum mean value in the graph is degenerate, and the optimal choice is affected by the exogenous noise – i.e. where multiple nodes have the same value on average and the specific randomness can be used to distinguish their actual values in that specific case.
6 DISCUSSION AND FUTURE WORK
This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning. This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings. Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. estimating causal effects and answering counterfactual questions), and despite advances in both (Ortega & Stocker, 2015; Bramley et al., 2017; Parida et al., 2018; Sen et al., 2017; Forney et al., 2017; Lattimore et al., 2016), inducing models often requires assumptions that are difficult to fit to complex real-world conditions. By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required. Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task—i.e. to perform active learning. In our experimental domain, our agents’ active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations.
Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive. To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks. Though the results here are a first step in this direction which use relatively standard deep RL components, our approach will likely benefit from more advanced architectures (e.g. Espeholt et al., 2018; Hessel et al., 2018; Hester et al., 2017) that allow longer, more complex episodes, as well as models which are more explicitly compositional (e.g. Battaglia et al., 2018; Andreas et al., 2016) or have richer semantics (e.g. Ganin et al., 2018), that more explicitly leverage symmetries like equivalence classes in the environment.
A ADDITIONAL BASELINES
[Figure 7: Reward distribution (percentage of DAGs vs. reward earned) for the Q-total, Q-episode, and Optimal agents.]
We can also compare the performance of these agents to two standard model-free RL baselines. The Q-total agent learns a Q-value for each action across all steps for all the episodes. The Q-episode agent learns a Q-value for each action conditioned on the input at each time step [ot, at−1, rt−1], but with no LSTM memory to store previous actions and observations. Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward. The Q-episode agent essentially makes sure not to choose the arm that is indicated by mt to be the external intervention (which is assured to be equal to −5), and chooses randomly otherwise, giving an average reward of 0.
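For concreteness, the Q-episode baseline can be read as a memoryless feedforward Q-network over the current input; a sketch (the class name and layer sizes are our assumptions):

```python
import torch.nn as nn

class QEpisodeBaseline(nn.Module):
    # Q-values conditioned only on the current [o_t, a_{t-1}, r_{t-1}] input,
    # with no recurrent memory of earlier steps in the episode.
    def __init__(self, input_dim, num_actions, hidden=96):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions))

    def forward(self, x):
        return self.net(x)
```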
B FORMAL DESCRIPTION OF META-LEARNING
Consider a distribution D over Markov Decision Processes (MDPs). We train an agent with memory (in our case an RNN-based agent) on this distribution. In each episode, we sample a task m ∼ D. At each step t within an episode, the agent sees an observation ot, executes an action at, and receives a reward rt. Both at−1 and rt−1 are given as additional inputs to the network. Thus, via the recurrence of the network, each action is a function of the entire trajectory Ht = {o0, a0, r0, ..., ot−1, at−1, rt−1, ot} of the episode. Because this function is parameterized by the neural network, its complexity is limited only by the size of the network.
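In symbols, the outer loop optimizes the RNN weights θ for expected return across the task distribution, while the history-dependence of the policy is what implements the inner loop:

```latex
\theta^{*} = \arg\max_{\theta}\,
  \mathbb{E}_{m\sim\mathcal{D}}\,
  \mathbb{E}_{\pi_{\theta}}\!\Big[\textstyle\sum_{t} r_{t}\,\Big|\,m\Big],
\qquad a_{t}\sim\pi_{\theta}(\,\cdot\mid H_{t}).
```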
C ABDUCTION-ACTION-PREDICTION METHOD FOR COUNTERFACTUAL REASONING
Pearl et al. (2016)’s “abduction-action-prediction” method prescribes one method for answering counterfactual queries, by estimating the specific unobserved makeup of individual i and by transferring it to the counterfactual world. Assume, for example, the following model for G of Section 2.1: E = wAE A + η, H = wAH A + wEH E + ε, where the weights wij represent the known causal effects in G, and ε and η are terms of (e.g.) Gaussian noise that represent the unobserved randomness in the makeup of each individual9. Suppose that for individual i we observe: A = ai, E = ei, H = hi. We can answer the counterfactual question of “What if individual i had done more exercise, i.e. E = e′, instead?” by: a) Abduction: estimate the individual’s specific makeup with εi = hi − wAH ai − wEH ei, b) Action: set E to more exercise, e′, c) Prediction: predict a new value for cardiac health as h′ = wAH ai + wEH e′ + εi.
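The three steps can be run directly as code; a worked instance of the example above, with illustrative (not prescribed) weights and observations:

```python
# Known causal effects in G (illustrative values).
w_AH, w_EH = 0.5, 2.0

# Observed individual i.
a_i, e_i, h_i = 1.2, 0.7, 3.1

eps_i = h_i - w_AH * a_i - w_EH * e_i       # (a) abduction: infer the noise
e_new = 2.0                                 # (b) action: set E to more exercise
h_new = w_AH * a_i + w_EH * e_new + eps_i   # (c) prediction: counterfactual H
print(h_new)
```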
D EXPERIMENT 4: NON-LINEAR CAUSAL GRAPHS
[Figure 8: Experiment 4 results (reward for the Observational, Long Observational, Passive-Interventional, MAP, and Optimal Cause-Effect agents).]
The purview of the previous experiments was to show a proof of concept on a simple tractable system, demonstrating that causal induction and inference can be learned and implemented via a meta-learned agent. In this experiment, we generalize some of the results to nonlinear, non-Gaussian causal graphs, which are more typical of real-world causal graphs, and demonstrate that our results hold without loss of generality on such systems.
Here we investigate causal DAGs with a quadratic dependence on the parents by changing the conditional distribution to p(Xi|pa(Xi)) = N((1/Ni) Σj wji (Xj + Xj²), σ).
9These are zero in expectation, so without access to their value for an individual we simply use G: E = wAE A, H = wAH A + wEH E to make causal predictions.
Here, although each node is normally distributed given its parents, the joint distribution is not multivariate Gaussian due to the non-linearity in how the means are determined. We find that the Long-Observational agent achieves more reward than the Observational agent, indicating that the agent is in fact learning the statistical dependencies between the nodes within an episode. We also find that the Active-Interventional agent is not far behind the performance of the MAP baseline, and achieves reward well above the Long-Observational agent10. The fact that the MAP baseline gets so close to the Optimal Cause-Effect baseline indicates that the Active agent is choosing close to optimal actions.
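A sketch of the quadratic sampler, reading Ni as the number of parents of node i (our assumption about the normalization) and again assuming a topologically ordered weight matrix:

```python
import numpy as np

def quadratic_ancestral_sample(W, sigma=0.1, rng=np.random):
    # Each node's conditional mean is (1/N_i) * sum_j w_ji * (x_j + x_j^2)
    # over its parents; parentless nodes have mean 0.
    n = W.shape[0]
    x = np.zeros(n)
    for i in range(n):
        parents = np.nonzero(W[:, i])[0]
        mean = 0.0
        if parents.size:
            mean = np.sum(W[parents, i] * (x[parents] + x[parents]**2)) / parents.size
        x[i] = rng.normal(mean, sigma)
    return x
```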
E EXPERIMENT 5: LARGER CAUSAL GRAPHS WITH GENERALIZATION TO NEW EQUIVALENCE CLASSES
[Figure 9: Experiment 5 results. (a) Average reward for the Obs., Long-Obs., Passive-Cond., Passive-Int., and Passive-CF agents. (b) Average reward for the Random-Int., Active-Int., Passive-Int., and Optimal C-E agents.]
In this experiment, the entire equivalence class of each test graph was held out from the training set11. Performance on the test set therefore indicates generalization of the inference procedures learned to previously unseen equivalence classes of causal DAGs. For these experiments, we used graphs with N = 6 nodes, because 5-node graphs have too few equivalence classes to partition in this way. All other details were the same as in the main paper.
We see in Fig. 9a that the agents learn to generalize well to these held-out examples, and we find the same pattern of behavior noted in the main text, where the rewards earned are ordered such that Observational agent < Passive-Conditional agent < Passive-Interventional agent < Passive-Counterfactual agent. We see additionally in Fig. 9b that the Active-Interventional agent performs on par with the Passive-Interventional agent (which is allowed to see the results of interventions on all nodes) and significantly better than an additional baseline we use here, the Random-Interventional agent, whose information phase policy is to intervene on nodes at random, indicating that the intervention policy learned by the Active agent is good.
F GRAPHICAL MODELS AND BELIEF NETWORKS
Graphical models (Pearl, 1988; Bishop, 2006; Koller & Friedman, 2009; Barber, 2012; Murphy, 2012) are a marriage between graph and probability theory that allows one to graphically represent and assess statistical dependence. In the following sections, we give some basic definitions and describe a method (d-separation) for graphically assessing statistical independence in belief networks.
BASIC DEFINITIONS
[Figure 10: (a) a directed acyclic graph over nodes X1, X2, X3, X4; (b) the same graph with an additional link from X4 to X1, which makes it cyclic.]
10The conditional distribution p(X1:N\j|Xj = 5), and therefore the Conditional agents, were non-trivial to calculate for the quadratic case.
11The hidden node was guaranteed to be a root node by rejecting all DAGs where the hidden node has parents.
A directed acyclic graph (DAG) is a directed graph with no directed paths starting and ending at the same node. For example, the directed graph in Fig. 10(a) is acyclic. The addition of a link from X4 to X1 gives rise to a cyclic graph (Fig. 10(b)).
A node Xi with a directed link to Xj is called a parent of Xj. In this case, Xj is called a child of Xi. A node is a collider on a specified path if it has (at least) two parents on that path. Notice that a node can be a collider on a path and a non-collider on another path. For example, in Fig. 10(a) X3 is a collider on the path X1 → X3 ← X2 and a non-collider on the path X2 → X3 → X4. A node Xi is an ancestor of a node Xj if there exists a directed path from Xi to Xj. In this case, Xj is a descendant of Xi. A graphical model is a graph in which nodes represent random variables and links express statistical relationships between the variables.
A belief network is a directed acyclic graphical model in which each node Xi is associated with the conditional distribution p(Xi|pa(Xi)), where pa(Xi) indicates the parents of Xi. The joint distribution of all nodes in the graph, p(X1:N), is given by the product of all conditional distributions, i.e. p(X1:N) = ∏_{i=1}^{N} p(Xi|pa(Xi)).
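For the linear-Gaussian conditionals used in this paper, the factorization gives a one-line joint log-density; a sketch (the helper name is ours):

```python
import numpy as np

def joint_logpdf(W, x, sigma=0.1):
    # log p(x_1:N) = sum_i log N(x_i; sum_j w_ji x_j, sigma^2).
    mu = W.T @ x
    return float(np.sum(-0.5 * ((x - mu) / sigma)**2
                        - np.log(sigma * np.sqrt(2.0 * np.pi))))
```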
ASSESSING STATISTICAL INDEPENDENCE IN BELIEF NETWORKS
Given the sets of random variables X, Y and Z, X and Y are statistically independent given Z (X ⊥ Y | Z) if all paths from any element of X to any element of Y are closed (or blocked). A path is closed if at least one of the following conditions is satisfied:
(Ia) There is a non-collider on the path which belongs to the conditioning set Z.
(Ib) There is a collider on the path such that neither the collider nor any of its descendants belong to the conditioning set Z.
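Conditions (Ia) and (Ib) translate directly into a path-closedness check; a sketch that assumes the set of colliders on the path and each node's descendant set have been precomputed (all names are ours):

```python
def path_closed(path, colliders, descendants, Z):
    # `path` is a node sequence; `colliders` the set of colliders on it;
    # `descendants[c]` the descendant set of c; `Z` the conditioning set.
    for node in path[1:-1]:
        if node not in colliders and node in Z:
            return True                               # condition (Ia)
        if node in colliders and not (({node} | descendants[node]) & Z):
            return True                               # condition (Ib)
    return False
```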
1. How does the paper contribute to the broader project of showing that causal reasoning can emerge from decision-making tasks and that modern RL agents can become attuned to the causal structure of the world without being explicitly trained to answer causal questions?
2. What are the specific issues that the reviewer was confused about, and how do they require further substantive work?
3. How does the approach taken by the authors relate to the existing literature on causal induction, and what are the implications of this relationship for the paper's claims and contributions?
4. What additional clarification or information would help the reader better understand the task facing the agents and the nature of the agents' success?
5. How might the choice of baselines and comparison agents affect the interpretation of the results, and what would be a more appropriate way to compare the performance of the agents of interest?

Review
Note: This review is coming in a bit late, already after one round of responses. So I write this with the benefit of having read the helpful previous exchange.
I am generally positive about the paper and the broader project. The idea of showing that causal reasoning naturally emerges from certain decision-making tasks and that modern (meta-learning) RL agents can become attuned to causal structure of the world without being explicitly trained to answer causal questions is an attractive one. I also find much about the specific paper elegant and creative. Considering three grades of causal sophistication (from conditional probability to cause-effect reasoning to counterfactual prediction) seems like the right thing to do in this setting.
Despite these positive qualities, I was confused by many of the same issues as other reviewers, and I think the paper does need some more serious revisions. Some of these are simple matters of clarification as the authors acknowledge; others, however, require further substantive work. It sounds like the authors are committed to doing some of this work, and I would like to add one more vote of encouragement. While the paper may be slightly too preliminary for acceptance at this time, I am optimistic that a future version of this paper will be a wonderful contribution.
(*) The authors say at several points that the approach “did not require explicit knowledge of formal principles of causal inference.” But there seem to be a whole lot of causal assumptions that are critically implicit in the setup. It would be good to understand this better. In particular, the different agents are hardwired to have access to different kinds of information. The interventional agent is provided with data that the conditional agent simply doesn’t get to see. Likewise, the counterfactual agent is provided with information about noise. Any sufficiently powerful learning system will realize that (and even how) the given information is relevant to the decision-making task at hand. A lot of the work (all of the work?) seems to be done by supplying the information that we know would be relevant.
(*) Previous reviewers have already made this point - I think it’s crucial - and it’s also related to the previous concern: It is not clear how difficult the tasks facing these agents actually are, nor is it clear that solving them genuinely requires causal understanding. What seems to be shown is that, by supplying information that’s critical for the task at hand, a sufficiently powerful learning agent is able to harness that information successfully. But how difficult is this task, and why does it require causal understanding? I do think that some of the work the authors did is quite helpful, e.g., dividing the test set between the easy and hard cases (orphan / parented, unconfounded / confounded). But I do not feel I have an adequate understanding of the task as seen, so to say, from the perspective of the agent. Specifically:
(*) I completely second the worry one of the reviewers raised about equivalence classes and symmetries. The test set should be chosen more deliberately - not randomly - to rule out deflationary explanations of the agents’ purported success. I’m happy to hear that the authors will be looking more into this and I would be interested to know how the results look.
(*) The “baselines” in this paper are often not baselines at all, but rather various optimal approaches to alternative formulations of the task. I feel we need more actual baselines in order to see how well the agents of interest are doing. I don’t know how to interpret phrases like “close to perfect” without a better understanding of how things look below perfection.
As a concrete case of this, just like the other reviewers, I was initially quite confused about the passive agents and why they did better than the active agents. These are passive agents who actually get to make multiple observations, rather than baseline passive agents who choose interventions in a suboptimal way. I think it would be helpful to compare against an agent who makes the same number of observations but chooses them in a suboptimal (e.g., random) way.
(*) In relation to the existing literature on causal induction, it’s telling that implementing a perfect MAP agent in this setting is even possible. This makes me worry further about how easy these tasks are (again, provided one has all of the relevant information about the task). But it also shows that comparison with existing causal inference methods is simply inappropriate here, since those methods are designed for realistic settings where MAP inference is far from possible. I think that’s fine, but I also think it should be clarified in the paper. The point is not (at least yet) that these methods are competitors to causal inference methods that do “require explicit knowledge of formal principles of causal inference,” but rather that we have a proof-of-concept that some elementary causal understanding may emerge from typical RL tasks when agents are faced with the right kinds of tasks and given access to the right kinds of data. That’s an interesting claim on its own. The penultimate paragraph in the paper (among other passages) seems to me quite misleading on this point.
(*) One very minor question I have is why actions were softmax selected even in the quiz phase. What were the softmax parameters? And would some of the agents not perform a bit better if they maximized? |
ICLR | Title
Causal Reasoning from Meta-reinforcement learning
Abstract
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions. Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
N/A
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions. Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
1 INTRODUCTION
Many machine learning algorithms are rooted in discovering patterns of correlation in data. While this has been sufficient to excel in several areas (Krizhevsky et al., 2012; Cho et al., 2014), sometimes the problems we are interested in are fundamentally causal. Answering questions such as “Does smoking cause cancer?” or “Was this person denied a job due to racial discrimination?” or “Did this marketing campaign cause sales to go up?” all require an ability to reason about causes and effects and cannot be achieved by purely associative inference. Even for problems that are not obviously causal, like image classification, it has been suggested that some failure modes emerge from lack of causal understanding. Causal reasoning may be an essential component of natural intelligence and is present in human babies, rats and even birds (Leslie, 1982; Gopnik et al., 2001; 2004; Blaisdell et al., 2006; Lagnado et al., 2013). There is a rich literature on formal approaches for defining and performing causal reasoning (Pearl, 2000; Spirtes et al., 2000; Dawid, 2007; Pearl et al., 2016).
Here we investigate whether procedures for learning and using causal structure can be produced by meta-learning. The approach of meta-learning is to learn the learning (or inference) procedure itself, directly from data. We adopt the specific method of Duan et al. (2016) and Wang et al. (2016), training a recurrent neural network (RNN) through model-free reinforcement learning. We train on a large family of tasks, each underpinned by a different causal structure.
The use of meta-learning avoids the need to manually implement explicit causal reasoning methods in an algorithm, offers advantages of scalability by amortizing computations, and allows automatic incorporation of complex prior knowledge (Andrychowicz et al., 2016; Wang et al., 2016; Finn et al., 2017). Additionally, by learning end-to-end, the algorithm has the potential to find the internal representations of causal structure best suited for the types of causal inference required.
2 PROBLEM SPECIFICATION AND APPROACH
This work probed how an agent could learn to perform causal reasoning in three distinct settings – observational, interventional, and counterfactual – corresponding to different types of data available to the agent during the first phase of an episode.
In the observational setting (Experiment 1), the agent could only obtain passive observations from the environment. This type of data allows an agent to infer associations (associative reasoning) and, when the structure of the underlying causal model permits it, to estimate the effect that changing a variable in the environment has on another variable, namely to estimate causal effects (cause-effect reasoning).
In the interventional setting (Experiment 2), the agent could directly set the values of some variables in the environment. This type of data in principle allows an agent to estimate causal effects for any underlying causal model.
In the counterfactual setting (Experiment 3), the agent first had an opportunity to learn about the causal graph through interventions. At the last step of the episode, it was asked a counterfactual question of the form “What would have happened if a different intervention had been made in the previous time-step?”.
Next we will formalize these three settings and patterns of reasoning possible in each, using the graphical model framework (Pearl, 2000; Spirtes et al., 2000; Dawid, 2007)1, and introduce the meta-learning methods that we will use to train agents that are capable of such reasoning.
2.1 CAUSALITY
Causal relationships among random variables can be expressed using causal directed acyclic graphs (DAGs) (see Appendix). A causal DAG is a graphical model that captures both independence and causal relations. Each node Xi corresponds to a random variable, and the joint distribution p(X1, ... ,XN) is given by the product of conditional distributions of each node Xi given its parent nodes pa(Xi), i.e. p(X1:N≡X1,...,XN)= ∏N i=1p(Xi|pa(Xi)).
Edges carry causal semantics: if there exists a directed path fromXi toXj , thenXi is a potential cause of Xj. Directed paths are also called causal paths. The causal effect of Xi on Xj is the conditional distribution ofXj givenXi restricted to only causal paths.
G
A
E H
p(A)
p(E|A) p(H|A,E)
G→E=e
A
E H
p(A)
δE=e p(H|A,E)
An example causal DAG G is given in the figure on the left, where E represents hours of exercise in a week, H cardiac health, andA age. The causal effect ofE onH is the conditional distribution restricted to the path E→H, i.e. excluding the path E←A→H. The variable A is called a confounder, as it confounds the causal effect with non-causal statistical influence.
Simply observing cardiac health conditioning on exercise level from p(H|E) (associative reasoning) cannot answer if change in exercise levels cause changes in cardiac health (cause-effect reasoning), since there is always the possibility that correlation between the two is because of the common confounder of age.
Cause-effect Reasoning. The causal effect can be seen as the conditional distribution p→E=e(H|E = e)2 on the graph G→E=e above (right), resulting from intervening on E by replacing p(E|A) with a delta distribution δE=e (thereby removing the link from A to E) and leaving the remaining conditional distributions p(H|E,A) and p(A) unaltered. The rules of do-calculus (Pearl, 2000; Pearl et al., 2016) tell us how to compute p→E=e(H|E=e) using observations from G. In this case p→E=e(H|E=e)=∑
Ap(H|E=e,A)p(A)3. Therefore, do-calculus enables us to reason in the intervened graph G→E=e even if our observations are from G. This is the scenario captured by our observational setting outlined above. Such inferences are always possible if the confounders are observed, but in the presence of unobserved confounders, for many DAG structures the only way to compute causal effects is by collecting observations directly from G→E, i.e. by actively intervening on the world to fix the value of the variable E= e and observing the remaining variables. In our interventional setting, outlined above, the agent has access to such interventions.
1This approach typically decouples the challenges of causal induction, i.e. of inferring the structure of the causal graph from data, and that of causal reasoning on the induced graph. The formalism we describe here assumes that the structure of the causal graph is known. In our experiments however, our agents concurrently carry out causal induction.
2In the causality literature, this distribution would most often be indicated with p(H|do(E=e)). We prefer to use p→E=e(H|E=e) to highlight that intervening on E results in changing the original distribution p, by structurally altering the causal DAG.
3Notice that conditioning on E=e would instead give p(H|E=e)= ∑
Ap(H|E=e,A)p(A|E=e).
Counterfactual Reasoning. Cause-effect reasoning can be used to correctly answer predictive questions of the type “Does exercising improve cardiac health?” by accounting for causal structure and confounding. However, it cannot answer retrospective questions about what would have happened. For example, given an individual iwho has died of a heart attack, this method would not be able to answer questions of the type “What would the cardiac health of this individual have been had they done more exercise?”. This type of question requires estimating unobserved sources of noise and then reasoning about the effects of this noise under a graph conditioned on a different intervention.
2.2 META-LEARNING
Meta-learning refers to a broad range of approaches in which aspects of the learning algorithm itself are learned from the data. Many individual components of deep learning algorithms have been successfully meta-learned, including the optimizer (Andrychowicz et al., 2016), initial parameter settings (Finn et al., 2017), a metric space (Vinyals et al., 2016), and use of external memory (Santoro et al., 2016).
Following the approach of (Duan et al., 2016; Wang et al., 2016), we parameterize the entire learning algorithm as a recurrent neural network (RNN), and we train the weights of the RNN with model-free reinforcement learning (RL). The RNN is trained on a broad distribution of problems which each require learning. When trained in this way, the RNN is able to implement a learning algorithm capable of efficiently solving novel learning problems in or near the training distribution.
Learning the weights of the RNN by model-free RL can be thought of as the “outer loop” of learning. The outer loop shapes the weights of the RNN into an “inner loop” learning algorithm. This inner loop algorithm plays out in the activation dynamics of the RNN and can continue learning even when the weights of the network are frozen. The inner loop algorithm can also have very different properties from the outer loop algorithm used to train it. For example, in previous work this approach was used to negotiate the exploration-exploitation tradeoff in multi-armed bandits (Duan et al., 2016) and learn algorithms which dynamically adjust their own learning rates (Wang et al., 2016; 2018). In the present work we explore the possibility of obtaining a causally-aware inner-loop learning algorithm. See the Appendix for a more formal approach to meta-learning.
3 TASK SETUP AND AGENT ARCHITECTURE
In the experiments, in each episode the agent interacted with a different causal DAG G. G was drawn randomly from the space of possible DAGs under the constraints given in the next paragraph. Each episode consisted of T steps, and was divided into two phases: information and quiz. The information phase, corresponding to the first T−1 steps, allowed the agent to collect information by interacting with or passively observing samples from G. The agent could potentially use this information to infer the connectivity and weights of G. The quiz phase, corresponding to the final step T , required the agent to exploit the causal knowledge it collected in the information phase, to select the node with the highest value under a random external intervention.
Causal graphs, observations, and actions. We generated all graphs onN=5 nodes, with edges only in the upper triangular of the adjacency matrix (this guarantees that all the graphs obtained are DAGs), with edge weights, wji∈{−1,0,1} (uniformly sampled), and removed 300 for held-out testing. The remaining 58749 (or 3N(N−1)/2−300) were used as the training set. Each node’s value, Xi ∈R, was Gaussiandistributed. The values of parentless nodes were drawn from N (µ = 0.0,σ = 0.1). The conditional probability of a node with parents was p(Xi|pa(Xi)) = N (µ = ∑ jwjiXj,σ = 0.1), where pa(Xi) represents the parents of nodeXi in G. The values of the 4 observable nodes (the root node, was always hidden), were concatenated to create vt and provided to the agent in its observation vector, ot=[vt,mt], wheremt is a one-hot vector indicating external intervention during the quiz phase (explained below).4
In both phases, on each step, t, the agent’s action, at, was a discrete choice from the range {1...2(N−1)}. Action choices in {1...N − 1} corresponded to information actions, and choices in {N ...2(N − 1)} corresponded to quiz actions.
4While a simple domain provides the most unencumbered test for causal reasoning, we also carried out simulations with more complex causal graphs (graphs with non-linear connections, and larger graphs of size N = 6) and stronger requirements for generalization (holding-out entire equivalence classes of causal graphs from training) to demonstrate the robustness of our approach (see Appendix).
Information phase. In the information phase, an information action, at, caused an intervention on the at-th node, setting its value to Xat = 5. We choose an intervention value outside the likely range of sampled observations, to facilitate learning of the causal graph. The observation from the intervened graph, G→Xat=5, was sampled similarly to G, except the incoming edges toXat were severed, and its intervened value was used for conditioning its children’s values. The node values in G→Xat=5 were distributed as p→Xi=5(X1:N\i|Xi=5). If a quiz action was chosen during the information phase, it was ignored, the G values were sampled as if no intervention had been made, and the agent was given a penalty of rt=−5 in order to encourage it to take quiz actions at only during quiz phase. After the action was selected, an observation was provided to the agent. The default length of this phase was fixed to T =N =5 since in the noise-free limit, a minimum of T−1=4 interventions are required in general to resolve the causal structure, and score perfectly on the test phase.
Quiz phase. In the quiz phase, one non-hidden node was selected at random to be intervened on externally, Xj, and its value was set to −5. We chose an intervention value of −5 never previously observed by the agent in that episode, thus disallowing the agent from memorizing the results of interventions in the information phase to perform well on the quiz phase. The agent was informed of this by the observed mT−1 (a one-hot vector which indicated which node would be intervened on), from the final pre-quiz phase time-step, T−1. Note, mt was set to a zero-vector for steps t<T−1. A quiz action, aT , chosen by the agent indicated the node whose value would be given to the agent as a reward. In other words, the agent would receive reward, rT =XaT−(N−1). Again, if a quiz action was chosen during the information phase, the node values were not sampled and the agent was simply given a penalty of rT =−5. Active vs passive agents. Our agents had to perform two distinct tasks during the information phase: a) actively choose which nodes to set values on, and b) infer the causal DAG from its observations. We refer to this setup as the “active” condition. To control for (a), we created the “passive” condition, where the agent’s information phase actions are not learned. To provide a benchmark for how well the active agent can perform task (a), we fixed the passive agent’s intervention policy to be an exhaustive sweep through all observable nodes. This is close to optimal for this domain – in fact it is the optimal policy for noise-free conditional node values. We also compared the active agent’s performance to a baseline agent whose policy is to intervene randomly on the observable nodes in the information phase, in the Appendix.
Two kinds of learning The “inner loop” of learning (see Section 2.2) occurs within each episode where the agent is learning from the evidence it gathers during the information phase in order to perform well in the quiz phase. The same agent then enters a new episode, where it has to repeat the task on a different DAG. Test performance is reported on DAGs that the agent has never previously seen, after all the weights of the RNN have been fixed. Hence, the only transfer from training to test (or the “outer loop” of learning) is the ability to discover causal dependencies based on observations in the information phase, and to perform causal inference in the quiz phase.
Agent Architecture and Training
We used a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) (with 96 hidden units) that, at each time-step t, receives a concatenated vector containing [ot,at−1,rt−1] as input, where ot is the observation5, at−1 is the previous action (as a one-hot vector) and rt−1 the reward (as a single real-value)6. The outputs, calculated as linear projections of the LSTM’s hidden state, are a set of policy logits (with dimensionality equal to the number of available actions), plus a scalar baseline. The policy logits are transformed by a softmax function, and then sampled to give a selected action.
Learning was by asynchronous advantage actor-critic (Mnih et al., 2016). In this framework, the loss function consists of three terms – the policy gradient, the baseline cost and an entropy cost. The baseline cost was weighted by 0.05 relative to the policy gradient cost. The weighting of the entropy cost was annealed over the course of training from 0.05 to 0. Optimization was done by RMSProp with =10−5, momentum = 0.9 and decay = 0.95. Learning rate was annealed from 3×10−6 to 0. For all experiments, after training, the agent was tested with the learning rate set to zero, on a held-out test set.
5’Observation’ ot refers to the reinforcement learning term, i.e. the input from the environment to the agent. This is distinct from observations in the causal sense (referred to as observational data) i.e. samples from a casual structure where there is no information about interventions that have been carried out.
6These are both set to zero for the first step in an episode.
4 EXPERIMENTS
Our three experiments (observational, interventional, and counterfactual) differed in the properties of the vt that was observed by the agent during the information phase, and thereby limited the extent of causal reasoning possible within each data setting. Our measure of performance is the reward earned in the quiz phase for held-out DAGs. Choosing a random node node in the quiz phase results in a reward of −5/4=−1.25, since one node (the externally intervened node) always has value−5 and the others have on average 0 value. By learning to simply avoid the externally intervened node, the agent can earn on average 0 reward. Consistently picking the node with the highest value in the quiz phase requires the agent to perform causal reasoning. For each agent, we take the average reward earned across 1200 episodes (300 held-out test DAGs, with 4 possible external interventions). We train 12 copies of each agent and report the average reward earned by these, with error bars showing 95% confidence intervals.
4.1 EXPERIMENT 1: OBSERVATIONAL SETTING
In Experiment 1, the agent could neither intervene to set the value of variables in the environment, nor observe any external interventions. In other words, it only received observations from G, not G→Xj (whereXj is a node that has been intervened on). This limits the extent of causal inference possible. In this experiment, we tested six agents, four of which were learned: “Observational”, “Long Observational”, “Active Conditional”, “Passive Conditional”, “Observational MAP Baseline”(not learned) and the “Optimal Associative Baseline” (not learned). We also ran two other standard RL baselines—see the Appendix for details.
Observational Agents: In the information phase, the actions of the agent were ignored7, and the observational agent always received the values of the observable nodes as sampled from the joint distribution associated with G. In addition to the default T=5 episode length, we also trained this agent with 4× longer episode length (Long Observational Agent), to measure performance increase with more observational data.
Conditional Agents: The information phase actions corresponded to observing a world in which the selected nodeXj is equal toXj=5, and the remaining nodes are sampled from the conditional distribution p(X1:N\j|Xj=5), whereX1:N\j indicates the set of all nodes exceptXj. This differs from intervening on the variableXj by setting it to the valueXj=5, since here we take a conditional sample from G rather than from G→Xj=5 (i.e. from p→Xj=5(X1:N\j|Xj=5)), and inference about the corresponding node’s parents is possible. Therefore, this agent still has access to only observational data, as with the observational agents. However, on average it receives more diagnostic information about the relation between the random variables in G, since it can observe samples where a node takes a value far outside the likely range of sampled observations. We run active and passive versions of this agent as described in Section 3
Optimal Associative Baseline: This baseline receives the true joint distribution p(X1:N) implied by the DAG in that episode, therefore it has full knowledge of the correlation structure of the environment8. It can therefore do exact associative reasoning of the form p(Xj|Xi=x), but cannot do any cause-effect reasoning of the form p→Xi=x(Xj|Xi=x). In the quiz phase, this baseline chooses the node that has the maximum value according to the true p(Xj|Xi=x) in that episode, whereXi is the node externally intervened upon, and x=−5. Observational MAP Baseline: This baseline follows the traditional method of separating causal induction and causal inference. We first carry out exact maximum a posteriori (MAP) inference over the space of DAGs in each episode (i.e. causal induction) by selecting the DAG (GMAP) of the 59049 unique possibilities that maximizes the likelihood of the data observed, v1:T , by the Observational Agent in that episode. This is equivalent to maximizing the posterior probability since the prior over graphs is uniform.
RESULTS
We focus on three key questions in this experiment: (i) Can our agents learn to do associative reasoning with observational data?, (ii) Can they learn to do cause-effect reasoning from observational data?, and (iii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
7These agents also did not receive the out-of-phase action penalties during the information phase since their actions are totally ignored.
8Notice that the agent does not know the graphical structure, i.e. it does not know which nodes are parents of which other nodes
For (i), we see that the Observational Agents achieve reward above the random baseline (see the Appendix), and that more observations (Long Observational Agent) lead to better performance (Fig. 2a), indicating that the agent is indeed learning the statistical dependencies between the nodes. We see that the performance of the Passive-Conditional Agent is better than either of the Observational Agents, since the data it observes is very informative about the statistical dependencies in the environment. Finally, we see that the PassiveConditional Agent’s performance is comparable (in fact surpasses as discussed below) the performance of the Optimal Associative Baseline, indicating that it is able to do perfect associative inference.
Figure 1: Active and Passive Conditional Agents.
For (ii), we see the crucial result that the Passive-Conditional Agent's performance is significantly above the Optimal Associative Baseline, i.e. it performs better than what is possible using only correlations. We compare their performances, split by whether or not the node that was intervened on in the quiz phase of the episode has a parent (Fig. 2b). If the intervened node Xj has no parents, then G = G→Xj, and there is no advantage to being able to do cause-effect reasoning. We see indeed that the Passive-Conditional agent performs better than the Optimal Associative Baseline only when the intervened node has parents (denoted by hatched bars in Fig. 2b), indicating that this agent is able to carry out some cause-effect reasoning, despite access to only observational data – i.e. it learns some form of do-calculus. We show the quiz phase for an example test DAG in Fig. 2c, seeing that the Optimal Associative Baseline chooses according to the node values predicted by G whereas the Passive-Conditional Agent chooses according to the node values predicted by G→Xj. For (iii), we see (Fig. 2) that the Active-Conditional Agent's performance is only marginally below the performance of the Passive-Conditional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.2 EXPERIMENT 2: INTERVENTIONAL SETTING
In Experiment 2, the agent receives interventional data in the information phase – it can choose to intervene on any observable node, Xj, and observe a sample from the resulting graph G→Xj. As discussed in Section 2.1, access to intervention data permits cause-effect reasoning even in the presence of unobserved confounders, a feat which is in general impossible with access only to observational data. In this experiment, we test four new agents, two of which were learned: "Active Interventional", "Passive Interventional", "Interventional MAP Baseline" (not learned), and "Optimal Cause-Effect Baseline" (not learned).
Interventional Agents: The information phase actions correspond to performing an intervention on the selected node Xj and sampling from G→Xj (see Section 3 for details). We run active and passive versions of this agent as described in Section 3.
Interventional MAP Baseline: This baseline infers a DAG by maximizing the likelihood of the data observed by the Passive Interventional Agent in that episode. In the quiz phase, we predict the values of each node according to GMAP→Xj, where Xj is the node externally intervened upon (i.e. causal inference), and choose the node with the highest value.
Optimal Cause-Effect Baseline: This baseline receives the true DAG, G. In the quiz phase, it chooses the node that has the maximum value according to G→Xj, where Xj is the node externally intervened upon.
RESULTS
Figure 3: Active and Passive Interventional Agents.
For (i) we see in Fig. 4a that the Passive-Interventional Agent’s performance is comparable to the Optimal Cause-Effect Baseline, indicating that it is able to do close to perfect cause-effect reasoning in this domain.
For (ii) we see in Fig. 4a the crucial result that the Passive-Interventional Agent’s performance is significantly better than the Passive-Conditional Agent. We compare the performances of these two agents, split by whether the node that was intervened on in the quiz phase of the episode had unobserved confounders with other variables in the graph (Fig. 4b). In confounded cases, as described in Section 2.1, cause-effect reasoning is impossible with only observational data. We see that the performance of the Passive-Interventional Agent does not vary significantly with confoundedness, whereas the performance of the Passive-Conditional Agent is significantly lower in the confounded cases. This indicates that the improvement in the performance of the agent that has access to interventional data (as compared to the agents that had access to only observational data) is largely driven by its ability to also do cause-effect reasoning in the presence of confounders. This is highlighted by Fig. 4c, which shows the quiz phase for an example DAG, where the Passive-Conditional agent is unable to resolve the confounder, but the Passive-Interventional agent can.
For (iii), we see in Fig. 3 that the Active-Interventional Agent’s performance is only marginally below the performance of the near optimal Passive-Interventional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.3 EXPERIMENT 3: COUNTERFACTUAL SETTING
In Experiment 3, the agent was again allowed to make interventions as in Experiment 2, but in this case the quiz phase task entailed answering a counterfactual question. We explain here what a counterfactual question in this domain looks like. Consider the conditional distribution p(Xi | pa(Xi)) = N(∑_j wji Xj, 0.1) described in Section 3, written as Xi = ∑_j wji Xj + ε, where ε is distributed as N(0.0, 0.1) and represents the specific randomness introduced when taking one sample from the DAG. After observing the nodes X1:N in the DAG in one sample, we can infer this specific randomness εi for each node Xi (i.e. abduction, as described in the Appendix) and answer counterfactual questions like "What would the values of the nodes be, had Xj in that particular sample taken on a different value than what we observed?", for any of the nodes Xj. We test 2 new learned agents: "Active Counterfactual" and "Passive Counterfactual".
Counterfactual Agents: This agent is exactly analogous to the Interventional agent, with the addition that the exogenous noise in the last information phase step t = T−1 (where, say, Xp = +5) is stored, and the same noise is used in the quiz phase step t = T (where, say, Xf = −5). While the question our agents have had to answer correctly so far in order to maximize their reward in the quiz phase was "Which of the nodes X1:N\j will have the highest value when Xf is set to −5?", in this setting we ask "Which of the nodes X1:N\j would have had the highest value in the last step of the information phase, if instead of having Xp = +5, we had Xf = −5?". We run active and passive versions of this agent as described in Section 3.

Optimal Counterfactual Baseline: This baseline receives the true DAG and does exact abduction based on the exogenous noise observed in the penultimate step of the information phase, and combines this correctly with the appropriate interventional inference on the true DAG in the quiz phase.
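To make the noise-reuse concrete, here is a minimal sketch (our own illustration, assuming the linear-Gaussian SCM of Section 3 with an upper-triangular weight matrix W, so index order is topological): abduct the noise behind the penultimate information-phase sample, then replay it under the quiz-phase intervention.

```python
import numpy as np

def abduct_noise(W, x):
    """Recover the exogenous noise behind one observed sample x,
    under X_i = sum_j W[j, i] * X_j + eps_i."""
    return x - x @ W

def counterfactual_sample(W, x_obs, node, value):
    """What would x_obs have been, had `node` been set to `value`
    instead? Recomputes nodes in topological order, reusing the noise."""
    eps = abduct_noise(W, x_obs)
    x = np.zeros_like(x_obs)
    for i in range(len(x)):
        x[i] = value if i == node else x @ W[:, i] + eps[i]
    return x
```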
RESULTS
We focus on two key questions in this experiment: (i) Can our agents learn to do counterfactual reasoning?, (ii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
For (i), we see that the Passive-Counterfactual Agent achieves higher reward than the Passive-Interventional Agent and the Optimal Cause-Effect Baseline. To evaluate whether this difference results from the agent's use of abduction (see the Appendix for details), we split the test set into two groups, depending on whether or not the decision for which node will have the highest value in the quiz phase is affected by exogenous noise, i.e. whether or not the node with the maximum value in the quiz phase changes if the noise is resampled. We find comparable performance between the Passive-Counterfactual and Passive-Interventional Agents in the cases where the maximum values are distinct; however, the Passive-Counterfactual Agent significantly outperforms the Passive-Interventional Agent in cases where there are degenerate maximum values.

Figure 6: Active and Passive Counterfactual Agents.
For (ii), we see in Fig. 6 that the Active-Counterfactual Agent’s performance is only marginally below the performance of the Passive-Counterfactual agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
5 SUMMARY OF RESULTS
We introduced and tested a framework for learning causal reasoning in various data settings—observational, interventional, and counterfactual—using deep meta-RL. Crucially, our approach did not require explicit encoding of formal principles of causal inference. Rather, by optimizing an agent to perform a task that depended on causal structure, the agent learned implicit strategies to use the available data for causal reasoning, including drawing inferences from passive observation, actively intervening, and making counterfactual predictions. Below, we summarize the key results from each of the three experiments.
In Section 4.1 and Fig. 2, we show that the agent learns to perform do-calculus. In Fig. 2(a) we see that, compared to the highest possible reward achievable without causal knowledge, the trained agent received more reward. This observation is corroborated by Fig. 2(b) which shows that performance increased selectively in cases where do-calculus made a prediction distinguishable from the predictions based on correlations. These are situations where the externally intervened node had a parent – meaning that the intervention resulted in a different graph.
In Section 4.2 and Fig. 4, we show that the agent learns to resolve unobserved confounders using interventions (a feat impossible with only observational data). In Fig. 4(a) we see that the agent with access to interventional data performs better than an agent with access to only observational data. Fig. 4(b) shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph. In this section we also compare the agent’s performance to a MAP estimate of the causal structure and find that the agent’s performance matches it, indicating that the agent is indeed doing close to optimal causal inference.
In Section 4.3 and Fig. 5, we show that the agent learns to use counterfactuals. In Fig. 5(a) we see that the agent with additional access to the specific randomness in the test phase performs better than an agent with access to only interventional data. In Fig. 5(b), we find that the increased performance is observed only in cases where the maximum mean value in the graph is degenerate, and optimal choice is affected by the exogenous noise – i.e. where multiple nodes have the same value on average and the specific randomness can be used to distinguish their actual values in that specific case.
6 DISCUSSION AND FUTURE WORK
This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning. This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings. Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. estimating causal effects and answering counterfactual questions), and despite advances in both (Ortega & Stocker, 2015; Bramley et al., 2017; Parida et al., 2018; Sen et al., 2017; Forney et al., 2017; Lattimore et al., 2016), inducing models often requires assumptions that are difficult to fit to complex real-world conditions. By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required. Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task—i.e. to perform active learning. In our experimental domain, our agents’ active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations.
Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive. To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks. Though the results here are a first step in this direction using relatively standard deep RL components, our approach will likely benefit from more advanced architectures (e.g. Espeholt et al., 2018; Hessel et al., 2018; Hester et al., 2017) that allow longer, more complex episodes, as well as models which are more explicitly compositional (e.g. Battaglia et al., 2018; Andreas et al., 2016) or have richer semantics (e.g. Ganin et al., 2018), that more explicitly leverage symmetries like equivalence classes in the environment.
A ADDITIONAL BASELINES
Figure 7: Reward distribution (percentage of DAGs vs. reward earned) for the Q-total, Q-episode, and Optimal agents.
We can also compare the performance of these agents to two standard model-free RL baselines. The Q-total agent learns a Q-value for each action across all steps for all the episodes. The Q-episode agent learns a Q-value for each action conditioned on the input at each time step [ot, at−1, rt−1], but with no LSTM memory to store previous actions and observations. Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward. The Q-episode agent essentially makes sure not to choose the arm that is indicated by mt to be the external intervention (which is assured to be equal to −5), and essentially chooses randomly otherwise, giving an average reward of 0.
B FORMAL DESCRIPTION OF META-LEARNING
Consider a distribution D over Markov Decision Processes (MDPs). We train an agent with memory (in our case an RNN-based agent) on this distribution. In each episode, we sample a task m ∼ D. At each step t within an episode, the agent sees an observation ot, executes an action at, and receives a reward rt. Both at−1 and rt−1 are given as additional inputs to the network. Thus, via the recurrence of the network, each action is a function of the entire trajectory Ht = {o0, a0, r0, ..., ot−1, at−1, rt−1, ot} of the episode. Because this function is parameterized by the neural network, its complexity is limited only by the size of the network.
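To illustrate the recurrence, the sketch below rolls out one inner-loop episode; `env` and `agent_step` are hypothetical interfaces standing in for the task MDP and the trained RNN, not the paper's code.

```python
import numpy as np

def one_hot(a, n):
    v = np.zeros(n)
    v[a] = 1.0
    return v

def rollout(env, agent_step, h, episode_len):
    """One inner-loop episode: the RNN consumes [o_t, a_{t-1}, r_{t-1}]
    plus its hidden state, so each action a_t is a function of the
    whole trajectory H_t."""
    o, a_prev, r_prev, total = env.reset(), 0, 0.0, 0.0
    for t in range(episode_len):
        x = np.concatenate([o, one_hot(a_prev, env.n_actions), [r_prev]])
        a, h = agent_step(x, h)  # policy logits -> sampled action inside
        o, r = env.step(a)
        a_prev, r_prev, total = a, r, total + r
    return total
```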
C ABDUCTION-ACTION-PREDICTION METHOD FOR COUNTERFACTUAL REASONING
Pearl et al. (2016)'s "abduction-action-prediction" method prescribes one method for answering counterfactual queries, by estimating the specific unobserved makeup of individual i and by transferring it to the counterfactual world. Assume, for example, the following model for G of Section 2.1: E = wAE A + η, H = wAH A + wEH E + ε, where the weights wij represent the known causal effects in G and ε and η are terms of (e.g.) Gaussian noise that represent the unobserved randomness in the makeup of each individual9. Suppose that for individual i we observe: A = ai, E = ei, H = hi. We can answer the counterfactual question of "What if individual i had done more exercise, i.e. E = e′, instead?" by: a) Abduction: estimate the individual's specific makeup with εi = hi − wAH ai − wEH ei, b) Action: set E to more exercise e′, c) Prediction: predict a new value for cardiac health as h′ = wAH ai + wEH e′ + εi.
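Translated into code, the three steps for this example read as follows (a direct transcription of the A, E, H model above; names are ours).

```python
def abduction_action_prediction(w_AH, w_EH, a_i, e_i, h_i, e_new):
    """Counterfactual cardiac health for individual i, had they done
    exercise e_new instead of e_i, in H = w_AH*A + w_EH*E + eps."""
    eps_i = h_i - w_AH * a_i - w_EH * e_i       # (a) abduction
    e_prime = e_new                             # (b) action: set E = e'
    return w_AH * a_i + w_EH * e_prime + eps_i  # (c) prediction
```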
D EXPERIMENT 4: NON-LINEAR CAUSAL GRAPHS
Figure 8: Experiment 4 results — average reward for the Observational, Long Observational, Passive-Interventional, MAP, and Optimal Cause-Effect agents.
The purview of the previous experiments was to show a proof of concept on a simple tractable system, demonstrating that causal induction and inference can be learned and implemented via a meta-learned agent. In this experiment, we generalize some of the results to nonlinear, non-Gaussian causal graphs, which are more typical of real-world causal graphs, and demonstrate that our results hold without loss of generality on such systems.
Here we investigate causal DAGs with a quadratic dependence on the parents by changing the conditional distribution to p(Xi | pa(Xi)) = N((1/Ni) ∑_j wji (Xj + Xj²), σ). Here, although each node is normally distributed given its parents, the joint distribution is not multivariate Gaussian due to the non-linearity in how the means are determined. We find that the Long-Observational agent achieves more reward than the Observational agent, indicating that the agent is in fact learning the statistical dependencies between the nodes within an episode. We also find that the Active-Interventional agent is not far behind the performance of the MAP baseline, and achieves reward well above the Long-Observational agent10. The fact that the MAP baseline gets so close to the Optimal Cause-Effect baseline indicates that the Active agent is choosing close to optimal actions.

9These are zero in expectation, so without access to their value for an individual we simply use G: E = wAE A, H = wAH A + wEH E to make causal predictions.
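As a sketch, one could sample from this quadratic model as follows, assuming Ni denotes the number of parents of Xi (our reading of the normalization) and reusing the upper-triangular weight convention of Section 3.

```python
import numpy as np

def sample_quadratic(W, sigma=0.1, rng=np.random):
    """One ancestral sample from the Experiment 4 model:
    X_i ~ N(mean over parents j of w_ji * (X_j + X_j^2), sigma)."""
    n = W.shape[0]
    x = np.zeros(n)
    for i in range(n):  # upper-triangular W => this order is topological
        parents = np.flatnonzero(W[:, i])
        mu = (np.mean([W[j, i] * (x[j] + x[j] ** 2) for j in parents])
              if parents.size else 0.0)
        x[i] = mu + sigma * rng.standard_normal()
    return x
```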
E EXPERIMENT 5: LARGER CAUSAL GRAPHS WITH GENERALIZATION TO NEW EQUIVALENCE CLASSES
Figure 9: Experiment 5 results (average reward). (a) Long-Observational, Observational, Passive-Conditional, Passive-Interventional, and Passive-Counterfactual agents. (b) Random-Interventional, Active-Interventional, Passive-Interventional, and Optimal Cause-Effect agents.
In this experiment, the entire equivalence class of each test graph was held out from the training set11. Performance on the test set therefore indicates generalization of the inference procedures learned to previously unseen equivalence classes of causal DAGs. For these experiments, we used graphs with N = 6 nodes, because 5-node graphs have too few equivalence classes to partition in this way. All other details were the same as in the main paper.
We see in Fig. 9a that the agents learn to generalize well to these held-out examples, and we find the same pattern of behavior noted in the main text, where the rewards earned are ordered such that Observational agent < Passive-Conditional agent < Passive-Interventional agent < Passive-Counterfactual agent. We see additionally in Fig. 9b that the Active-Interventional agent performs on par with the Passive-Interventional agent (which is allowed to see the results of interventions on all nodes) and significantly better than an additional baseline we use here, the Random-Interventional agent, whose information phase policy is to intervene on nodes at random, indicating that the intervention policy learned by the Active agent is good.
F GRAPHICAL MODELS AND BELIEF NETWORKS
Graphical models (Pearl, 1988; Bishop, 2006; Koller & Friedman, 2009; Barber, 2012; Murphy, 2012) are a marriage between graph and probability theory that allows one to graphically represent and assess statistical dependence. In the following sections, we give some basic definitions and describe a method (d-separation) for graphically assessing statistical independence in belief networks.
BASIC DEFINITIONS
Figure 10: (a) a directed acyclic graph over X1, X2, X3, X4; (b) a cyclic graph obtained by adding a link from X4 to X1.
10The conditional distribution p(X1:N\j|Xj=5), and therefore Conditional agents, were non-trivial to calculate for the quadratic case.
11The hidden node was guaranteed to be a root node by rejecting all DAGs where the hidden node has parents.
A directed acyclic graph (DAG) is a directed graph with no directed paths starting and ending at the same node. For example, the directed graph in Fig. 10(a) is acyclic. The addition of a link from X4 to X1 gives rise to a cyclic graph (Fig. 10(b)).
A node Xi with a directed link to Xj is called a parent of Xj. In this case, Xj is called a child of Xi. A node is a collider on a specified path if it has (at least) two parents on that path. Notice that a node can be a collider on a path and a non-collider on another path. For example, in Fig. 10(a), X3 is a collider on the path X1 → X3 ← X2 and a non-collider on the path X2 → X3 → X4. A node Xi is an ancestor of a node Xj if there exists a directed path from Xi to Xj. In this case, Xj is a descendant of Xi. A graphical model is a graph in which nodes represent random variables and links express statistical relationships between the variables.
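These definitions translate directly into code; for instance, a collider test on a path could look like the sketch below, where `parents[n]` is an assumed mapping from each node to its set of parents.

```python
def is_collider_on_path(path, k, parents):
    """path[k] is a collider on `path` iff both of its path neighbours
    are parents of path[k], i.e. both edges point into it."""
    node = path[k]
    return path[k - 1] in parents[node] and path[k + 1] in parents[node]
```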
A belief network is a directed acyclic graphical model in which each node Xi is associated with the conditional distribution p(Xi | pa(Xi)), where pa(Xi) indicates the parents of Xi. The joint distribution of all nodes in the graph, p(X1:N), is given by the product of all conditional distributions, i.e. p(X1:N) = ∏_{i=1}^{N} p(Xi | pa(Xi)).
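This factorization licenses ancestral sampling: draw each node given its parents, visiting nodes in topological order. A minimal sketch, with assumed interfaces:

```python
def ancestral_sample(order, parents, cond_sample):
    """Draw one joint sample from a belief network. `order` is a
    topological ordering, `parents[i]` lists the parents of node i, and
    `cond_sample(i, pa_vals)` draws from p(X_i | pa(X_i))."""
    x = {}
    for i in order:
        x[i] = cond_sample(i, {j: x[j] for j in parents[i]})
    return x
```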
ASSESSING STATISTICAL INDEPENDENCE IN BELIEF NETWORKS
Given the sets of random variables X, Y and Z, X and Y are statistically independent given Z (X ⊥ Y | Z) if all paths from any element of X to any element of Y are closed (or blocked). A path is closed if at least one of the following conditions is satisfied:
(Ia) There is a non-collider on the path which belongs to the conditioning set Z.
(Ib) There is a collider on the path such that neither the collider nor any of its descendants belong to the conditioning set Z.
| 1. What is the main contribution of the paper regarding causal reasoning and RL?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the experimental settings and results presented in the paper?
4. Are there any literature gaps or missing references in the paper regarding causality, bandits, reinforcement learning, and related topics?
5. What are the clarity issues and unclear details in the description of the paper's content?
6. Can the paper's agents work on more diverse causal graphs beyond the limited simulated dataset?
7. How would the reviewer improve the paper, especially regarding its claims, experiments, and comparisons with other works? | Review | Review
This paper aims at training agents to perform causal reasoning with RL in three settings: observational (the agent can only obtain one observational sample at a time), interventional (the agent can obtain an interventional sample at a time for a given perfect intervention on a given variable) and a counterfactual setting (the agent can obtain interventional samples, but the prediction is about the case in which the same noise variables were sampled, but a different intervention was performed). In each of these settings, after T-1 steps of information gathering, the algorithm is supposed to select the node with the highest value in the last step. Different types of agents are evaluated on a limited simulated dataset, with weak and not completely interpretable results.
Pros:
-Using RL to learn causal reasoning is a very interesting and worthwhile task.
-The paper tries to systematize the comparison of different settings with different available data.
Cons:
-Task does not necessarily require causal knowledge (predict the node with the highest value in this restricted linear setting)
-Very limited experimental setting (causal graphs with 5 nodes, one of which hidden, linear Gaussian with +/- 1 coefficients, with interventions in training set always +5, and in test set always -5) and lukewarm results that don't seem enough for the strong claims. This is one of the easiest ways to improve the paper.
-In the rare cases in which there are some causal baselines (e.g. MAP baseline), they seem to outperform the proposed algorithms (e.g. Experiment 2)
-Somehow the “active” setting in which the agent can decide the intervention targets seems to always perform worse than the “passive” setting, in which the targets are already chosen. This is very puzzling for me, I thought that choosing the targets should improve the results...
-Seem to be missing a lot of literature on causality and bandits, or reinforcement learning (for example: https://arxiv.org/abs/1606.03203, https://arxiv.org/abs/1701.02789, http://proceedings.mlr.press/v70/forney17a.html)
-Many details were unclear to me and in general the clarity of the description could be improved
In general, I think the paper could be opening up an interesting research direction, but unfortunately I’m not sure it is ready yet.
Details:
-Abstract: “Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains is often difficult because the frameworks make idealized assumptions”. Although possibly true, this sounds a bit strong, given the paper’s results. What assumptions do your agents make? At the moment the agents you presented work on an incredibly small subset of causal graphs (not even all linear gaussian models with a hidden variable…), and it’s even not compared properly against the standard causal reasoning/causal discovery algorithms...
-Footnote 1: “this formalism for causal reasoning assumes that the structure of the causal graph is known” - (Spirtes et al. 2001) present several causal discovery (here “causal induction”) methods that recover the graph from data.
-Section 2.1 “X_i is a potential cause of X_j” - it’s a cause, not potential, maybe potentially not direct.
-Section 3: 3^(N-1)/2 is not the number of possible DAGs; that's described by this sequence: https://oeis.org/A003024. Rather, that is the number of (possibly cyclic) graphs with either -1, 1 or 0 on the edges.
-“The values of all but one node (the root node, which is always hidden)” - so is it 4 or 5 nodes? Or it is that all possible DAGs on N=6 nodes one of which is hidden? I’m asking because in the following it seems you can intervene on any of the 5 nodes…
-The intervention action is to set a given node to +5 (not really clear why), while in the quiz phase (in which the agent tries to predict the node with the highest variable) there is an intervention on a known node that is set to -5 (again not clear why, but different from the interventions seen in the T-1 steps).
-Active-Conditional is only marginally below Passive-Conditional, “indicating that when the agent is allowed to choose its actions, it makes reasonable choices” - not really, it should perform better, not “marginally below”... Same for all the other settings
-Why not use the MAP baseline for the observational case?
-What data does the Passive Conditional algorithm use in Experiment 2? Only observations (so a subset of the data)?
-What are the unobserved confounders you mention in the results of Experiment 2? I thought there is only one unobserved confounder (the root node)? Where do the others come from?
-The counterfactual setting possibly lacks an optimal algorithm? |
ICLR | Title
Causal Reasoning from Meta-reinforcement learning
Abstract
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions. Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
1 INTRODUCTION
Many machine learning algorithms are rooted in discovering patterns of correlation in data. While this has been sufficient to excel in several areas (Krizhevsky et al., 2012; Cho et al., 2014), sometimes the problems we are interested in are fundamentally causal. Answering questions such as “Does smoking cause cancer?” or “Was this person denied a job due to racial discrimination?” or “Did this marketing campaign cause sales to go up?” all require an ability to reason about causes and effects and cannot be achieved by purely associative inference. Even for problems that are not obviously causal, like image classification, it has been suggested that some failure modes emerge from lack of causal understanding. Causal reasoning may be an essential component of natural intelligence and is present in human babies, rats and even birds (Leslie, 1982; Gopnik et al., 2001; 2004; Blaisdell et al., 2006; Lagnado et al., 2013). There is a rich literature on formal approaches for defining and performing causal reasoning (Pearl, 2000; Spirtes et al., 2000; Dawid, 2007; Pearl et al., 2016).
Here we investigate whether procedures for learning and using causal structure can be produced by meta-learning. The approach of meta-learning is to learn the learning (or inference) procedure itself, directly from data. We adopt the specific method of Duan et al. (2016) and Wang et al. (2016), training a recurrent neural network (RNN) through model-free reinforcement learning. We train on a large family of tasks, each underpinned by a different causal structure.
The use of meta-learning avoids the need to manually implement explicit causal reasoning methods in an algorithm, offers advantages of scalability by amortizing computations, and allows automatic incorporation of complex prior knowledge (Andrychowicz et al., 2016; Wang et al., 2016; Finn et al., 2017). Additionally, by learning end-to-end, the algorithm has the potential to find the internal representations of causal structure best suited for the types of causal inference required.
2 PROBLEM SPECIFICATION AND APPROACH
This work probed how an agent could learn to perform causal reasoning in three distinct settings – observational, interventional, and counterfactual – corresponding to different types of data available to the agent during the first phase of an episode.
In the observational setting (Experiment 1), the agent could only obtain passive observations from the environment. This type of data allows an agent to infer associations (associative reasoning) and, when the structure of the underlying causal model permits it, to estimate the effect that changing a variable in the environment has on another variable, namely to estimate causal effects (cause-effect reasoning).
In the interventional setting (Experiment 2), the agent could directly set the values of some variables in the environment. This type of data in principle allows an agent to estimate causal effects for any underlying causal model.
In the counterfactual setting (Experiment 3), the agent first had an opportunity to learn about the causal graph through interventions. At the last step of the episode, it was asked a counterfactual question of the form “What would have happened if a different intervention had been made in the previous time-step?”.
Next we will formalize these three settings and patterns of reasoning possible in each, using the graphical model framework (Pearl, 2000; Spirtes et al., 2000; Dawid, 2007)1, and introduce the meta-learning methods that we will use to train agents that are capable of such reasoning.
2.1 CAUSALITY
Causal relationships among random variables can be expressed using causal directed acyclic graphs (DAGs) (see Appendix). A causal DAG is a graphical model that captures both independence and causal relations. Each node Xi corresponds to a random variable, and the joint distribution p(X1, ..., XN) is given by the product of conditional distributions of each node Xi given its parent nodes pa(Xi), i.e. p(X1:N ≡ X1, ..., XN) = ∏_{i=1}^{N} p(Xi | pa(Xi)).
Edges carry causal semantics: if there exists a directed path from Xi to Xj, then Xi is a potential cause of Xj. Directed paths are also called causal paths. The causal effect of Xi on Xj is the conditional distribution of Xj given Xi restricted to only causal paths.
Figure: the causal DAG G over nodes A, E, H with factors p(A), p(E|A), p(H|A,E) (left), and the intervened DAG G→E=e, in which p(E|A) is replaced by δE=e (right).
An example causal DAG G is given in the figure on the left, where E represents hours of exercise in a week, H cardiac health, and A age. The causal effect of E on H is the conditional distribution restricted to the path E→H, i.e. excluding the path E←A→H. The variable A is called a confounder, as it confounds the causal effect with non-causal statistical influence.
Simply observing cardiac health conditioned on exercise level, i.e. p(H|E) (associative reasoning), cannot tell us whether changes in exercise levels cause changes in cardiac health (cause-effect reasoning), since there is always the possibility that the correlation between the two is due to the common confounder of age.
Cause-effect Reasoning. The causal effect can be seen as the conditional distribution p→E=e(H|E=e)2 on the graph G→E=e above (right), resulting from intervening on E by replacing p(E|A) with a delta distribution δE=e (thereby removing the link from A to E) and leaving the remaining conditional distributions p(H|E,A) and p(A) unaltered. The rules of do-calculus (Pearl, 2000; Pearl et al., 2016) tell us how to compute p→E=e(H|E=e) using observations from G. In this case p→E=e(H|E=e) = ∑_A p(H|E=e, A) p(A)3. Therefore, do-calculus enables us to reason in the intervened graph G→E=e even if our observations are from G. This is the scenario captured by our observational setting outlined above. Such inferences are always possible if the confounders are observed, but in the presence of unobserved confounders, for many DAG structures the only way to compute causal effects is by collecting observations directly from G→E, i.e. by actively intervening on the world to fix the value of the variable E = e and observing the remaining variables. In our interventional setting, outlined above, the agent has access to such interventions.
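To make the contrast concrete, here is a tabular sketch for a hypothetical discrete version of this example: the adjustment weights p(H|E=e, A=a) by p(A=a), whereas plain conditioning weights it by p(A=a|E=e), letting the confounding path through A leak in (cf. footnote 3).

```python
import numpy as np

# Assumed shapes: p_A is (A,), p_E_given_A is (A, E), and
# p_H_given_EA[e, a] is the distribution over H given E=e, A=a.

def intervened(p_A, p_H_given_EA, e):
    """p_{->E=e}(H|E=e) = sum_a p(H|E=e, A=a) p(A=a)."""
    return np.einsum('a,ah->h', p_A, p_H_given_EA[e])

def conditioned(p_A, p_E_given_A, p_H_given_EA, e):
    """p(H|E=e) = sum_a p(H|E=e, A=a) p(A=a|E=e)."""
    p_A_post = p_E_given_A[:, e] * p_A  # Bayes rule, unnormalized
    p_A_post /= p_A_post.sum()
    return np.einsum('a,ah->h', p_A_post, p_H_given_EA[e])
```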
1This approach typically decouples the challenges of causal induction, i.e. of inferring the structure of the causal graph from data, and that of causal reasoning on the induced graph. The formalism we describe here assumes that the structure of the causal graph is known. In our experiments however, our agents concurrently carry out causal induction.
2In the causality literature, this distribution would most often be indicated with p(H|do(E=e)). We prefer to use p→E=e(H|E=e) to highlight that intervening on E results in changing the original distribution p, by structurally altering the causal DAG.
3Notice that conditioning on E=e would instead give p(H|E=e) = ∑_A p(H|E=e, A) p(A|E=e).
Counterfactual Reasoning. Cause-effect reasoning can be used to correctly answer predictive questions of the type "Does exercising improve cardiac health?" by accounting for causal structure and confounding. However, it cannot answer retrospective questions about what would have happened. For example, given an individual i who has died of a heart attack, this method would not be able to answer questions of the type "What would the cardiac health of this individual have been had they done more exercise?". This type of question requires estimating unobserved sources of noise and then reasoning about the effects of this noise under a graph conditioned on a different intervention.
2.2 META-LEARNING
Meta-learning refers to a broad range of approaches in which aspects of the learning algorithm itself are learned from the data. Many individual components of deep learning algorithms have been successfully meta-learned, including the optimizer (Andrychowicz et al., 2016), initial parameter settings (Finn et al., 2017), a metric space (Vinyals et al., 2016), and use of external memory (Santoro et al., 2016).
Following the approach of (Duan et al., 2016; Wang et al., 2016), we parameterize the entire learning algorithm as a recurrent neural network (RNN), and we train the weights of the RNN with model-free reinforcement learning (RL). The RNN is trained on a broad distribution of problems which each require learning. When trained in this way, the RNN is able to implement a learning algorithm capable of efficiently solving novel learning problems in or near the training distribution.
Learning the weights of the RNN by model-free RL can be thought of as the “outer loop” of learning. The outer loop shapes the weights of the RNN into an “inner loop” learning algorithm. This inner loop algorithm plays out in the activation dynamics of the RNN and can continue learning even when the weights of the network are frozen. The inner loop algorithm can also have very different properties from the outer loop algorithm used to train it. For example, in previous work this approach was used to negotiate the exploration-exploitation tradeoff in multi-armed bandits (Duan et al., 2016) and learn algorithms which dynamically adjust their own learning rates (Wang et al., 2016; 2018). In the present work we explore the possibility of obtaining a causally-aware inner-loop learning algorithm. See the Appendix for a more formal approach to meta-learning.
3 TASK SETUP AND AGENT ARCHITECTURE
In the experiments, in each episode the agent interacted with a different causal DAG G. G was drawn randomly from the space of possible DAGs under the constraints given in the next paragraph. Each episode consisted of T steps, and was divided into two phases: information and quiz. The information phase, corresponding to the first T−1 steps, allowed the agent to collect information by interacting with or passively observing samples from G. The agent could potentially use this information to infer the connectivity and weights of G. The quiz phase, corresponding to the final step T , required the agent to exploit the causal knowledge it collected in the information phase, to select the node with the highest value under a random external intervention.
Causal graphs, observations, and actions. We generated all graphs on N = 5 nodes, with edges only in the upper triangular of the adjacency matrix (this guarantees that all the graphs obtained are DAGs), with edge weights wji ∈ {−1, 0, 1} (uniformly sampled), and removed 300 for held-out testing. The remaining 58749 (i.e. 3^{N(N−1)/2} − 300) were used as the training set. Each node's value, Xi ∈ R, was Gaussian-distributed. The values of parentless nodes were drawn from N(µ = 0.0, σ = 0.1). The conditional probability of a node with parents was p(Xi | pa(Xi)) = N(µ = ∑_j wji Xj, σ = 0.1), where pa(Xi) represents the parents of node Xi in G. The values of the 4 observable nodes (the root node was always hidden) were concatenated to create vt and provided to the agent in its observation vector, ot = [vt, mt], where mt is a one-hot vector indicating external intervention during the quiz phase (explained below).4
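A minimal sketch of this generative process (our own illustration; names and interfaces are assumptions):

```python
import numpy as np

N, SIGMA = 5, 0.1

def random_dag(rng=np.random):
    """Upper-triangular adjacency with weights drawn uniformly from
    {-1, 0, +1}, as described above."""
    return np.triu(rng.choice([-1, 0, 1], size=(N, N)), k=1)

def sample(W, intervened=None, value=None, rng=np.random):
    """Ancestral sample of all node values; optionally intervene by
    severing `intervened`'s incoming edges and clamping it to `value`."""
    x = np.zeros(N)
    for i in range(N):  # upper-triangular W => this order is topological
        if i == intervened:
            x[i] = value
        else:
            x[i] = x @ W[:, i] + SIGMA * rng.standard_normal()
    return x
```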
In both phases, on each step, t, the agent’s action, at, was a discrete choice from the range {1...2(N−1)}. Action choices in {1...N − 1} corresponded to information actions, and choices in {N ...2(N − 1)} corresponded to quiz actions.
4While a simple domain provides the most unencumbered test for causal reasoning, we also carried out simulations with more complex causal graphs (graphs with non-linear connections, and larger graphs of size N = 6) and stronger requirements for generalization (holding-out entire equivalence classes of causal graphs from training) to demonstrate the robustness of our approach (see Appendix).
Information phase. In the information phase, an information action, at, caused an intervention on the at-th node, setting its value to Xat = 5. We chose an intervention value outside the likely range of sampled observations, to facilitate learning of the causal graph. The observation from the intervened graph, G→Xat=5, was sampled similarly to G, except the incoming edges to Xat were severed, and its intervened value was used for conditioning its children's values. The node values in G→Xat=5 were distributed as p→Xi=5(X1:N\i | Xi = 5). If a quiz action was chosen during the information phase, it was ignored: the G values were sampled as if no intervention had been made, and the agent was given a penalty of rt = −5 in order to encourage it to take quiz actions only during the quiz phase. After the action was selected, an observation was provided to the agent. The default length of this phase was fixed to T = N = 5, since in the noise-free limit a minimum of T − 1 = 4 interventions are required in general to resolve the causal structure and score perfectly in the quiz phase.
Quiz phase. In the quiz phase, one non-hidden node was selected at random to be intervened on externally, Xj, and its value was set to −5. We chose an intervention value of −5, never previously observed by the agent in that episode, thus disallowing the agent from memorizing the results of interventions in the information phase to perform well on the quiz phase. The agent was informed of this by the observed mT−1 (a one-hot vector which indicated which node would be intervened on) from the final pre-quiz-phase time-step, T − 1. Note that mt was set to a zero-vector for steps t < T − 1. A quiz action, aT, chosen by the agent indicated the node whose value would be given to the agent as a reward. In other words, the agent would receive reward rT = X_{aT−(N−1)}. Again, if a quiz action was chosen during the information phase, the node values were not sampled and the agent was simply given a penalty of rT = −5.

Active vs passive agents. Our agents had to perform two distinct tasks during the information phase: (a) actively choose which nodes to set values on, and (b) infer the causal DAG from its observations. We refer to this setup as the "active" condition. To control for (a), we created the "passive" condition, where the agent's information phase actions are not learned. To provide a benchmark for how well the active agent can perform task (a), we fixed the passive agent's intervention policy to be an exhaustive sweep through all observable nodes. This is close to optimal for this domain – in fact it is the optimal policy for noise-free conditional node values. We also compared the active agent's performance to a baseline agent whose policy is to intervene randomly on the observable nodes in the information phase, in the Appendix.
Two kinds of learning. The "inner loop" of learning (see Section 2.2) occurs within each episode, where the agent is learning from the evidence it gathers during the information phase in order to perform well in the quiz phase. The same agent then enters a new episode, where it has to repeat the task on a different DAG. Test performance is reported on DAGs that the agent has never previously seen, after all the weights of the RNN have been fixed. Hence, the only transfer from training to test (or the "outer loop" of learning) is the ability to discover causal dependencies based on observations in the information phase, and to perform causal inference in the quiz phase.
Agent Architecture and Training
We used a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) (with 96 hidden units) that, at each time-step t, receives a concatenated vector containing [ot,at−1,rt−1] as input, where ot is the observation5, at−1 is the previous action (as a one-hot vector) and rt−1 the reward (as a single real-value)6. The outputs, calculated as linear projections of the LSTM’s hidden state, are a set of policy logits (with dimensionality equal to the number of available actions), plus a scalar baseline. The policy logits are transformed by a softmax function, and then sampled to give a selected action.
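The described network could be sketched as follows (a PyTorch rendering of the stated architecture, not the authors' code; everything beyond the stated 96 hidden units and the two linear heads is a placeholder).

```python
import torch
import torch.nn as nn

class Agent(nn.Module):
    """LSTM over [o_t, a_{t-1}, r_{t-1}] with linear heads producing
    policy logits and a scalar baseline."""
    def __init__(self, obs_dim, n_actions, hidden=96):
        super().__init__()
        self.core = nn.LSTMCell(obs_dim + n_actions + 1, hidden)
        self.policy = nn.Linear(hidden, n_actions)
        self.baseline = nn.Linear(hidden, 1)

    def step(self, obs, prev_action_onehot, prev_reward, state):
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        h, c = self.core(x, state)
        return self.policy(h), self.baseline(h).squeeze(-1), (h, c)
```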
Learning was by asynchronous advantage actor-critic (Mnih et al., 2016). In this framework, the loss function consists of three terms – the policy gradient, the baseline cost and an entropy cost. The baseline cost was weighted by 0.05 relative to the policy gradient cost. The weighting of the entropy cost was annealed over the course of training from 0.05 to 0. Optimization was done by RMSProp with ε = 10−5, momentum = 0.9 and decay = 0.95. Learning rate was annealed from 3×10−6 to 0. For all experiments, after training, the agent was tested with the learning rate set to zero, on a held-out test set.
5'Observation' ot refers to the reinforcement learning term, i.e. the input from the environment to the agent. This is distinct from observations in the causal sense (referred to as observational data), i.e. samples from a causal structure where there is no information about interventions that have been carried out.
6These are both set to zero for the first step in an episode.
4 EXPERIMENTS
Our three experiments (observational, interventional, and counterfactual) differed in the properties of the vt that was observed by the agent during the information phase, and thereby limited the extent of causal reasoning possible within each data setting. Our measure of performance is the reward earned in the quiz phase for held-out DAGs. Choosing a random node in the quiz phase results in a reward of −5/4 = −1.25, since one node (the externally intervened node) always has value −5 and the others have on average 0 value. By learning to simply avoid the externally intervened node, the agent can earn on average 0 reward. Consistently picking the node with the highest value in the quiz phase requires the agent to perform causal reasoning. For each agent, we take the average reward earned across 1200 episodes (300 held-out test DAGs, with 4 possible external interventions). We train 12 copies of each agent and report the average reward earned by these, with error bars showing 95% confidence intervals.
4.1 EXPERIMENT 1: OBSERVATIONAL SETTING
In Experiment 1, the agent could neither intervene to set the value of variables in the environment, nor observe any external interventions. In other words, it only received observations from G, not G→Xj (where Xj is a node that has been intervened on). This limits the extent of causal inference possible. In this experiment, we tested six agents, four of which were learned: "Observational", "Long Observational", "Active Conditional", "Passive Conditional", "Observational MAP Baseline" (not learned) and the "Optimal Associative Baseline" (not learned). We also ran two other standard RL baselines—see the Appendix for details.
Observational Agents: In the information phase, the actions of the agent were ignored7, and the observational agent always received the values of the observable nodes as sampled from the joint distribution associated with G. In addition to the default T=5 episode length, we also trained this agent with 4× longer episode length (Long Observational Agent), to measure performance increase with more observational data.
Conditional Agents: The information phase actions corresponded to observing a world in which the selected nodeXj is equal toXj=5, and the remaining nodes are sampled from the conditional distribution p(X1:N\j|Xj=5), whereX1:N\j indicates the set of all nodes exceptXj. This differs from intervening on the variableXj by setting it to the valueXj=5, since here we take a conditional sample from G rather than from G→Xj=5 (i.e. from p→Xj=5(X1:N\j|Xj=5)), and inference about the corresponding node’s parents is possible. Therefore, this agent still has access to only observational data, as with the observational agents. However, on average it receives more diagnostic information about the relation between the random variables in G, since it can observe samples where a node takes a value far outside the likely range of sampled observations. We run active and passive versions of this agent as described in Section 3
Optimal Associative Baseline: This baseline receives the true joint distribution p(X1:N) implied by the DAG in that episode, therefore it has full knowledge of the correlation structure of the environment8. It can therefore do exact associative reasoning of the form p(Xj|Xi=x), but cannot do any cause-effect reasoning of the form p→Xi=x(Xj|Xi=x). In the quiz phase, this baseline chooses the node that has the maximum value according to the true p(Xj|Xi=x) in that episode, whereXi is the node externally intervened upon, and x=−5. Observational MAP Baseline: This baseline follows the traditional method of separating causal induction and causal inference. We first carry out exact maximum a posteriori (MAP) inference over the space of DAGs in each episode (i.e. causal induction) by selecting the DAG (GMAP) of the 59049 unique possibilities that maximizes the likelihood of the data observed, v1:T , by the Observational Agent in that episode. This is equivalent to maximizing the posterior probability since the prior over graphs is uniform.
RESULTS
We focus on three key questions in this experiment: (i) Can our agents learn to do associative reasoning with observational data?, (ii) Can they learn to do cause-effect reasoning from observational data?, and (iii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
7These agents also did not receive the out-of-phase action penalties during the information phase since their actions are totally ignored.
8Notice that the agent does not know the graphical structure, i.e. it does not know which nodes are parents of which other nodes
For (i), we see that the Observational Agents achieve reward above the random baseline (see the Appendix), and that more observations (Long Observational Agent) lead to better performance (Fig. 2a), indicating that the agent is indeed learning the statistical dependencies between the nodes. We see that the performance of the Passive-Conditional Agent is better than either of the Observational Agents, since the data it observes is very informative about the statistical dependencies in the environment. Finally, we see that the PassiveConditional Agent’s performance is comparable (in fact surpasses as discussed below) the performance of the Optimal Associative Baseline, indicating that it is able to do perfect associative inference.
0.0 0.2 0.4 0.6 0.8 1.0
Passive
Active
Figure 1: Active and Passive Conditional Agents
For (ii), we see the crucial result that the Passive-Conditional Agent’s performance is significantly above the Optimal Associative Baseline, i.e. it performs better than what is possible using only correlations. We compare their performances, split by whether or the node that was intervened on in the quiz phase of the episode has a parent (Fig. 2b). If the intervened nodeXj has no parents, then G=G→Xj , and there is no advantage to being able to do cause-effect reasoning. We see indeed
that the Passive-Conditional agent performs better than the Optimal Associative Baseline only when the intervened node has parents (denoted by hatched bars in Fig. 2b), indicating that this agent is able to carry out some cause-effect reasoning, despite access to only observational data – i.e. it learns some form of do-calculus. We show the quiz phase for an example test DAG in Fig. 2c, seeing that the Optimal Associative Baseline chooses according to the node values predicted by G whereas the Passive-Conditional Agent chooses according the node values predicted by G→Xj . For (iii), we see (Fig. 2) that the Active-Conditional Agent’s performance is only marginally below the performance of the Passive-Conditional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.2 EXPERIMENT 2: INTERVENTIONAL SETTING
In Experiment 2, the agent receives interventional data in the information phase – it can choose to intervene on any observable node, Xj, and observe a sample from the resulting graph G→Xj . As discussed in Section 2.1, access to intervention data permits cause-effect reasoning even in the presence of unobserved confounders, a feat which is in general impossible with access only to observational data. In this experiment, we test four new agents, two of which were learned: “Active Interventional”, “Passive Interventional”, “Interventional MAP Baseline”(not learned), and “Optimal Cause-Effect Baseline” (not learned).
Interventional Agents: The information phase actions correspond to performing an intervention on the selected nodeXj and sampling from G→Xj (see Section 3 for details). We run active and passive versions of this agent as described in Section 3.
Interventional MAP Baseline: This baseline infers a DAG by maximizing the likelihood of the data observed by the Passive Interventional Agent in that episode. In the quiz phase, we predict the values of
each node according to GMAP→Xj whereXj is the node externally intervened upon (i.e. causal inference), and choose the node with the highest value.
Optimal Cause-Effect Baseline: This baseline receives the true DAG, G. In the quiz phase, it chooses the node that has the maximum value according to G→Xj , whereXj is the node externally intervened upon.
RESULTS
0.0 0.2 0.4 0.6 0.8 1.0
Passive
Active
For (i) we see in Fig. 4a that the Passive-Interventional Agent’s performance is comparable to the Optimal Cause-Effect Baseline, indicating that it is able to do close to perfect cause-effect reasoning in this domain.
For (ii) we see in Fig. 4a the crucial result that the Passive-Interventional Agent’s performance is significantly better than the Passive-Conditional Agent. We compare the performances of these two agents, split by whether the node that was intervened on in the quiz phase of the episode had unobserved confounders with other variables in the graph (Fig. 4b). In confounded cases, as described in Section 2.1, cause-effect reasoning is impossible with only observational data. We see that the performance of the Passive-Interventional Agent does not vary significantly with confoundedness, whereas the performance of the Passive-Conditional Agent is significantly lower in the confounded cases. This indicates that the improvement in the performance of the agent that has access to interventional data (as compared to the agents that had access to only observational data) is largely driven by its ability to also do cause-effect reasoning in the presence of confounders. This is highlighted by Fig. 4c, which shows the quiz phase for an example DAG, where the Passive-Conditional agent is unable to resolve the confounder, but the Passive-Interventional agent can.
For (iii), we see in Fig. 3 that the Active-Interventional Agent’s performance is only marginally below the performance of the near optimal Passive-Interventional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.3 EXPERIMENT 3: COUNTERFACTUAL SETTING
In Experiment 3, the agent was again allowed to make interventions as in Experiment 2, but in this case the quiz phase task entailed answering a counterfactual question. We explain here what a counterfactual question in this domain looks like. Consider the conditional distribution p(Xi|pa(Xi))=N ( ∑ jwjiXj,0.1)
as described in Section 3 asXi= ∑
jwjiXj+ where is distributed asN (0.0,0.1), and represents the specific randomness introduced when taking one sample from the DAG. After observing the nodesX1:N
in the DAG in one sample, we can infer this specific randomness i for each nodeXi (i.e. abduction as described in the Appendix) and answer counterfactual questions like “What would the values of the nodes be, hadXj in that particular sample taken on a different value than what we observed?”, for any of the nodesXj. We test 2 new learned agents: “Active Counterfactual” and “Passive Counterfactual”.
Counterfactual Agents: This agent is exactly analogous to the Interventional agent, with the addition that the exogenous noise in the last information phase step t=T−1 (where sayXp=+5), is stored and the same noise is used in the quiz phase step t=T (where sayXf =−5). While the question our agents have had to answer correctly so far in order to maximize their reward in the quiz phase was “Which of the nodes X1:N\j will have the highest value whenXf is set to−5?”, in this setting, we ask “Which of the nodes X1:N\j would have had the highest value in the last step of the information phase, if instead of having Xp=+5, we hadXf =−5?”. We run active and passive versions of this agent as described in Section 3. Optimal Counterfactual Baseline: This baseline receives the true DAG and does exact abduction based on the exogenous noise observed in the penultimate step of the information phase, and combines this correctly with the appropriate interventional inference on the true DAG in the quiz phase.
RESULTS
We focus on two key questions in this experiment: (i) Can our agents learn to do counterfactual reasoning?, (ii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
For (i), we see that the Passive-Counterfactual Agent achieves higher reward than the Passive-Interventional Agent and the Optimal Cause-Effect Baseline. To evaluate whether this difference results from the agent’s use of abduction (see the Appendix for details), we split the test set into two groups, depending on whether or not the decision for which node will have the highest value in the quiz phase is affected by exogenous noise, i.e. whether or not the node with the maximum value in the quiz phase changes if the noise is resampled. We find no significant difference between the Passive-Counterfactual and Passive-Interventional Agents in the cases where the maximum values are distinct; however, the Passive-Counterfactual Agent significantly outperforms the Passive-Interventional Agent in cases where there are degenerate maximum values.

Figure 6: Average reward of the Passive-Counterfactual and Active-Counterfactual Agents.
For (ii), we see in Fig. 6 that the Active-Counterfactual Agent’s performance is only marginally below the performance of the Passive-Counterfactual agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
5 SUMMARY OF RESULTS
We introduced and tested a framework for learning causal reasoning in various data settings—observational, interventional, and counterfactual—using deep meta-RL. Crucially, our approach did not require explicit encoding of formal principles of causal inference. Rather, by optimizing an agent to perform a task that depended on causal structure, the agent learned implicit strategies to use the available data for causal reasoning, including drawing inferences from passive observation, actively intervening, and making counterfactual predictions. Below, we summarize the key results from each of the three experiments.
In Section 4.1 and Fig. 2, we show that the agent learns to perform do-calculus. In Fig. 2(a) we see that, compared to the highest possible reward achievable without causal knowledge, the trained agent received more reward. This observation is corroborated by Fig. 2(b) which shows that performance increased selectively in cases where do-calculus made a prediction distinguishable from the predictions based on correlations. These are situations where the externally intervened node had a parent – meaning that the intervention resulted in a different graph.
In Section 4.2 and Fig. 4, we show that the agent learns to resolve unobserved confounders using interventions (a feat impossible with only observational data). In Fig. 4(a) we see that the agent with access to interventional data performs better than an agent with access to only observational data. Fig. 4(b) shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph. In this section we also compare the agent’s performance to a MAP estimate of the causal structure and find that the agent’s performance matches it, indicating that the agent is indeed doing close to optimal causal inference.
In Section 4.3 and Fig. 5, we show that the agent learns to use counterfactuals. In Fig. 5(a) we see that the agent with additional access to the specific randomness in the test phase performs better than an agent with access to only interventional data. In Fig. 5(b), we find that the increased performance is observed only in cases where the maximum mean value in the graph is degenerate, and optimal choice is affected by the exogenous noise – i.e. where multiple nodes have the same value on average and the specific randomness can be used to distinguish their actual values in that specific case.
6 DISCUSSION AND FUTURE WORK
This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning. This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings. Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. estimating causal effects and answering counterfactual questions), and despite advances in both (Ortega & Stocker, 2015; Bramley et al., 2017; Parida et al., 2018; Sen et al., 2017; Forney et al., 2017; Lattimore et al., 2016), inducing models often requires assumptions that are difficult to fit to complex real-world conditions. By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required. Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task—i.e. to perform active learning. In our experimental domain, our agents’ active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations.
Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive. To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks. Though the results here, which use relatively standard deep RL components, are a first step in this direction, our approach will likely benefit from more advanced architectures (e.g. Espeholt et al., 2018; Hessel et al., 2018; Hester et al., 2017) that allow longer, more complex episodes, as well as models which are more explicitly compositional (e.g. Battaglia et al., 2018; Andreas et al., 2016) or have richer semantics (e.g. Ganin et al., 2018), and that more explicitly leverage symmetries like equivalence classes in the environment.
A ADDITIONAL BASELINES
Figure 7: Distribution of reward earned (percentage of DAGs vs. reward earned) for the Q-total and Q-episode baselines and the Optimal agent.
We can also compare the performance of these agents to two standard model-free RL baselines. The Q-total agent learns a Q-value for each action across all steps of all episodes. The Q-episode agent learns a Q-value for each action conditioned on the input at each time step, [ot,at−1,rt−1], but with no LSTM memory to store previous actions and observations. Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward. The Q-episode agent learns to avoid the arm indicated by mt to be the external intervention (which is assured to be equal to −5) and chooses essentially at random otherwise, giving an average reward of 0.
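As a rough sketch of what such a memoryless per-step baseline amounts to (assuming a simple discretized, tabular treatment of the input, which is our own simplification for illustration):

    import numpy as np
    from collections import defaultdict

    # Q-episode-style baseline: a Q-value per action, conditioned only on the
    # current per-step input [o_t, a_{t-1}, r_{t-1}]. There is no memory tying
    # an episode together, which is the property that limits this baseline.
    n_actions = 8
    q = defaultdict(lambda: np.zeros(n_actions))
    alpha, epsilon = 0.1, 0.1

    def state_key(obs, prev_action, prev_reward):
        return (tuple(np.round(obs, 1)), prev_action, round(prev_reward, 1))

    def act(key, rng):
        if rng.random() < epsilon:                 # epsilon-greedy exploration
            return int(rng.integers(n_actions))
        return int(np.argmax(q[key]))

    def update(key, action, reward):
        # Bandit-style update toward the observed reward for this (input, action) pair.
        q[key][action] += alpha * (reward - q[key][action])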
B FORMAL DESCRIPTION OF META-LEARNING
Consider a distribution D over Markov Decision Processes (MDPs). We train an agent with memory (in our case an RNN-based agent) on this distribution. In each episode, we sample a task m ∼ D. At each step t within an episode, the agent sees an observation ot, executes an action at, and receives a reward rt. Both at−1 and rt−1 are given as additional inputs to the network. Thus, via the recurrence of the network, each action is a function of the entire trajectory Ht = {o0, a0, r0, ..., ot−1, at−1, rt−1, ot} of the episode. Because this function is parameterized by the neural network, its complexity is limited only by the size of the network.
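Schematically, this training setup might be sketched as follows. This is a sketch only: "task_distribution", "env", and the agent internals are placeholders, and the paper’s agents use an LSTM trained with A3C rather than the generic update shown here:

    import numpy as np

    def one_hot(i, n):
        v = np.zeros(n)
        v[i] = 1.0
        return v

    def meta_train(agent, task_distribution, n_episodes, T, n_actions):
        for _ in range(n_episodes):
            env = task_distribution.sample()      # a new MDP (here, a causal DAG) per episode
            h = agent.initial_state()             # recurrent state: the "inner loop" memory
            obs, prev_a, prev_r = env.reset(), 0, 0.0
            trajectory = []
            for t in range(T):
                # The input concatenates [o_t, a_{t-1}, r_{t-1}], so each action can
                # depend on the whole episode history H_t via the recurrence.
                inp = np.concatenate([obs, one_hot(prev_a, n_actions), [prev_r]])
                a, h = agent.step(inp, h)
                obs, r = env.step(a)
                trajectory.append((inp, a, r))
                prev_a, prev_r = a, r
            agent.update(trajectory)              # "outer loop": gradient step on RNN weights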
C ABDUCTION-ACTION-PREDICTION METHOD FOR COUNTERFACTUAL REASONING
Pearl et al. (2016)’s “abduction-action-prediction” method prescribes one way of answering counterfactual queries: estimate the specific unobserved makeup of individual i and transfer it to the counterfactual world. Assume, for example, the following model for G of Section 2.1: E = wAE·A + η, H = wAH·A + wEH·E + ε, where the weights wij represent the known causal effects in G, and ε and η are terms of (e.g.) Gaussian noise that represent the unobserved randomness in the makeup of each individual.9 Suppose that for individual i we observe A = ai, E = ei, H = hi. We can answer the counterfactual question “What if individual i had done more exercise, i.e. E = e′, instead?” by: a) Abduction: estimate the individual’s specific makeup as εi = hi − wAH·ai − wEH·ei; b) Action: set E to more exercise, e′; c) Prediction: predict a new value for cardiac health as h′ = wAH·ai + wEH·e′ + εi.

9These are zero in expectation, so without access to their value for an individual we simply use G: E = wAE·A, H = wAH·A + wEH·E to make causal predictions.
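A worked numerical version of these abduction-action-prediction steps, with illustrative weights and observed values of our own choosing, is sketched below:

    # Abduction-action-prediction on the A -> E, A -> H, E -> H model above.
    # The weights and observations are illustrative, not from the paper.
    w_AE, w_AH, w_EH = 0.5, -1.0, 2.0

    a_i, e_i, h_i = 1.0, 0.8, 0.1        # observed: age, exercise, cardiac health

    # a) Abduction: recover the individual-specific noise terms.
    eta_i = e_i - w_AE * a_i
    eps_i = h_i - w_AH * a_i - w_EH * e_i

    # b) Action: counterfactually set exercise to a higher value e'.
    e_prime = 1.5

    # c) Prediction: recompute cardiac health carrying the same noise forward.
    h_prime = w_AH * a_i + w_EH * e_prime + eps_i
    print(f"factual H = {h_i:.2f}; counterfactual H with E = {e_prime}: {h_prime:.2f}")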
D EXPERIMENT 4: NON-LINEAR CAUSAL GRAPHS
Figure 8: Experiment 4 results: average reward of the Observational, Long-Observational, Passive-Interventional, MAP, and Optimal Cause-Effect agents.
The previous experiments served as a proof of concept on a simple, tractable system, demonstrating that causal induction and inference can be learned and implemented via a meta-learned agent. In this experiment, we generalize some of the results to nonlinear, non-Gaussian causal graphs, which are more typical of real-world systems, to demonstrate that our results do not depend on the linear-Gaussian assumptions.
Here we investigate causal DAGs with a quadratic dependence on the parents by changing the conditional distribution to p(Xi | pa(Xi)) = N((1/Ni) ∑j wji (Xj + Xj²), σ), where Ni is the number of parents of Xi. Here, although each node is normally distributed given its parents, the joint distribution is not multivariate Gaussian due to the non-linearity in how the means are determined. We find that the Long-Observational agent achieves more reward than the Observational agent, indicating that the agent is in fact learning the statistical dependencies between the nodes within an episode. We also find that the Active-Interventional agent is not far behind the performance of the MAP baseline, and achieves reward well above the Long-Observational agent.10 The fact that the MAP baseline gets so close to the Optimal Cause-Effect baseline indicates that the Active agent is choosing close to optimal actions.

10The conditional distribution p(X1:N\j | Xj=5), and therefore the Conditional agents, were non-trivial to calculate for the quadratic case.
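As an illustration of this generative process, a minimal sampling sketch (assuming a hypothetical upper-triangular weight matrix) might be:

    import numpy as np

    # Ancestral sampling with the quadratic parent dependence of Experiment 4.
    # W[j, i] is the weight of edge X_j -> X_i; sigma is the noise scale.
    def sample_quadratic(W, sigma, rng):
        N = W.shape[0]
        x = np.zeros(N)
        for i in range(N):
            parents = np.flatnonzero(W[:, i])
            if parents.size == 0:
                mean = 0.0
            else:
                # Mean is the parent-normalized sum of w_ji * (X_j + X_j^2).
                mean = (W[parents, i] * (x[parents] + x[parents] ** 2)).sum() / parents.size
            x[i] = rng.normal(mean, sigma)
        return x

    rng = np.random.default_rng(0)
    W = np.triu(rng.choice([-1, 0, 1], size=(5, 5)), k=1)
    print(sample_quadratic(W, sigma=0.1, rng=rng))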
E EXPERIMENT 5: LARGER CAUSAL GRAPHS WITH GENERALIZATION TO NEW EQUIVALENCE CLASSES
Figure 9: Experiment 5 results. (a) Average reward of the Observational, Long-Observational, Passive-Conditional, Passive-Interventional, and Passive-Counterfactual agents. (b) Average reward of the Random-Interventional, Active-Interventional, Passive-Interventional, and Optimal Cause-Effect agents.
In this experiment, the entire equivalence class of each test graph was held out from the training set.11 Performance on the test set therefore indicates generalization of the inference procedures learned to previously unseen equivalence classes of causal DAGs. For these experiments, we used graphs with N=6 nodes, because 5-node graphs have too few equivalence classes to partition in this way. All other details were the same as in the main paper.

11The hidden node was guaranteed to be a root node by rejecting all DAGs where the hidden node has parents.
We see in Fig. 9a that the agents learn to generalize well to these held-out examples, and we find the same pattern of behavior noted in the main text, where the rewards earned are ordered such that Observational agent < Passive-Conditional agent < Passive-Interventional agent < Passive-Counterfactual agent. We see additionally in Fig. 9b that the Active-Interventional agent performs on par with the Passive-Interventional agent (which is allowed to see the results of interventions on all nodes) and significantly better than an additional baseline we use here, the Random-Interventional agent, whose information phase policy is to intervene on nodes at random, indicating that the intervention policy learned by the Active agent is good.
F GRAPHICAL MODELS AND BELIEF NETWORKS
Graphical models (Pearl, 1988; Bishop, 2006; Koller & Friedman, 2009; Barber, 2012; Murphy, 2012) are a marriage between graph and probability theory that allows us to graphically represent and assess statistical dependence. In the following sections, we give some basic definitions and describe a method (d-separation) for graphically assessing statistical independence in belief networks.
BASIC DEFINITIONS
Figure 10: (a) A directed acyclic graph over X1, X2, X3, and X4, with links X1→X3, X2→X3, and X3→X4. (b) The cyclic graph obtained by adding a link from X4 to X1.
A directed acyclic graph (DAG) is a directed graph with no directed paths starting and ending at the same node. For example, the directed graph in Fig. 10(a) is acyclic. The addition of a link from X4 to X1 gives rise to a cyclic graph (Fig. 10(b)).
A node Xi with a directed link to Xj is called a parent of Xj. In this case, Xj is called a child of Xi. A node is a collider on a specified path if it has (at least) two parents on that path. Notice that a node can be a collider on a path and a non-collider on another path. For example, in Fig. 10(a) X3 is a collider on the path X1→X3←X2 and a non-collider on the path X2→X3→X4. A node Xi is an ancestor of a node Xj if there exists a directed path from Xi to Xj. In this case, Xj is a descendant of Xi. A graphical model is a graph in which nodes represent random variables and links express statistical relationships between the variables.
A belief network is a directed acyclic graphical model in which each node Xi is associated with the conditional distribution p(Xi | pa(Xi)), where pa(Xi) indicates the parents of Xi. The joint distribution of all nodes in the graph, p(X1:N), is given by the product of all conditional distributions, i.e.

p(X1:N) = ∏_{i=1}^{N} p(Xi | pa(Xi)).
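As an illustration of this factorization, the sketch below implements ancestral sampling and joint log-density evaluation for the DAG of Fig. 10(a), with linear-Gaussian conditionals and weights of our own choosing:

    import math
    import numpy as np

    # Belief network for Fig. 10(a): X1 -> X3 <- X2 and X3 -> X4 (0-indexed below,
    # so node k stands for X_{k+1}). Weights and noise scale are illustrative.
    parents = {0: [], 1: [], 2: [0, 1], 3: [2]}
    weights = {2: {0: 1.0, 1: -1.0}, 3: {2: 1.0}}
    sigma = 0.1

    def mean_of(i, x):
        return sum(weights[i][j] * x[j] for j in parents[i])

    def sample(rng):
        x = np.zeros(4)
        for i in range(4):                 # nodes are listed in topological order
            x[i] = rng.normal(mean_of(i, x), sigma)
        return x

    def log_joint(x):
        # log p(X_{1:N}) = sum_i log p(X_i | pa(X_i)), i.e. the factorization above.
        lp = 0.0
        for i in range(4):
            z = (x[i] - mean_of(i, x)) / sigma
            lp += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
        return lp

    rng = np.random.default_rng(0)
    x = sample(rng)
    print(x, log_joint(x))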
ASSESSING STATISTICAL INDEPENDENCE IN BELIEF NETWORKS
Given the sets of random variables X, Y and Z, X and Y are statistically independent given Z (X ⊥ Y | Z) if all paths from any element of X to any element of Y are closed (or blocked). A path is closed if at least one of the following conditions is satisfied:
(Ia) There is a non-collider on the path which belongs to the conditioning set Z.
(Ib) There is a collider on the path such that neither the collider nor any of its descendants belong to the conditioning set Z. | 1. What are the concerns regarding the task setup and modeling in the paper?
2. How does the reviewer suggest improving the work, particularly in addressing the issues mentioned?
3. What are some potential future directions for research, according to the reviewer? | Review | Review
The paper presents a meta-learning RL framework to train agents that
can learn and do causal reasoning. The paper sets up three tasks for
agents to learn to do associational, interventional, and
counterfactual reasoning. The training/testing is done on all
5-variable graphs. The authors demonstrate how the agent can maximize
its reward, which suggests that the agent might have learned to
infer some causal structure and do reasoning on the data.
Review:
I think Causality is an important area, and seeing how RL can help in
any aspect is something really worth looking into.
However, I have a few qualms about the setting and the way the tasks
are modeled.
1. Why is the task to select the node with the highest "value"
(value=expected value? the sample? what is it?) under some random
external intervention? It feels very indirect.
Why not explicitly model certain useful actions that directly query
the structure, such as:
- selecting nodes that are parents/children of a node
- evaluating p(x | y) or p(x | do(y))?
2. The way you generate test data might introduce biases:
- If you enumerate 3^(n(n-1)/2) DAGs, some of them will have loops. Do you weed them out?
Does it matter?
- How do you sample weights from {-1, 0, 1}? uniform? What happens if
wij = 0? This introduces bias in your training data. This means
your distribution is over DAGs + weights, not just DAGs.
- Your training/test split doesn't take into account certain
equivalence/symmetries that might be present in your training data,
making it hard to rule out whether your agents are in effect
memorizing the training data, especially since the number of test graphs
is so tiny (300, while test could have been in the thousands too):
For example, if your graph just has one causal connection with weight = 1:
X1 -> X2; X3; X4; X5. This is clearly equivalent to X2 -> X1; X3; X4; X5.
Or the structure X1 -> X2 might be present in a larger graph, example with these two components:
X1 -> X2; X3 -> X4 -> X5;
3. Why such a low number of learning steps T (T=5 in the paper) in each episode? No
experimentation over the choice of T, or discussion of this choice, is
given. And it is mentioned in the findings, in several cases, that
the active agent is only merely comparable to the passive agent, while
one would think active would be better. If T were reasonably higher
(not too low, not too high), one would expect to see a difference.
4. Although I have concerns listed above, something about Figure 2(a)
seems to suggest that the agent is learning something. I think if
you had tried to probe into what the agent is actually learning, it
would have clarified many doubts.
However, in Figure 2(c), if the black node is -5, why is the node
below left at -2.5? The weight on the edge is 1 and other parent is
0, so -2.5 seems extremely unlikely, given that the variance is 0.1
(stdev ~ 0.3, so ~8 standard deviations away!). (Similar issue in
Figure 3c)
5. Although the random choice would result in a score of -5/4, I think
it's quite easy and trivial to beat that by just ignoring the node
that's externally intervened on and assigned -5, given it's a small
value. This probably doesn't require the agent to be able to do
"causal reasoning" ... That immediately gives you a lower bound of
0. That might be more appropriate.
If you could give a distribution of the max(mean_i(Xi)) over all
graphs (including weights in your distribution), it could give an
idea of how unlikely it is for the agent to get a high score without
actually learning the causal structure.
Suggestions for improving the work:
- Provide results on a wider range of experiments (e.g. a more even
train-test split, choice of T), or at minimum justify the choices
made. And address the issues above.
- Focus on more intuitive notions that clearly require causal
knowledge, or motivate your objective very clearly to show its
sufficiency.
- Perhaps discuss simpler examples (e.g., 3 node), where it's easy to
enumerate all causal structures and group them into appropriate
equivalence classes.
- Please proof-read and make sure you've defined all terms (there are
a few, such as Xp/Xf in Expt 3, where p/f are not really defined).
- You could show a few experiments with larger N by sampling from the space of all possible
DAGs, instead of enumerating everything.
Of course, it would be great if you could probe the agent to see what it
really learned. But I understand that could be a long shot.
Another open problem is whether this approach can scale to a larger number of
variables; in particular, the learning might be very data hungry.
For (i) we see in Fig. 4a that the Passive-Interventional Agent’s performance is comparable to the Optimal Cause-Effect Baseline, indicating that it is able to do close to perfect cause-effect reasoning in this domain.
For (ii) we see in Fig. 4a the crucial result that the Passive-Interventional Agent’s performance is significantly better than the Passive-Conditional Agent. We compare the performances of these two agents, split by whether the node that was intervened on in the quiz phase of the episode had unobserved confounders with other variables in the graph (Fig. 4b). In confounded cases, as described in Section 2.1, cause-effect reasoning is impossible with only observational data. We see that the performance of the Passive-Interventional Agent does not vary significantly with confoundedness, whereas the performance of the Passive-Conditional Agent is significantly lower in the confounded cases. This indicates that the improvement in the performance of the agent that has access to interventional data (as compared to the agents that had access to only observational data) is largely driven by its ability to also do cause-effect reasoning in the presence of confounders. This is highlighted by Fig. 4c, which shows the quiz phase for an example DAG, where the Passive-Conditional agent is unable to resolve the confounder, but the Passive-Interventional agent can.
For (iii), we see in Fig. 3 that the Active-Interventional Agent’s performance is only marginally below the performance of the near optimal Passive-Interventional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
4.3 EXPERIMENT 3: COUNTERFACTUAL SETTING
In Experiment 3, the agent was again allowed to make interventions as in Experiment 2, but in this case the quiz phase task entailed answering a counterfactual question. We explain here what a counterfactual question in this domain looks like. Consider the conditional distribution p(Xi|pa(Xi))=N ( ∑ jwjiXj,0.1)
as described in Section 3 asXi= ∑
jwjiXj+ where is distributed asN (0.0,0.1), and represents the specific randomness introduced when taking one sample from the DAG. After observing the nodesX1:N
in the DAG in one sample, we can infer this specific randomness i for each nodeXi (i.e. abduction as described in the Appendix) and answer counterfactual questions like “What would the values of the nodes be, hadXj in that particular sample taken on a different value than what we observed?”, for any of the nodesXj. We test 2 new learned agents: “Active Counterfactual” and “Passive Counterfactual”.
Counterfactual Agents: This agent is exactly analogous to the Interventional agent, with the addition that the exogenous noise in the last information phase step t=T−1 (where sayXp=+5), is stored and the same noise is used in the quiz phase step t=T (where sayXf =−5). While the question our agents have had to answer correctly so far in order to maximize their reward in the quiz phase was “Which of the nodes X1:N\j will have the highest value whenXf is set to−5?”, in this setting, we ask “Which of the nodes X1:N\j would have had the highest value in the last step of the information phase, if instead of having Xp=+5, we hadXf =−5?”. We run active and passive versions of this agent as described in Section 3. Optimal Counterfactual Baseline: This baseline receives the true DAG and does exact abduction based on the exogenous noise observed in the penultimate step of the information phase, and combines this correctly with the appropriate interventional inference on the true DAG in the quiz phase.
RESULTS
We focus on two key questions in this experiment: (i) Can our agents learn to do counterfactual reasoning?, (ii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?
For (i), we see that the Passive-Counterfactual Agent achieves higher reward than the Passive-Interventional Agent and the Optimal Cause-Effect Baseline. To evaluate whether this difference results from the agent’s use of abduction (see the Appendix for details), we split the test set into two groups, depending on whether or not the decision for which node will have the highest value in the quiz phase is affected by exogenous noise, i.e. whether or not the node with the maximum value in the quiz phase changes if the noise is
0.0 0.2 0.4 0.6 0.8 1.0
Passive
Active
and Passive-Interventional Agents in the cases where the maximum values are distinct, however the Passive-Counterfactual Agent significantly outperforms the Passive-Interventional Agent in cases where there are degenerate maximum values.
For (ii), we see in Fig. 6 that the Active-Counterfactual Agent’s performance is only marginally below the performance of the Passive-Counterfactual agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance.
5 SUMMARY OF RESULTS
We introduced and tested a framework for learning causal reasoning in various data settings—observational, interventional, and counterfactual—using deep meta-RL. Crucially, our approach did not require explicit encoding of formal principles of causal inference. Rather, by optimizing an agent to perform a task that depended on causal structure, the agent learned implicit strategies to use the available data for causal reasoning, including drawing inferences from passive observation, actively intervening, and making counterfactual predictions. Below, we summarize the keys results from each of the three experiments.
In Section 4.1 and Fig. 2, we show that the agent learns to perform do-calculus. In Fig. 2(a) we see that, compared to the highest possible reward achievable without causal knowledge, the trained agent received more reward. This observation is corroborated by Fig. 2(b) which shows that performance increased selectively in cases where do-calculus made a prediction distinguishable from the predictions based on correlations. These are situations where the externally intervened node had a parent – meaning that the intervention resulted in a different graph.
In Section 4.2 and Fig. 4, we show that the agent learns to resolve unobserved confounders using interventions (a feat impossible with only observational data). In Fig. 4(a) we see that the agent with access to interventional data performs better than an agent with access to only observational data. Fig. 4(b) shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph. In this section we also compare the agent’s performance to a MAP estimate of the causal structure and find that the agent’s performance matches it, indicating that the agent is indeed doing close to optimal causal inference.
In Section 4.3 and Fig. 5, we show that the agent learns to use counterfactuals. In Fig. 5(a) we see that the agent with additional access to the specific randomness in the test phase performs better than an agent with access to only interventional data. In Fig. 5(b), we find that the increased performance is observed only in cases where the maximum mean value in the graph is degenerate, and optimal choice is affected by the exogenous noise – i.e. where multiple nodes have the same value on average and the specific randomness can be used to distinguish their actual values in that specific case.
6 DISCUSSION AND FUTURE WORK
This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning. This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings. Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. estimating causal effects and answering counterfactual questions), and despite advances in both (Ortega & Stocker, 2015; Bramley et al., 2017; Parida et al., 2018; Sen et al., 2017; Forney et al., 2017; Lattimore et al., 2016), inducing models often requires assumptions that are difficult to fit to complex real-world conditions. By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required. Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task—i.e. to perform active learning. In our experimental domain, our agents’ active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations.
Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive. To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks. Though the results here are a first step in this direction which use relatively standard deep RL components, our approach will likely benefit from more advanced architectures (e.g. Espeholt et al., 2018; Hessel et al., 2018; Hester et al., 2017) that allow longer more complex episodes, as well as models which are more explicitly compositional (e.g. Battaglia et al., 2018; Andreas et al., 2016) or have richer semantics (e.g. Ganin et al., 2018), that more explicitly leverage symmetries like equivalance classes in the environment.
A ADDITIONAL BASELINES
6 4 2 0 2 4 6 Reward Earned 0.0
0.1
0.2
0.3
0.4
0.5 0.6 P er ce nt ag e of D A G s
Q-total Q-episode Optimal
Figure 7: Reward distribution
We can also compare the performance of these agents to two standard model-free RL baselines. The Q-total agent learns a Q-value for each action across all steps for all the episodes. The Q-episode agent learns a Q-value for each action conditioned on the input at each time step [ot,at−1,rt−1], but with no LSTM memory to store previous actions and observations. Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward. The Q-episode agent essentially makes sure to not choose the arm that is indicated bymt to be the external intervention (which is assured to be equal to −5), and essentially chooses randomly otherwise, giving an average reward of 0.
B FORMAL DESCRIPTION OF META-LEARNING
Consider a distribution D over Markov Decision Processes (MDPs). We train an agent with memory (in our case an RNN-based agent) on this distribution. In each episode, we sample a task m ∼ D. At each step t within an episode, the agent sees an observation ot, executes an action at, and receives a reward rt. Both at−1 and rt−1 are given as additional inputs to the network. Thus, via the recurrence of the network, each action is a function of the entire trajectory Ht = {o0, a0, r0, ..., ot−1, at−1, rt−1, ot} of the episode. Because this function is parameterized by the neural network, its complexity is limited only by the size of the network.
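A minimal, self-contained sketch of this interaction loop is given below, with a toy bandit standing in for the sampled MDP and a random recurrent agent standing in for the LSTM agent; both stand-ins are illustrative assumptions rather than the paper's actual components.

import numpy as np

rng = np.random.default_rng(0)

class Bandit:
    """Toy task: arm rewards are resampled at every episode."""
    def __init__(self):
        self.means = rng.normal(size=5)
    def reset(self):
        return np.zeros(5)                       # dummy observation
    def step(self, a):
        return np.zeros(5), self.means[a] + rng.normal()

class RandomRecurrentAgent:
    """Stand-in for the LSTM agent; carries a (here unused) recurrent state h."""
    def initial_state(self):
        return np.zeros(8)
    def step(self, x, h):
        return int(rng.integers(5)), h

def run_episode(agent, T=10):
    mdp = Bandit()                               # sample a task m ~ D
    h = agent.initial_state()
    o, a_prev, r_prev, total = mdp.reset(), 0, 0.0, 0.0
    for t in range(T):
        # a_t depends on the whole trajectory H_t via (o, a_prev, r_prev, h)
        x = np.concatenate([o, [a_prev, r_prev]])
        a, h = agent.step(x, h)
        o, r = mdp.step(a)
        a_prev, r_prev, total = a, r, total + r
    return total

print(run_episode(RandomRecurrentAgent()))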
C ABDUCTION-ACTION-PREDICTION METHOD FOR COUNTERFACTUAL REASONING
Pearl et al. (2016)’s “abduction-action-prediction” method prescribes one way of answering counterfactual queries, by estimating the specific unobserved makeup of individual i and transferring it to the counterfactual world. Assume, for example, the following model for G of Section 2.1: E = wAE·A + η, H = wAH·A + wEH·E + ε, where the weights wij represent the known causal effects in G, and ε and η are terms of (e.g.) Gaussian noise that represent the unobserved randomness in the makeup of each individual [9]. Suppose that for individual i we observe: A = ai, E = ei, H = hi. We can answer the counterfactual question of “What if individual i had done more exercise, i.e. E = e′, instead?” by: a) Abduction: estimate the individual’s specific makeup with εi = hi − wAH·ai − wEH·ei, b) Action: set E to more exercise e′, c) Prediction: predict a new value for cardiac health as h′ = wAH·ai + wEH·e′ + εi.

[9] These are zero in expectation, so without access to their value for an individual we simply use G: E = wAE·A, H = wAH·A + wEH·E to make causal predictions.
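A worked instance of these three steps for the linear model above is sketched here; the weight and observation values are illustrative assumptions, not numbers from the paper.

w_AE, w_AH, w_EH = 0.5, 0.3, 0.8       # assumed causal effects in G

a_i, e_i, h_i = 1.0, 0.9, 1.2          # observed A, E, H for individual i

# a) Abduction: recover the individual's specific noise term
eps_i = h_i - w_AH * a_i - w_EH * e_i  # 0.18

# b) Action: set E to a counterfactual value e'
e_prime = 2.0

# c) Prediction: new cardiac health under the counterfactual exercise level
h_prime = w_AH * a_i + w_EH * e_prime + eps_i
print(h_prime)                         # 2.08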
D EXPERIMENT 4: NON-LINEAR CAUSAL GRAPHS
Figure 8: Experiment 4 results, comparing the average rewards of the Observational, Long-Observational, Passive-Interventional, MAP, and Optimal Cause-Effect baselines.
The previous experiments served as a proof of concept on a simple tractable system, demonstrating that causal induction and inference can be learned and implemented via a meta-learned agent. In this experiment, we generalize some of the results to nonlinear, non-Gaussian causal graphs, which are more typical of real-world causal graphs, and demonstrate that our results hold without loss of generality on such systems.
Here we investigate causal DAGs with a quadratic dependence on the parents by changing the conditional distribution to p(Xi | pa(Xi)) = N((1/Ni) Σj wji (Xj + Xj²), σ). Although each node is normally distributed given its parents, the joint distribution is not multivariate Gaussian due to the non-linearity in how the means are determined. We find that the Long-Observational agent achieves more reward than the Observational agent, indicating that the agent is in fact learning the statistical dependencies between the nodes within an episode. We also find that the Active-Interventional agent is not far behind the performance of the MAP baseline and achieves reward well above the Long-Observational agent [10]. The fact that the MAP baseline gets so close to the Optimal Cause-Effect baseline indicates that the Active agent is choosing close to optimal actions.

[10] The conditional distribution p(X1:N\j | Xj = 5), and therefore the Conditional agents, were non-trivial to calculate for the quadratic case.
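A minimal sketch of ancestral sampling from this quadratic conditional is given below, assuming a hypothetical upper-triangular weight matrix W so that the node ordering is already topological; the weights and σ are illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, sigma = 5, 0.1
W = np.triu(rng.choice([-1.0, 0.0, 1.0], size=(N, N)), k=1)  # W[j, i]: edge j -> i

x = np.zeros(N)
for i in range(N):
    parents = np.nonzero(W[:, i])[0]
    if len(parents) > 0:
        # mean = (1/Ni) * sum_j w_ji * (X_j + X_j^2) over the Ni parents
        mean = np.mean(W[parents, i] * (x[parents] + x[parents] ** 2))
    else:
        mean = 0.0
    x[i] = rng.normal(mean, sigma)
print(x)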
E EXPERIMENT 5: LARGER CAUSAL GRAPHS WITH GENERALIZATION TO NEW EQUIVALENCE CLASSES
Figure 9: Average reward (a) for the Long-Observational, Observational, Passive-Conditional, Passive-Interventional, and Passive-Counterfactual agents, and (b) for the Random-Interventional, Active-Interventional, Passive-Interventional, and Optimal Cause-Effect agents.
In this experiment, the entire equivalence class of each test graph was held out from the training set [11]. Performance on the test set therefore indicates generalization of the learned inference procedures to previously unseen equivalence classes of causal DAGs. For these experiments, we used graphs with N = 6 nodes, because 5-node graphs have too few equivalence classes to partition in this way. All other details were the same as in the main paper.
We see in Fig. 9a that the agents learn to generalize well to these held-out examples, and we find the same pattern of behavior noted in the main text, where the rewards earned are ordered such that Observational agent < Passive-Conditional agent < Passive-Interventional agent < Passive-Counterfactual agent. We see additionally in Fig. 9b that the Active-Interventional agent performs on par with the Passive-Interventional agent (which is allowed to see the results of interventions on all nodes) and significantly better than an additional baseline used here, the Random-Interventional agent, whose information-phase policy is to intervene on nodes at random, indicating that the intervention policy learned by the Active agent is good.

[11] The hidden node was guaranteed to be a root node by rejecting all DAGs where the hidden node has parents.
F GRAPHICAL MODELS AND BELIEF NETWORKS
Graphical models (Pearl, 1988; Bishop, 2006; Koller & Friedman, 2009; Barber, 2012; Murphy, 2012) are a marriage between graph and probability theory that allows one to graphically represent and assess statistical dependence. In the following sections, we give some basic definitions and describe a method (d-separation) for graphically assessing statistical independence in belief networks.
BASIC DEFINITIONS
Figure 10: (a) a directed acyclic graph over the nodes X1, X2, X3, X4; (b) the cyclic graph obtained by adding a link from X4 to X1.
A directed acyclic graph (DAG) is a directed graph with no directed paths starting and ending at the same node. For example, the directed graph in Fig. 10(a) is acyclic. The addition of a link from X4 to X1 gives rise to a cyclic graph (Fig. 10(b)).
A node Xi with a directed link to Xj is called a parent of Xj. In this case, Xj is called a child of Xi. A node is a collider on a specified path if it has (at least) two parents on that path. Notice that a node can be a collider on one path and a non-collider on another path. For example, in Fig. 10(a), X3 is a collider on the path X1→X3←X2 and a non-collider on the path X2→X3→X4. A node Xi is an ancestor of a node Xj if there exists a directed path from Xi to Xj. In this case, Xj is a descendant of Xi. A graphical model is a graph in which nodes represent random variables and links express statistical relationships between the variables.
A belief network is a directed acyclic graphical model in which each node Xi is associated with the conditional distribution p(Xi | pa(Xi)), where pa(Xi) indicates the parents of Xi. The joint distribution of all nodes in the graph, p(X1:N), is given by the product of all conditional distributions, i.e.

p(X1:N) = ∏_{i=1}^{N} p(Xi | pa(Xi)).
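As a concrete instance of this factorization for the belief network of Fig. 10(a) (X1→X3←X2, X3→X4), the sketch below computes the joint probability of a full binary assignment; the conditional probability values are illustrative assumptions.

# p(X3=1 | X1, X2) and p(X4=1 | X3), with assumed values
p_x1 = {0: 0.6, 1: 0.4}
p_x2 = {0: 0.7, 1: 0.3}
p_x3_given = {(x1, x2): 0.9 if x1 and x2 else 0.2 for x1 in (0, 1) for x2 in (0, 1)}
p_x4_given = {x3: 0.8 if x3 else 0.1 for x3 in (0, 1)}

def joint(x1, x2, x3, x4):
    # product of the per-node conditionals, as in the factorization above
    p3 = p_x3_given[(x1, x2)] if x3 else 1 - p_x3_given[(x1, x2)]
    p4 = p_x4_given[x3] if x4 else 1 - p_x4_given[x3]
    return p_x1[x1] * p_x2[x2] * p3 * p4

# sanity check: probabilities over all 16 assignments sum to 1
print(sum(joint(a, b, c, d) for a in (0, 1) for b in (0, 1)
          for c in (0, 1) for d in (0, 1)))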
ASSESSING STATISTICAL INDEPENDENCE IN BELIEF NETWORKS
Given the sets of random variables X, Y and Z, X and Y are statistically independent given Z (X ⊥ Y | Z) if all paths from any element of X to any element of Y are closed (or blocked). A path is closed if at least one of the following conditions is satisfied:
(Ia) There is a non-collider on the path which belongs to the conditioning set Z.
(Ib) There is a collider on the path such that neither the collider nor any of its descendants belong to the conditioning set Z. | 1. What is the focus of the paper in terms of reinforcement learning?
2. What are the strengths of the proposed approach, particularly in the context of causal structure discovery?
3. What are the weaknesses of the paper, especially regarding the experimental setup and the choice of algorithm?
4. How does the reviewer assess the potential applicability of the proposed method to more realistic environments?
5. What suggestions does the reviewer provide for improving the paper's content and research direction? | Review | Review
This submission is a great ablation study on the capabilities of modern reinforcement learning to discover the causal structure of a synthetic environment. The study separates the cases where the agents can only observe from those where they can also act, showing the expected gains of active intervention.
The experiments are so far synthetic, but it would be really interesting to see how the lessons learned extend to more realistic environments. It would also be very nice to have a sequence of increasingly complex synthetic environments where causal inference is the task of interest, such that we can compare the performance of different RL algorithms in this task (the authors only used one).
I would change the title to "Causal Reasoning from Reinforcement Learning", since "meta-learning" is an over-loaded term and I do not clearly see its prevalence on this submission. |
ICLR | Title
On Storage Neural Network Augmented Approximate Nearest Neighbor Search
Abstract
Large-scale approximate nearest neighbor search (ANN) has been gaining attention along with the latest machine learning research employing ANNs. If the data is too large to fit in memory, it is necessary to search for the most similar vectors to a given query vector from the data stored in storage devices, not from that in memory. Storage devices such as NAND flash memory have larger capacity than memory devices such as DRAM, but they also have larger latency to read data. Therefore, ANN methods for storage require completely different approaches from conventional in-memory ANN methods. Since the approximation that the time required for search is determined only by the amount of data fetched from storage holds under reasonable assumptions, our goal is to minimize the amount of fetched data while maximizing recall. For partitioning-based ANNs, vectors are partitioned into clusters in the index building phase. In the search phase, some of the clusters are chosen, the vectors in the chosen clusters are fetched from storage, and the nearest vector is retrieved from the fetched vectors. Thus, the key point is to accurately select the clusters containing the ground truth nearest neighbor vectors. We accomplish this by proposing a method to predict the correct clusters by means of a neural network that is gradually refined by alternating supervised learning and duplicated cluster assignment. Compared to state-of-the-art SPANN and an exhaustive method using k-means clustering and linear search, the proposed method achieves 90% recall on SIFT1M with 80% and 58% less data fetched from storage, respectively.
1 INTRODUCTION
Large-scale Approximate Nearest Neighbor searches (ANNs) for high-dimensional data are receiving growing attention because of their appearance in emerging directions of deep learning research. For example, in natural language processing, methods leveraging relevant document retrieval by similar dense vector search have significantly improved the scores of open-domain question-answering tasks (Karpukhin et al., 2020). Also for language modeling tasks, Borgeaud et al. (2021) showed a model augmented by retrieval from 2 trillion tokens performs as well as models 25 times larger. In computer vision, Nakata et al. (2022) showed that image classification using ANNs has the potential to alleviate catastrophic forgetting and improve accuracy in continual learning scenarios. In reinforcement learning, exploiting past experiences stored in external memory for an agent to make better decisions has been explored (Blundell et al., 2016; Pritzel et al., 2017). Recently, Goyal et al. (2022) and Humphreys et al. (2022) attempted to scale up the capacity of memory with the help of ANN and showed promising results.
ANNs are algorithms to find the one or k key vectors that are the nearest to a given query vector among a large number of key vectors. Strict search is not required, but higher recall with lower latency is demanded. In order to achieve this, an index is generally built by data-dependent preprocessing. Thus, an ANN method consists of an index building phase and a search phase. As the number of key vectors increases, storing all of them in memory (e.g. DRAM) becomes very expensive, and one is forced to store the vectors in storage devices such as NAND flash memory. In general, a storage device has much larger capacity per unit cost, but its latency is also much larger than memory. When all the key vectors are stored in memory, as seen in most papers on ANN, it is effective to reduce the number of distance calculations between vectors by employing graph-based (Malkov & Yashunin, 2018; Fu et al., 2019) or partitioning-based methods (Dong et al., 2019) and/or to
reduce the time for each distance calculation by employing quantization techniques such as Product Quantization (Jégou et al., 2011a). On the other hand, when the key vectors are stored in storage rather than memory, the latency for fetching data from storage becomes the dominant contributor to the total search latency. Therefore, the ANN method for the latter case (ANN method for storage) needs a completely different approach from the ANN method for the former case (all-in-memory ANN method). In this paper, we identify the most fundamental challenges of the ANN method for storage, and explore ways to solve them.
Since the latency for fetching data is approximately proportional to the amount of fetched data, it should be a good strategy to reduce the number of vectors to be fetched during search as much as possible, while achieving high recall. Although SPANN (Chen et al., 2021), which is the state-of-the-art ANN method for storage, is also designed with the same strategy, our investigation reveals that the characteristics of its index are still suboptimal from the viewpoint of the number of fetched vectors under a given recall. Moreover, perhaps surprisingly, an exhaustive method combining simple k-means clustering and linear search can perform better than SPANN on some datasets in terms of this metric.
Another consideration worth noting is the exploitation of the efficient and high-throughput computation of GPUs. It is reasonable to assume that an ANN algorithm used in a deep learning application runs on the same system as the deep learning algorithm, and most deep learning algorithms are designed to be run on systems equipped with GPUs. Therefore, the ANN algorithm will also be run on a system with a GPU.
In partitioning-based ANN methods, key vectors are partitioned into clusters, which are referred to as posting lists in the SPANN paper, usually based on their proximity in the index building phase. First, in the search phase, some clusters that would contain the desired vector with high probability are chosen according to the distance between the query vector and the representative vector of each cluster. Second, the vectors in the chosen clusters are fetched from storage. Since the page size of storage devices is relatively large (e.g. 4KB), it is efficient to make a page contain vectors of the same cluster, as other ANN methods for storage (Chen et al., 2021; Jayaram Subramanya et al., 2019) also adopt. Third, by computing distances between the query and the fetched vectors, the k closest vectors are identified and output. In order to achieve high recall with low latency, we need to increase the accuracy of choosing the correct cluster containing the ground truth nearest vector at the first step of the search phase.
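The three steps can be summarized in the following sketch, where fetch_cluster is a hypothetical stand-in for a storage read; the toy data at the bottom keeps the example self-contained.

import numpy as np

def search(query, centroids, fetch_cluster, n_probe=2, k=1):
    # 1) choose the clusters whose representative vectors are closest to the query
    d = np.linalg.norm(centroids - query, axis=1)
    chosen = np.argsort(d)[:n_probe]
    # 2) fetch the vectors of the chosen clusters from storage
    cand_ids, cand_vecs = [], []
    for c in chosen:
        ids, vecs = fetch_cluster(c)            # one storage read per cluster
        cand_ids += list(ids)
        cand_vecs += list(vecs)
    cand_vecs = np.asarray(cand_vecs)
    # 3) rank the fetched vectors by distance to the query
    order = np.argsort(np.linalg.norm(cand_vecs - query, axis=1))[:k]
    return [cand_ids[i] for i in order]

# toy usage with an in-memory dict standing in for storage
rng = np.random.default_rng(0)
keys = rng.normal(size=(1000, 8))
centroids = keys[rng.choice(1000, 16, replace=False)]
assign = np.argmin(np.linalg.norm(keys[:, None] - centroids[None], axis=2), axis=1)
clusters = {c: (np.where(assign == c)[0], keys[assign == c]) for c in range(16)}
print(search(keys[0], centroids, lambda c: clusters[c]))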
If clustering is made by the k-means algorithm and a query vector is picked from the existing key vectors, the cluster whose centroid vector is the closest to the query always contains the nearest neighbor key vector by the definition of the k-means algorithm. However, when a query is not exactly the same as any one of the key vectors, which is the common case for ANN, the cluster whose representative vector is the closest to the query often does not contain the nearest neighbor vector, and this limits recall. To the best of our knowledge, this problem has not been explicitly addressed in the literature. Our intuition for tackling this problem is that the border lines that determine which cluster contains the nearest neighbor vector of a query are different from, and more complicated than, the border lines that determine the assignment of clusters to key vectors. We employ neural networks trained on given clustered key vectors to predict the correct cluster among the clusters defined by the complicated border lines. Assigning multiple clusters to a key vector is a conventional method to improve the accuracy of choosing the correct cluster. Combining this duplication technique with our method could provide the additional effect of relaxing demands on the neural network by simplifying border lines. We demonstrate how our method works with a visualization using 2-D toy data, and then empirically show that it is effective for realistic data as well. Our contributions include:
• We clarify that we need to reduce the amount of data fetched from storage when key vectors sit in storage devices, because the latency for fetching data is the dominant part of the search latency.
• We first explicitly point out and address the problem that, in partitioning-based ANN methods, the nearest cluster often does not contain the nearest neighbor vector of a query vector, and this limits the recall-latency performance.
• We propose a new ANN method combining cluster prediction with neural networks and duplicated cluster assignment, and show empirically that the proposed method improves the performance on two realistic million-scale datasets.
2 RELATED WORKS
ANN for storage. DiskANN (Jayaram Subramanya et al., 2019) is a graph-based ANN method for storage. The information of connections defining the graph structure and the full-precision vectors are stored on storage, and the vectors compressed by Product Quantization (Jégou et al., 2011a) are stored in memory. The algorithm traverses the graph by reading only the connection information on the path from storage and computes distances between a query and the compressed vectors in memory. Although they compensate for the deterioration of recall due to lossy compression by reranking with the full-precision vector data, the recall-latency performance is inferior to SPANN (Chen et al., 2021). SPANN is another method dedicated to ANN for storage and exhibits state-of-the-art performance. It employs a partitioning-based approach. By increasing the number of clusters as much as possible, it manages to reduce the number of vectors fetched from storage under a given recall. In order to reduce the latency to choose clusters during search even when the number of clusters is large, they employ the SPTAG algorithm, which combines tree-based and graph-based ANNs. They also proposed an efficient duplication method aiming at increasing the probability that a chosen cluster contains the ground truth key vector. However, our investigation in Section 3.2.2 shows that its performance can be worse than a naive exhaustive method on some datasets.
ANN with GPU. FAISS (Johnson et al., 2019) supports many ANN algorithms accelerated by GPUs' massively parallel computing. On-storage search is also discussed on their project page. SONG (Zhao et al., 2020) optimized the graph-based ANN algorithm for GPU. They modified the algorithm so that distance computations can be parallelized as much as possible, and showed significant speedup. However, they assume only all-in-memory scenarios.
ANN with neural networks. DSI (Tay et al., 2022) predicts the indices of the nearest neighbor key vectors directly from the query vectors with a neural network. We explore to use neural networks to predict the clusters containing the nearest neighbor vector rather than the vector indices themselves. DSI is also targeted for all-in-memory ANN. BLISS (Gupta et al., 2022) and NeuralLSH (Dong et al., 2019) are methods to improve the partitioning rule using neural networks. They apply the same rule to a query for choosing clusters as well. As depicted in Section 4.1, when the rule for partitioning keys is employed to choose clusters, often the chosen clusters don’t contain the ground truth key vector. Our method where a neural network is trained to predict the correct cluster for a given query is orthogonal and can be combined with these methods.
3 PRELIMINARIES
3.1 SYSTEM ENVIRONMENT
In this paper, we assume that the system on which our ANN algorithm runs has GPUs and storage devices in addition to CPUs and memory. The GPU provides high-throughput computing through massively parallel processing. The storage can store a larger amount of data at lower cost, but has larger read latency than memory devices. Data that are commonly used for all queries, e.g., all the representative vectors of clusters, are loaded in advance into memory, from which the CPU or GPU can read data with low latency, and remain there during the search. On the other hand, all the key vectors are stored in the storage devices, and for simplicity, we assume that data fetched from storage for computations for one query is not cached in memory to be reused for computations for another query.
3.2 METRICS
3.2.1 THE NUMBER OF FETCHED VECTORS AS A PROXY METRICS OF LATENCY
Our goal is to minimize the average search time per query for nearest neighbor search, which we refer to as mean latency, and simultaneously to maximize the recall. Without loss of generality, the mean latency T in the systems described in the previous subsection is expressed by the following equation,
T = Ta + Tb + Tc,
where Ta is the latency for computations using data that always sit in memory, Tb is the latency required for fetching data from storage, and Tc is the latency for computations using data fetched from storage for each query. For example, in a partitioning-based ANN method such as SPANN, Ta is the latency for the process to determine the clusters (called posting lists in SPANN) to be fetched from storage, Tb is the latency for fetching the vectors in the chosen clusters from storage, and Tc is the latency for the computations to find the nearest neighbor vectors in the fetched vectors. In this paper, for simplicity, assuming that Ta ≪ Tb and Tc ≪ Tb, we employ the approximation T ≈ Tb. Then, since Tb is roughly proportional to the number of fetched vectors, the number of fetched vectors is an effective metric to evaluate the mean latency.
The above assumptions are reasonable in a realistic setting. For Tc, the computing performances of CPUs equipped with vector arithmetic units and GPUs capable of massively parallel operations range from several hundred GFLOPS to more than 1 TFLOPS. On the other hand, the read bandwidth of storage devices is at best a few GB/s even when high-speed NVMe is used. This means the data fetched in Tb can be processed in less than 1/100 of Tb. Note that if most of the process for Tc is executed in parallel with the process for Tb, for example, by performing distance calculation in the background of asynchronous storage access, the effective Tc becomes almost zero. For Ta, when an exhaustive linear search is used to choose the clusters, i.e., calculating the distance between the query and the representative vectors of all clusters in order to find the closest clusters, a 10-TFLOPS GPU can process 10 million representative vectors of 100 dimensions each within a much shorter time than Tb = 1 ms. Also in the SPANN case without GPUs, since a fast algorithm that combines a tree-based method and a graph-based method is applied to choose the clusters, Ta is quite short even when the number of posting lists is as large as a few hundred million. As a typical example, Table 1 shows the measured Ta, Tb, and Tc on the SIFT1M dataset.
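As a back-of-envelope check of the approximation T ≈ Tb, the sketch below estimates Tb and Tc for fetching 10,000 float32 vectors of dimension 128, assuming 4 GB/s storage read bandwidth and a 10-TFLOPS GPU; these hardware numbers are illustrative assumptions, not the measurements of Table 1.

n_fetched, dim, bytes_per = 10_000, 128, 4
data = n_fetched * dim * bytes_per                 # bytes read from storage
Tb = data / 4e9                                    # seconds at 4 GB/s
Tc = (2 * n_fetched * dim) / 10e12                 # distance FLOPs at 10 TFLOPS
print(f"Tb = {Tb*1e3:.2f} ms, Tc = {Tc*1e6:.2f} us")
# Tb dominates Tc by three to four orders of magnitude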
3.2.2 MEMORY USAGE
Since memory usage greatly affects the latency of in-memory ANNs, VQ (Vector-Query), which is a measure of throughput normalized by the memory usage, is introduced in GRIP (Zhang & He, 2019) to compare algorithms with different memory usage as fairly as possible, and is also utilized in SPANN (Chen et al., 2021). SPANN claims superior capacity in large vector search scenarios because its VQ value is greater than that of other algorithms. Here, we consider whether VQ is a really fair metric for comparing methods with different memory usage. In a partitioning-based ANN method for storage, memory capacity limits the number of clusters since the representative vectors of all the clusters must be kept in memory during search. We therefore investigate how the number of clusters affects the recall-latency and recall-VQ curves. Figure 1(a) shows the dependency of recall versus the number of fetched vectors when the simplest k-means and linear search method (IVFFlatIndex of FAISS) is utilized, and it is clear that the number of fetched vectors significantly decreases as the number of clusters increases. As shown in Figure 1(b), even when we use the VQ metric, the VQ value under recall@1=90% varies greatly depending on the number of clusters. This indicates that VQ is not a suitable metric for comparison between algorithms with different memory usage. Based on the above discussion, this paper evaluates ANN methods for storage using the number of fetched vectors under a given recall and a given number of clusters.
Another finding by this investigation is that the number of fetched vectors under a given recall of SPANN is not always better than that of the exhaustive method as shown in Figure 1(a).
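A sketch of this exhaustive baseline using FAISS's IndexIVFFlat is shown below; the random stand-in data and the parameter values are illustrative assumptions.

import faiss
import numpy as np

d, n_keys, n_clusters = 128, 100_000, 1000
keys = np.random.default_rng(0).normal(size=(n_keys, d)).astype("float32")

quantizer = faiss.IndexFlatL2(d)                 # linear search over centroids
index = faiss.IndexIVFFlat(quantizer, d, n_clusters)
index.train(keys)                                # k-means partitioning
index.add(keys)

index.nprobe = 8                                 # number of clusters fetched per query
D, I = index.search(keys[:5], 1)                 # 1-NN for 5 queries
print(I.ravel())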
4 PROPOSED METHOD
4.1 VISUALIZATION WITH 2-DIMENSIONAL TOY DATA
In this section, we explain the intuition behind our proposed method and visually demonstrate how it improves the accuracy of choosing the correct cluster. One hundred key vectors are uniformly sampled from the 2-dimensional space between −1 and 1. In the index building phase, we divide the
key vectors into four clusters. The representative vectors of the clusters are placed at (x, y) = (1/2, 1/2), (−1/2, 1/2), (−1/2, −1/2), (1/2, −1/2). Then we assign one cluster to each key vector according to the Euclidean distance, i.e., the distance of a key vector to the representative vector of the assigned cluster is smaller than that to the other clusters. Figure 2(a) shows the key vectors colored by the assigned cluster. In a conventional method, when a query is given in the search phase, we choose the one cluster whose representative vector is the closest to the query among the four clusters, in the same manner as that used for assigning clusters to key vectors in the index building phase. Figure 2(b) shows 10000 queries colored by the cluster chosen for each query, and the clusters are clearly divided into four quadrants as expected. On the other hand, Figure 2(c) shows the queries colored by the correct cluster containing the nearest neighbor key vector of each query. We can see that the nearest neighbor key vector of a query vector in the first quadrant, x > 0, y > 0, can be contained in the cluster in green whose representative vector is in the second quadrant, x < 0, y > 0. As a result, the true border lines of the clusters for queries are quite complex. In Figure 2(d), queries for which wrong clusters are chosen are shown in gray. When the chosen cluster does not contain the nearest neighbor vector, we need to fetch vectors from another cluster to increase the recall. This phenomenon leads to an increase in the number of fetched vectors under a given recall, and a deterioration of the recall-latency tradeoff.
Therefore, improving the accuracy of choosing the correct cluster is the fundamental challenge. We attempt to accurately predict the complex border lines by using a neural network that is trained with the objective of choosing the correct cluster. We use a simple three-layer MLP. The input dimension is equal to the dimension of the query and key vectors, the output dimension is equal to the number of clusters, and the dimension of the hidden layer is set to 128 in this experiment. The query vectors for training are sampled independently every epoch from the same distribution. The ground truth cluster is searched by exhaustive search for each training query. Using those pairs of query and ground truth cluster as training data, we train the neural network with cross-entropy loss in a supervised manner. Note that this is a data-dependent method because we look into the clustered key vectors for generating the ground truth labels. Figure 3 shows that the prediction by the neural network approaches the correct border lines of the clusters as training proceeds. These results indicate that we can improve the accuracy of choosing the correct cluster by employing a data-dependently trained neural network.
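A minimal PyTorch sketch of the three-layer MLP and its supervised training step is given below; the array names and the learning rate are illustrative assumptions, with ground-truth cluster ids presumed to have been found by exhaustive search as described above.

import torch
import torch.nn as nn

dim, hidden, n_clusters = 128, 128, 1000
net = nn.Sequential(
    nn.Linear(dim, hidden), nn.ReLU(),
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, n_clusters),              # one logit per cluster
)
opt = torch.optim.AdamW(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(train_queries, gt_cluster):
    logits = net(train_queries)                 # (batch, n_clusters)
    loss = loss_fn(logits, gt_cluster)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# at search time, fetch the top-scoring clusters for a query q
def choose_clusters(q, n_probe=4):
    with torch.no_grad():
        return torch.topk(net(q), n_probe).indices

q = torch.randn(1, dim)
print(choose_clusters(q))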
4.2 EXPERIMENT RESULTS
In this section, we describe the experiment using SIFT (Jégou et al., 2011a;b) and CLIP (Radford et al., 2021) data for demonstrating that our proposed method is useful for realistic data.
Dataset. For SIFT1M, we use the one million 128-dimensional SIFT1M base vectors as key vectors. Another one million vectors are sampled from the SIFT1B base data and used as query vectors for training. The SIFT1B query data are used as query vectors for test. Euclidean distance is employed as the metric. For CLIP, we extracted feature vectors from 1.28 million ImageNet (Deng et al., 2009) training images with the ViT-B/16 model (Dosovitskiy et al., 2021). Although the dimension of the model's feature vector is 512, we use the first 128 dimensions for our experiment. We split the data into 0.63 million, 0.64 million, and 0.01 million vectors for keys, training queries, and test queries, respectively. Cosine similarity is employed as the metric.
Comparison with conventional methods. We compare our method with two conventional methods. The first is the exhaustive method: key vectors are partitioned by k-means in the index building phase; in the search phase, the distances from a query to the representative vectors of all the clusters are calculated and the cluster corresponding to the closest representative vector is chosen. The second is SPANN (Chen et al., 2021). For SPANN, we build the index by the algorithm implemented in SPANN, which includes the partitioning process. Since SPANN proposes multiple cluster assignment for improving recall, we set the ReplicaCount to its default value 8, which means one key vector is contained in at most 8 clusters. As described in Section 3.2.2, we set the number of clusters for all the methods, including our proposed method, to 1000 for fair comparison.
Neural network structure. We employ a three-layer MLP. For fair comparison, we carefully design the neural network so as not to require more memory usage than the conventional methods.
As described in Section 3.2.2, if we employed a larger memory budget, we could significantly improve the recall just by increasing the number of clusters using that budget. Concretely, we set the size of the hidden layer to 128, which is the same as the vector dimension. Then, the number of parameters of the output layer, 128 × 1000 where 1000 is the number of clusters, becomes dominant among the three layers. The memory usage of this output layer is the same as that of the representative vectors of all the clusters, which is required by the conventional methods. Regarding Tc for the neural network inference, the computation in the largest, last layer is very close to that of the distance calculations between query vectors and all the representative vectors, and the latency of this computation is much smaller than the latency for fetching vectors, as shown in Table 1. So Tb is still dominant even when employing the neural network for search.
Training overview. We train the neural network for 150 epochs with the AdamW (Loshchilov & Hutter, 2019) optimizer. The batch size is set to 1000. In order to avoid overfitting, we add some noise sampled from a normal distribution to the training data every iteration. Every 50 epochs, some key vectors are added to another existing cluster. We call this process duplication.
Detail of duplication process. For a training query vector, if none of the top-kd clusters predicted by the neural network contains the ground truth key vector, the pair of the top-1 cluster and the ground truth key vector is marked as a candidate pair for duplication. After checking all the training query vectors, we additionally put the key vector into the cluster for the most frequently marked rd% of pairs. kd and rd are hyperparameters and are set to 4 and 20 by default, respectively.
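One duplication round can be sketched as follows, assuming hypothetical inputs: pred_topk (the top-k_d predicted cluster ids per training query), gt_key (the ground-truth key id per query), and a dict clusters mapping each cluster id to a set of key ids.

from collections import Counter

def duplicate(pred_topk, gt_key, clusters, rd=0.20):
    marks = Counter()
    for topk, key in zip(pred_topk, gt_key):
        # a key may already live in several clusters after earlier rounds,
        # so we check membership rather than comparing cluster ids
        if not any(key in clusters[c] for c in topk):
            marks[(topk[0], key)] += 1          # mark the (top-1 cluster, gt key) pair
    n_dup = int(rd * len(marks))
    for (c, key), _ in marks.most_common(n_dup):
        clusters[c].add(key)                    # duplicate the key into that cluster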
Loss function. We compare three loss functions. The first is naive cross-entropy loss (CE). Although multiple clusters can contain the nearest key vector after duplication, we need to pick only one cluster as the ground truth because CE cannot handle multiple positive labels; we always use the initial ground truth cluster as the single positive. The second is a modified version of cross-entropy loss (MCE) that picks the one cluster where the neural network gives the largest score among the positive clusters. The third is binary cross-entropy (BCE) loss, which can handle multiple positive labels.
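A sketch of the MCE variant is given below: among a query's positive clusters, the one to which the network currently assigns the largest score is treated as the cross-entropy target; positive_mask is an assumed boolean tensor marking the clusters that contain the nearest key.

import torch
import torch.nn.functional as F

def mce_loss(logits, positive_mask):
    # logits: (batch, n_clusters); positive_mask: bool, True where the
    # cluster contains the nearest key (possibly several after duplication)
    masked = logits.masked_fill(~positive_mask, float("-inf"))
    target = masked.argmax(dim=1)               # best-scoring positive cluster
    return F.cross_entropy(logits, target)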
Results. The results are shown in Figure 4, Table 2, and Table 3. The figure includes the results of 10 trials each. The tables show the average and the standard deviation values of the 10 trials. Our proposed method achieves the smallest number of vectors fetched from storage under any recall value, which means that our method will provide the smallest mean latency when the latency for storage access is dominant. Under 90% recall on SIFT data, the number of vectors read from storage is 58% and 80% smaller than that of the exhaustive method and SPANN, respectively. Also for CLIP data, steady improvement is obtained.
4.3 ABLATION STUDY
4.3.1 EFFECT OF EACH INGREDIENT
Table 4 shows how much each ingredient of our proposed method improves the metrics. We report the average and standard deviation values of the number of fetched vectors across 10 trials for each condition. (a) to (c) compare the loss functions for training. Both MCE and CE show
good performance, but CE is better for R@1≤0.95 and MCE is better for R@1=0.99. BCE loss deteriorates the performance, and an increase in the variation is observed. (d) is the conventional exhaustive method using linear search for choosing clusters and employs neither neural networks nor duplication. (e) employs only the neural network and (f) employs only duplication. Comparing (a,d,e,f), both the neural network and duplication contribute to improving performance. (g) shows the effect of the hyperparameter kd explained in Section 4.2. By increasing kd, the number of fetched vectors decreases. In (h), we execute duplication only once, after the 150-epoch training is completed. The result is worse than that in the default setting where duplication is executed every 50 epochs. This indicates that executing the duplication process during training can relax the complexity of the border lines of the clusters and help the neural network to fit them. (i) shows the result when we use the clustering information obtained after the 150-epoch training and duplication of setting (a), but choose the clusters to be fetched by linear search across the updated centroid vectors of all the clusters. This shows that executing search with the neural network inference is advantageous. From this experiment, we can see that although each ingredient of our proposed method alone can significantly reduce the number of fetched vectors under a given recall compared to the conventional method, further improvement is obtained by combining them.
4.3.2 BUILDING INDEX BY SPANN
In our experiment, we use k-means for partitioning, but it is an exhaustive method and can take too much time for larger datasets. In order to confirm that k-means is not a necessary component of our method, we apply the algorithm in SPANN to execute the partitioning. As a result, the number of fetched vectors under 90% recall significantly improves from 30729±1749 to 7372±162. This indicates that we may utilize a fast algorithm such as SPANN for clustering instead of exhaustive k-means when our proposed method is applied to larger datasets.
5 LIMITATIONS AND FUTURE WORK
The discussion in this paper assumes that the mean latency for search is determined by the storage access time; if this condition is not satisfied, the discussion may be invalid.
For CLIP data, the improvement over the exhaustive method is steady but marginal, as shown in Figure 4. The difference in the amount of improvement between SIFT and CLIP may come from how well the training data reproduce the query distribution. This means the effectiveness of the proposed method could be limited when the query distribution is close to uniform and not predictable. Although this is a common issue in almost all ANN methods, addressing such difficult use cases remains future work.
Our proposed method has a couple of hyperparameters. Although we show some of their effects in Section 4.3, thorough optimization is future work. The best values may depend on the data distribution and the required recall. However, it is not difficult to find acceptable hyperparameter values that provide at least better performance than the exhaustive method.
Another apparent remaining future work is to apply the proposed method to larger datasets such as billion-scale or trillion-scale ones. However, we believe that the findings and directions we reveal in this paper will also be useful for them.
6 CONCLUSION
We investigated the requirements for improving the recall-latency tradeoff of large-scale approximate nearest neighbor search under the condition where the key vectors are stored in storage devices with large capacity and large read latency. We pointed out that in order to achieve this, we need to reduce the number of vectors fetched from storage devices during search. It is then required to choose the correct clusters containing the nearest neighbor key vector to a given query vector with high accuracy. We proposed to use neural networks to predict the correct cluster. By employing our proposed method, we reduced the number of vectors read from storage by more than 58% under 90% recall on SIFT1M data compared to the conventional methods. | 1. What is the novel contribution of the proposed approach in reducing the number of vector fetches during queries?
2. How effective is the proposed method in terms of recall and query latency compared to existing methods like SPANN and DiskKNN?
3. What is the significance of the observation that the nearest cluster does not always contain the nearest neighbor vector of a query vector?
4. Are there any limitations or trade-offs associated with the proposed approach, such as increased training time or neural network inference overhead?
5. How does the proposed method compare to other state-of-the-art methods on larger datasets like Deep1B and SPACEV1B, and SIFT 1B?
6. What is the reasoning behind using a three-layer MLP instead of a simpler or more complex neural network architecture?
7. Could the authors provide further insights into the choice of the extra set of vectors used during training, and how they affect the performance of the proposed method?
8. How robust is the proposed method against out-of-distribution queries, and what are the implications for real-world applications where queries may not always follow the same distribution as the training data? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposed three-layer MLP-based training with an extra set of vectors to reduce the number of fetches from storage during a query. The neural network is trained to correctly place the vectors that get confused at the cluster boundaries. The idea is novel in terms of utilizing more data during training to improve the query-to-key-vectors map.
Strengths And Weaknesses
The approach reduced the number of vectors fetched from the storage by 58% under 90% recall on SIFT1M data. The proposed method is intuitive and the paper is well written.
Following are more comments and questions:
The strategy of reducing the number of vectors fetched from the storage is familiar, as projected in this paper. The existing partitioning approaches and disk-based vector storage methods like SPANN and DiskKNN have tried to achieve that. It is great that the paper reinforces this objective, but it only partially qualifies as a novel contribution.
It is mentioned that "often the nearest cluster does not contain the nearest neighbor vector of a query vector." How often can this happen? It looks true for out-of-distribution queries; does it happen for most in-distribution queries as well? More experimental evidence for this would make the claim stronger.
Experiments on larger datasets are needed. Is the Ta ≪ Tb and Tc ≪ Tb assumption valid for larger (100M-1B scale) datasets as well?
How does it compare with SPANN on Deep1B and SPACEV1B, and SIFT 1B?
It would be nice to report the training time on the additional data required for this method.
Figure 4 demonstrates that the proposed method fetches fewer vectors from storage, but with the overhead of neural-net inference. The first paragraph on page 8 points out that Tb is a more dominant factor than the neural-net inference, but this does not guarantee that the difference in Tb between SPANN and the proposed method exceeds the additional neural-net latency. Please provide Recall vs. query latency plots for SPANN and the proposed work.
The baseline "Exhaustive" is confusing as it generally means to perform brute-force KNN search. It can be replaced with "Kmeans". Just a suggestion.
SPANN is worse than Kmeans regarding the number of vectors to fetch from storage. That is an interesting result. I would like to know the authors' comments on that.
For SIFT1M, it uses another set of 1M points for learning the Neural net. We can index these vectors as well. Why are we only indexing 1M points?
Clarity, Quality, Novelty And Reproducibility
The paper is well written and the work looks original. No code has been provided for reproducibility.
ICLR | Title
On Storage Neural Network Augmented Approximate Nearest Neighbor Search
Abstract
Large-scale approximate nearest neighbor search (ANN) has been gaining attention along with the latest machine learning research employing ANNs. If the data is too large to fit in memory, it is necessary to search for the most similar vectors to a given query vector from the data stored in storage devices, not from that in memory. Storage devices such as NAND flash memory have larger capacity than memory devices such as DRAM, but they also have larger latency to read data. Therefore, ANN methods for storage require completely different approaches from conventional in-memory ANN methods. Since the approximation that the time required for search is determined only by the amount of data fetched from storage holds under reasonable assumptions, our goal is to minimize the amount of fetched data while maximizing recall. For partitioning-based ANNs, vectors are partitioned into clusters in the index building phase. In the search phase, some of the clusters are chosen, the vectors in the chosen clusters are fetched from storage, and the nearest vector is retrieved from the fetched vectors. Thus, the key point is to accurately select the clusters containing the ground truth nearest neighbor vectors. We accomplish this by proposing a method to predict the correct clusters by means of a neural network that is gradually refined by alternating supervised learning and duplicated cluster assignment. Compared to state-of-the-art SPANN and an exhaustive method using k-means clustering and linear search, the proposed method achieves 90% recall on SIFT1M with 80% and 58% less data fetched from storage, respectively.
1 INTRODUCTION
Large-scale Approximate Nearest Neighbor searches (ANNs) for high-dimensional data are receiving growing attention because of their appearance in emerging directions of deep learning research. For example, in natural language processing, methods leveraging relevant document retrieval by similar dense vector search have significantly improved the scores of open-domain question-answering tasks (Karpukhin et al., 2020). Also for language modeling tasks, Borgeaud et al. (2021) showed a model augmented by retrieval from 2 trillion tokens performs as well as models 25 times larger. In computer vision, Nakata et al. (2022) showed that image classification using ANNs has the potential to alleviate catastrophic forgetting and improve accuracy in continual learning scenarios. In reinforcement learning, exploiting past experiences stored in external memory for an agent to make better decisions has been explored (Blundell et al., 2016; Pritzel et al., 2017). Recently, Goyal et al. (2022) and Humphreys et al. (2022) attempted to scale up the capacity of memory with the help of ANN and showed promising results.
ANNs are algorithms to find the one or k key vectors that are the nearest to a given query vector among a large number of key vectors. Strict search is not required, but higher recall with lower latency is demanded. In order to achieve this, an index is generally built by data-dependent preprocessing. Thus, an ANN method consists of an index building phase and a search phase. As the number of key vectors increases, storing all of them in memory (e.g. DRAM) becomes very expensive, and one is forced to store the vectors in storage devices such as NAND flash memory. In general, a storage device has much larger capacity per unit cost, but its latency is also much larger than memory. When all the key vectors are stored in memory, as seen in most papers on ANN, it is effective to reduce the number of distance calculations between vectors by employing graph-based (Malkov & Yashunin, 2018; Fu et al., 2019) or partitioning-based methods (Dong et al., 2019) and/or to
reduce the time for each distance calculation by employing quantization techniques such as Product Quantization (Jégou et al., 2011a). On the other hand, when the key vectors are stored in storage rather than memory, the latency for fetching data from storage becomes the dominant contributor to the total search latency. Therefore, the ANN method for the latter case (ANN method for storage) needs a completely different approach from the ANN method for the former case (all-in-memory ANN method). In this paper, we identify the most fundamental challenges of the ANN method for storage, and explore ways to solve them.
Since the latency for fetching data is approximately proportional to the amount of fetched data, it should be a good strategy to reduce the number of vectors to be fetched during search as much as possible, while achieving high recall. Although SPANN (Chen et al., 2021), which is the state-of-the-art ANN method for storage, is also designed with the same strategy, our investigation reveals that the characteristics of its index are still suboptimal from the viewpoint of the number of fetched vectors under a given recall. Moreover, perhaps surprisingly, an exhaustive method combining simple k-means clustering and linear search can perform better than SPANN on some datasets in terms of this metric.
Another consideration worth noting is the exploitation of the efficient and high-throughput computation of GPUs. It is reasonable to assume that an ANN algorithm used in a deep learning application runs on the same system as the deep learning algorithm, and most deep learning algorithms are designed to be run on systems equipped with GPUs. Therefore, the ANN algorithm will also be run on a system with a GPU.
In partitioning-based ANN methods, key vectors are partitioned into clusters, which are referred to as posting lists in the SPANN paper, usually based on their proximity in the index building phase. First, in the search phase, some clusters that would contain the desired vector with high probability are chosen according to the distance between the query vector and the representative vector of each cluster. Second, the vectors in the chosen clusters are fetched from storage. Since the page size of storage devices is relatively large (e.g. 4KB), it is efficient to make a page contain vectors of the same cluster, as other ANN methods for storage (Chen et al., 2021; Jayaram Subramanya et al., 2019) also adopt. Third, by computing distances between the query and the fetched vectors, the k closest vectors are identified and output. In order to achieve high recall with low latency, we need to increase the accuracy of choosing the correct cluster containing the ground truth nearest vector at the first step of the search phase.
If clustering is made by the k-means algorithm and a query vector is picked from the existing key vectors, the cluster whose centroid vector is the closest to the query always contains the nearest neighbor key vector by the definition of the k-means algorithm. However, when a query is not exactly the same as any one of the key vectors, which is the common case for ANN, the cluster whose representative vector is the closest to the query often does not contain the nearest neighbor vector, and this limits recall. To the best of our knowledge, this problem has not been explicitly addressed in the literature. Our intuition for tackling this problem is that the border lines that determine which cluster contains the nearest neighbor vector of a query are different from, and more complicated than, the border lines that determine the assignment of clusters to key vectors. We employ neural networks trained on given clustered key vectors to predict the correct cluster among the clusters defined by the complicated border lines. Assigning multiple clusters to a key vector is a conventional method to improve the accuracy of choosing the correct cluster. Combining this duplication technique with our method could provide the additional effect of relaxing demands on the neural network by simplifying border lines. We demonstrate how our method works with a visualization using 2-D toy data, and then empirically show that it is effective for realistic data as well. Our contributions include:
• We clarify that we need to reduce the amount of data fetched from storage when key vectors sit in storage devices, because the latency for fetching data is the dominant part of the search latency.
• We first explicitly point out and address the problem that, in partitioning-based ANN methods, the nearest cluster often does not contain the nearest neighbor vector of a query vector, and this limits the recall-latency performance.
• We propose a new ANN method combining cluster prediction with neural networks and duplicated cluster assignment, and show empirically that the proposed method improves the performance on two realistic million-scale datasets.
2 RELATED WORKS
ANN for storage. DiskANN (Jayaram Subramanya et al., 2019) is a graph-based ANN method for storage. The information of connections defining the graph structure and the full-precision vectors are stored on storage, and the vectors compressed by Product Quantization (Jégou et al., 2011a) are stored in memory. The algorithm traverses the graph by reading only the connection information on the path from storage and computes distances between a query and the compressed vectors in memory. Although they compensate for the deterioration of recall due to lossy compression by reranking with the full-precision vector data, the recall-latency performance is inferior to SPANN (Chen et al., 2021). SPANN is another method dedicated to ANN for storage and exhibits state-of-the-art performance. It employs a partitioning-based approach. By increasing the number of clusters as much as possible, it manages to reduce the number of vectors fetched from storage under a given recall. In order to reduce the latency to choose clusters during search even when the number of clusters is large, they employ the SPTAG algorithm, which combines tree-based and graph-based ANNs. They also proposed an efficient duplication method aiming at increasing the probability that a chosen cluster contains the ground truth key vector. However, our investigation in Section 3.2.2 shows that its performance can be worse than a naive exhaustive method on some datasets.
ANN with GPU. FAISS (Johnson et al., 2019) supports many ANN algorithms accelerated by GPUs' massively parallel computing. On-storage search is also discussed on their project page. SONG (Zhao et al., 2020) optimized the graph-based ANN algorithm for GPU. They modified the algorithm so that distance computations can be parallelized as much as possible, and showed significant speedup. However, they assume only all-in-memory scenarios.
ANN with neural networks. DSI (Tay et al., 2022) predicts the indices of the nearest neighbor key vectors directly from the query vectors with a neural network. We explore to use neural networks to predict the clusters containing the nearest neighbor vector rather than the vector indices themselves. DSI is also targeted for all-in-memory ANN. BLISS (Gupta et al., 2022) and NeuralLSH (Dong et al., 2019) are methods to improve the partitioning rule using neural networks. They apply the same rule to a query for choosing clusters as well. As depicted in Section 4.1, when the rule for partitioning keys is employed to choose clusters, often the chosen clusters don’t contain the ground truth key vector. Our method where a neural network is trained to predict the correct cluster for a given query is orthogonal and can be combined with these methods.
3 PRELIMINARIES
3.1 SYSTEM ENVIRONMENT
In this paper, we assume that the system on which our ANN algorithm runs has GPUs and storage devices in addition to CPUs and memory. The GPU provides high-throughput computing through massively parallel processing. The storage can store a larger amount of data at lower cost, but has larger read latency than memory devices. Data that are commonly used for all queries, e.g., all the representative vectors of clusters, are loaded in advance into memory, from which the CPU or GPU can read data with low latency, and remain there during the search. On the other hand, all the key vectors are stored in the storage devices, and for simplicity, we assume that data fetched from storage for computations for one query is not cached in memory to be reused for computations for another query.
3.2 METRICS
3.2.1 THE NUMBER OF FETCHED VECTORS AS A PROXY METRICS OF LATENCY
Our goal is to minimize the average search time per query for nearest neighbor search, which we refer to as mean latency, and simultaneously to maximize the recall. Without loss of generality, the mean latency T in the systems described in the previous subsection is expressed by the following equation,
T = Ta + Tb + Tc,
where Ta is the latency for computations using data that always sit in memory, Tb is the latency required for fetching data from storage, and Tc is the latency for computations using data fetched from storage for each query. For example, in a partitioning-based ANN method such as SPANN, Ta is the latency for the process to determine the clusters (called posting lists in SPANN) to be fetched from storage, Tb is the latency for fetching the vectors in the chosen clusters from storage, and Tc is the latency for the computations to find the nearest neighbor vectors in the fetched vectors. In this paper, for simplicity, assuming that Ta ≪ Tb and Tc ≪ Tb, we employ the approximation T ≈ Tb. Then, since Tb is roughly proportional to the number of fetched vectors, the number of fetched vectors is an effective metric to evaluate the mean latency.
The above assumptions are reasonable in a realistic setting. For Tc, the computing performances of CPUs equipped with vector arithmetic units and GPUs capable of massively parallel operations range from several hundred GFLOPS to more than 1 TFLOPS. On the other hand, the read bandwidth of storage devices is at best a few GB/s even when high-speed NVMe is used. This means the data fetched in Tb can be processed in less than 1/100 of Tb. Note that if most of the process for Tc is executed in parallel with the process for Tb, for example, by performing distance calculation in the background of asynchronous storage access, the effective Tc becomes almost zero. For Ta, when an exhaustive linear search is used to choose the clusters, i.e., calculating the distance between the query and the representative vectors of all clusters in order to find the closest clusters, a 10-TFLOPS GPU can process 10 million representative vectors of 100 dimensions each within a much shorter time than Tb = 1 ms. Also in the SPANN case without GPUs, since a fast algorithm that combines a tree-based method and a graph-based method is applied to choose the clusters, Ta is quite short even when the number of posting lists is as large as a few hundred million. As a typical example, Table 1 shows the measured Ta, Tb, and Tc on the SIFT1M dataset.
3.2.2 MEMORY USAGE
Since memory usage greatly affects the latency of in-memory ANNs, VQ (Vector-Query), a measure of throughput normalized by memory usage, was introduced in GRIP (Zhang & He,
2019) to compare algorithms with different memory usage as fairly as possible, and it is also utilized in SPANN (Chen et al., 2021). SPANN claims superior capacity in large vector search scenarios because its VQ value is greater than that of other algorithms. Here, we consider whether VQ is really a fair metric for comparing methods with different memory usage. In a partitioning-based ANN method for storage, memory capacity limits the number of clusters, since the representative vectors of all the clusters must be kept in memory during search. We therefore investigate how the number of clusters affects the recall-latency and recall-VQ curves. Figure 1(a) shows recall versus the number of fetched vectors when the simplest k-means and linear search method (IndexIVFFlat of FAISS) is utilized; the number of fetched vectors clearly decreases significantly as the number of clusters increases. As shown in Figure 1(b), even when we use the VQ metric, the VQ value under recall@1=90% varies greatly depending on the number of clusters. This indicates that VQ is not a suitable metric for comparison between algorithms with different memory usage. Based on the above discussion, this paper evaluates ANN methods for storage using the number of fetched vectors under a given recall and a given number of clusters.
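A minimal sketch of how a Figure 1(a)-style measurement can be produced with FAISS's IndexIVFFlat is shown below; the random data stands in for SIFT1M, and counting fetched vectors via the sizes of the probed posting lists is our reading of the metric, not the authors' exact script.

```python
import numpy as np
import faiss

d, nlist = 128, 1000
rng = np.random.default_rng(0)
xb = rng.standard_normal((100_000, d)).astype("float32")  # stand-in keys
xq = rng.standard_normal((1_000, d)).astype("float32")    # stand-in queries

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, nlist)
index.train(xb)
index.add(xb)

gt = faiss.IndexFlatL2(d)
gt.add(xb)
_, gt_ids = gt.search(xq, 1)                    # exact nearest neighbors

for nprobe in (1, 2, 4, 8, 16):
    index.nprobe = nprobe
    _, ids = index.search(xq, 1)
    recall = float((ids[:, 0] == gt_ids[:, 0]).mean())
    _, probed = quantizer.search(xq, nprobe)    # posting lists read per query
    fetched = np.mean([sum(index.invlists.list_size(int(l)) for l in row)
                       for row in probed])
    print(f"nprobe={nprobe:2d}  recall@1={recall:.3f}  fetched~{fetched:.0f}")
```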
Another finding of this investigation is that the number of fetched vectors under a given recall for SPANN is not always better than that of the exhaustive method, as shown in Figure 1(a).
4 PROPOSED METHOD
4.1 VISUALIZATION WITH 2-DIMENSIONAL TOY DATA
In this section, we explain the intuition behind our proposed method and visually demonstrate how it improves the accuracy of choosing the correct cluster. One hundred key vectors are uniformly sampled from the 2-dimensional square [−1, 1]². In the index building phase, we divide the
key vectors into four clusters. The representative vectors of the clusters are placed at (x, y) = (1/2, 1/2), (−1/2, 1/2), (−1/2,−1/2), (1/2,−1/2). Then we assign one cluster to each key vector according to the Euclidean distance, i.e., the distance of a key vector to the representative vector of its assigned cluster is smaller than that to any other cluster. Figure 2(a) shows the key vectors colored by the assigned cluster. In a conventional method, when a query is given in the search phase, we choose the one cluster among the four whose representative vector is the closest to the query, in the same manner used for assigning clusters to key vectors in the index building phase. Figure 2(b) shows 10000 queries colored by the cluster chosen for each query, and the clusters are clearly divided into four quadrants as expected. On the other hand, Figure 2(c) shows the queries colored by the correct cluster, i.e., the one containing the nearest neighbor key vector of each query. We can see that the nearest neighbor key vector of a query in the first quadrant, x > 0, y > 0, can be contained in the cluster shown in green, whose representative vector is in the second quadrant, x < 0, y > 0. As a result, the true border lines of the clusters for queries are quite complex. In Figure 2(d), queries for which wrong clusters are chosen are shown in gray. When the chosen cluster does not contain the nearest neighbor vector, we need to fetch vectors from another cluster to increase the recall. This phenomenon increases the number of fetched vectors under a given recall and deteriorates the recall-latency tradeoff.
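A minimal sketch reproducing this toy experiment is shown below; the random seed and sampling details are assumptions, but the centroid placement and assignment rule follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
keys = rng.uniform(-1, 1, size=(100, 2))
centroids = np.array([[0.5, 0.5], [-0.5, 0.5], [-0.5, -0.5], [0.5, -0.5]])

def closest(points, refs):
    """Index of the closest reference vector (squared L2) for each point."""
    d = ((points[:, None, :] - refs[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

key_cluster = closest(keys, centroids)              # index-building assignment

queries = rng.uniform(-1, 1, size=(10_000, 2))
chosen = closest(queries, centroids)                # cluster chosen at search time
true_cluster = key_cluster[closest(queries, keys)]  # cluster owning the true NN

print("fraction of queries routed to the wrong cluster:",
      float((chosen != true_cluster).mean()))       # the gray points in Fig. 2(d)
```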
Improving the accuracy of choosing the correct cluster is therefore the fundamental challenge. We attempt to accurately predict the complex border lines by using a neural network trained with the objective of choosing the correct cluster. We use a simple three-layer MLP. The input dimension is equal to the dimension of the query and key vectors, the output dimension is equal to the number of clusters, and the dimension of the hidden layer is set to 128 in this experiment. The query vectors for training are sampled independently every epoch from the same distribution. The ground truth cluster is found by exhaustive search for each training query. Using these pairs of query and ground truth cluster as training data, we train the neural network with cross-entropy loss in a supervised manner. Note that this is a data-dependent method, because we look into the clustered key vectors when generating the ground truth labels. Figure 3 shows that the prediction by the neural network approaches the correct border lines of the clusters as training proceeds. These results indicate that we can improve the accuracy of choosing the correct cluster by employing a data-dependently trained neural network.
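A minimal PyTorch sketch of this training loop on the toy data is given below; the learning rate and per-epoch batch size are illustrative assumptions, while the architecture and exhaustive labeling follow the text.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

rng = np.random.default_rng(0)
keys = rng.uniform(-1, 1, size=(100, 2)).astype(np.float32)
centroids = np.array([[0.5, 0.5], [-0.5, 0.5], [-0.5, -0.5], [0.5, -0.5]],
                     dtype=np.float32)
key_cluster = ((keys[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)

mlp = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 4))
opt = torch.optim.AdamW(mlp.parameters(), lr=1e-3)

for epoch in range(150):
    q = rng.uniform(-1, 1, size=(1000, 2)).astype(np.float32)    # fresh queries
    nn_idx = ((q[:, None] - keys[None]) ** 2).sum(-1).argmin(1)  # exhaustive GT
    y = torch.from_numpy(key_cluster[nn_idx])       # label = cluster of true NN
    loss = F.cross_entropy(mlp(torch.from_numpy(q)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```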
4.2 EXPERIMENTAL RESULTS
In this section, we describe experiments using SIFT (Jégou et al., 2011a;b) and CLIP (Radford et al., 2021) data to demonstrate that our proposed method is useful for realistic data.
Dataset. For SIFT1M, we use the one million 128-dimensional SIFT1M base vectors as key vectors. Another one million vectors are sampled from the SIFT1B base data and used as query vectors for training. The SIFT1B query data are used as query vectors for testing. Euclidean distance is employed as the metric. For CLIP, we extracted feature vectors from the 1.28 million ImageNet (Deng et al., 2009) training images with the ViT B/16 model (Dosovitskiy et al., 2021). Although the feature dimension of the model is 512, we use the first 128 dimensions for our experiment. We split the vectors into 0.63 million, 0.64 million, and 0.01 million for key vectors, training query vectors, and test query vectors, respectively. Cosine similarity is employed as the metric.
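A sketch of the CLIP preprocessing described above is shown below; the `features` array (1.28M × 512 float32 ViT-B/16 embeddings) is an assumed placeholder, and L2-normalizing so that cosine similarity reduces to an inner product is a standard choice rather than a detail confirmed by the text.

```python
import numpy as np

# features: assumed placeholder for the 1.28M x 512 ViT-B/16 embeddings
x = features[:, :128].astype(np.float32)       # keep the first 128 dimensions
x /= np.linalg.norm(x, axis=1, keepdims=True)  # cosine similarity == dot product
keys    = x[:630_000]
train_q = x[630_000:1_270_000]
test_q  = x[1_270_000:1_280_000]
```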
Comparison with conventional methods. We compare our method with two conventional methods. The first is the exhaustive method, where key vectors are partitioned by k-means in the index building phase; in the search phase, the distances of a query to the representative vectors of all the clusters are calculated, and the cluster with the closest representative vector is chosen. The second is SPANN (Chen et al., 2021). For SPANN, we build the index with the algorithm implemented in SPANN, which includes the partitioning process. Since SPANN proposes multiple cluster assignment for improving recall, we set the ReplicaCount to its default value of 8, which means one key vector is contained in at most 8 clusters. As described in Section 3.2.2, we set the number of clusters for all the methods, including our proposed method, to 1000 for fair comparison.
Neural network structure. We employ a three-layer MLP. For fair comparison, we carefully design the neural network so that it does not require more memory than the conventional methods.
As described in Section 3.2.2, if we employed a larger memory budget, we could significantly improve the recall just by increasing the number of clusters with that budget. Concretely, we set the size of the hidden layer to 128, the same as the vector dimension. The number of parameters of the output layer is then dominant among the three layers, at 128 × 1000, where 1000 is the number of clusters. The memory usage of this output layer is the same as that of the representative vectors of all the clusters, which the conventional methods must keep in memory. Regarding Tc for the neural network inference, the computation in the largest, last layer is very close to that of the distance calculations between query vectors and all the representative vectors, and the latency of this computation is much smaller than the latency for fetching vectors, as shown in Table 1. Thus Tb remains dominant even when the neural network is employed for search.
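The memory argument can be checked with a quick parameter count (biases ignored for brevity):

```python
dim, hidden, n_clusters = 128, 128, 1000
mlp_params = dim * hidden + hidden * hidden + hidden * n_clusters  # 160768
centroid_floats = n_clusters * dim                                 # 128000
# The 128 x 1000 output layer dominates the parameter count and matches the
# memory the conventional methods already spend on storing all centroids.
print(mlp_params, centroid_floats)
```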
Training overview. We train the neural network for 150 epochs with the AdamW (Loshchilov & Hutter, 2019) optimizer. The batch size is set to 1000. In order to avoid overfitting, we add noise sampled from a normal distribution to the training data at every iteration. Every 50 epochs, some key vectors are additionally assigned to another existing cluster. We call this process duplication.
Detail of the duplication process. For a training query vector, if none of the top-kd clusters predicted by the neural network contains the ground truth key vector, the pair of the top-1 cluster and the ground truth key vector is marked as a candidate pair for duplication. After checking all the training query vectors, we additionally insert the key vector into the corresponding cluster for the most frequently marked rd% of pairs. kd and rd are hyperparameters, set to 4 and 20 by default, respectively.
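The following sketch is our reading of this rule; `predict_topk`, `gt_key`, and the `clusters` list of key-id sets are hypothetical helpers standing in for the trained MLP, the exhaustive labeling, and the index, respectively.

```python
from collections import Counter

def duplicate(train_queries, clusters, kd=4, rd=20):
    """Assign frequently missed ground-truth keys to one more cluster."""
    marks = Counter()
    for q in train_queries:
        topk = predict_topk(q, kd)          # hypothetical: MLP's top-kd clusters
        key = gt_key(q)                     # hypothetical: true nearest key id
        if not any(key in clusters[c] for c in topk):
            marks[(topk[0], key)] += 1      # candidate (cluster, key) pair
    n_dup = int(len(marks) * rd / 100)      # keep the most frequent rd% of pairs
    for (c, key), _ in marks.most_common(n_dup):
        clusters[c].add(key)                # key now lives in one more cluster
```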
Loss function. We compare three loss functions. The first is the naive cross-entropy loss (CE). Although multiple clusters can contain the nearest key vector after duplication, we need to pick only one cluster as the ground truth because CE cannot handle multiple positive labels; we always use the initial ground-truth cluster as the single positive label. The second is a modified version of the cross-entropy loss (MCE) that picks the one cluster to which the neural network gives the largest score among the positive clusters. The third is the binary cross-entropy (BCE) loss, which can handle multiple positive labels.
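A sketch of the MCE variant in PyTorch is shown below; representing the positives as a boolean mask is our implementation choice, not necessarily the authors', and it assumes every query has at least one positive cluster.

```python
import torch
import torch.nn.functional as F

def mce_loss(logits: torch.Tensor, pos_mask: torch.Tensor) -> torch.Tensor:
    """logits: (batch, n_clusters); pos_mask: bool, True where the cluster
    contains the nearest key after duplication."""
    masked = logits.masked_fill(~pos_mask, float("-inf"))
    target = masked.argmax(dim=1)           # highest-scoring positive cluster
    return F.cross_entropy(logits, target)
```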
Results. The results are shown in Figure 4, Table 2, and Table 3. The figure includes the results of 10 trials each. The tables show the average and standard deviation over the 10 trials. Our proposed method fetches the smallest number of vectors from storage at every recall value, which means that our method will provide the smallest mean latency when the latency for storage access is dominant. Under 90% recall on the SIFT data, the number of vectors read from storage is 58% and 80% smaller than that of the exhaustive method and SPANN, respectively. A steady improvement is also obtained for the CLIP data.
4.3 ABLATION STUDY
4.3.1 EFFECT OF EACH INGREDIENT
Table 4 shows how much each ingredient of our proposed method improves the metric. We report the average and standard deviation of the number of fetched vectors across 10 trials for each condition. (a) to (c) compare the loss functions for training. Both MCE and CE show
good performance, but CE is better at R@1≤0.95 and MCE is better at R@1=0.99. The BCE loss deteriorates the performance, and an increase in variance is observed. (d) is the conventional exhaustive method using linear search for choosing clusters, which employs neither a neural network nor duplication. (e) employs only the neural network and (f) employs only duplication. Comparing (a,d,e,f), both the neural network and duplication contribute to improving performance. (g) shows the effect of the hyperparameter kd explained in Section 4.2: increasing kd decreases the number of fetched vectors. In (h), we execute duplication only once, after the 150-epoch training is completed. The result is worse than in the default setting, where duplication is executed every 50 epochs. This indicates that interleaving the duplication process with training can relax the complexity of the cluster border lines and help the neural network fit them. (i) shows the result when we use the clustering obtained after 150 epochs of training and duplication in setting (a), but choose the cluster to be fetched by linear search over the updated centroid vectors of all the clusters. This shows that executing the search with neural network inference is advantageous. From this experiment, we can see that although each ingredient of our proposed method on its own can significantly reduce the number of fetched vectors under a given recall compared to the conventional method, further improvement is obtained by combining them.
4.3.2 BUILDING THE INDEX WITH SPANN
In our experiments, we use k-means for partitioning, but it is an exhaustive method and can take too much time for larger datasets. In order to confirm that k-means is not a necessary component of our method, we apply the algorithm in SPANN to execute the partitioning. As a result, the number of fetched vectors under 90% recall significantly improves, from 30729±1749 to 7372±162. This indicates that we may utilize a fast algorithm such as SPANN for clustering instead of exhaustive k-means when our proposed method is applied to larger datasets.
5 LIMITATIONS AND FUTURE WORK
The discussion in this paper assumes that the mean search latency is determined by the storage access time; if this condition is not satisfied, our conclusions may not hold.
For the CLIP data, the improvement over the exhaustive method is steady but marginal, as shown in Figure 4. The difference in the amount of improvement between SIFT and CLIP may come from how well the training data reproduce the query distribution. This means the effectiveness of the proposed method could be limited when the query distribution is close to uniform and not predictable. Although this is a common issue in almost all ANN methods, addressing such difficult use cases remains future work.
Our proposed method has a couple of hyperparameters. Although we show some of their effects in Section 4.3, thorough optimization remains future work; the optimal values may depend on the data distribution and the required recall. However, it is not difficult to find acceptable hyperparameter values that provide at least better performance than the exhaustive method.
Another obvious piece of future work is to apply the proposed method to larger datasets, such as billion-scale or trillion-scale ones. We believe that the findings and directions we reveal in this paper will also be useful at those scales.
6 CONCLUSION
We investigated the requirements for improving the recall-latency tradeoff of large-scale approximate nearest neighbor search under the condition where the key vectors are stored in storage devices with large capacity and large read latency. We pointed out that achieving this requires reducing the number of vectors fetched from storage devices during search, which in turn requires choosing, with high accuracy, the correct clusters containing the nearest neighbor key vector for a given query. We proposed using neural networks to predict the correct cluster. By employing our proposed method, we reduced the number of vectors read from storage by 58% or more under 90% recall on SIFT1M compared to the conventional methods.
1. What is the focus of the paper regarding approximate nearest neighbor search?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its novelty and effectiveness?
3. Do you have any concerns about the indexing and inference stage or the evaluation protocol used in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as conducting experiments on larger-scale datasets or comparing with other ANN methods?
Summary Of The Paper
This paper concerns the approximate nearest neighbor search (ANN) problem where the data is too large to fit in CPU memory. The author considers partition-based methods to tackle this problem, where data are partitioned into clusters at the indexing stage. In the inference phase, vectors within a cluster are fetched from storage to compute the nearest neighbors, which is the main bottleneck. Thus, the author proposes to learn a shallow MLP to predict which clusters contain the k-nearest-neighbor data points for any given query, and shows that the proposed technique can reduce the number of fetched vectors at various recall levels compared to baselines.
Strengths And Weaknesses
Strengths
The proposed method seems legitimate under the “number of fetched vectors” criterion.
Weaknesses
Novelty is rather weak. The idea of using ML models to predict which clusters a given query belongs to is not new. See [1] for example.
The indexing and inference stages are not clearly described. Pseudo-algorithm blocks would be much appreciated. Time and space complexity analyses are also missing.
The canonical evaluation protocol in ANN is not studied, namely the recall versus latency trade-off, which is the most crucial metric in this community.
The authors should conduct experiments on larger-scale datasets (e.g., SIFT-1B, DEEP1B) and compare with more large-scale ANN methods, as presented in the Big-ANN-Benchmarks [2]. Otherwise, it is hard to verify the effectiveness and applicability of the proposed technique in the real large-scale regime.
For small datasets such as SIFT-1M and CLIP, the authors should also compare with in-memory ANN solutions such as ScaNN and HNSW, to name just a few.
References
[1] A Multilabel Classification Framework for Approximate Nearest Neighbor Search (first appeared on arXiv in 2019, later accepted at NeurIPS 2022)
[2] https://github.com/harsha-simhadri/big-ann-benchmarks (NeurIPS21 Workshop)
Clarity, Quality, Novelty And Reproducibility
Clarity: the writing and paper quality have much room for improvement. See Weaknesses-(2)
Novelty: The novelty is limited. See Weaknesses-(1)
Reproducibility: no code is provided, hence there is no way to verify reproducibility.
1. What is the main contribution of the paper in improving the Pareto frontier of recall vs IO in clustering-based algorithms for nearest neighbor search?
2. What are the strengths and weaknesses of the proposed approach in addressing the identified problem?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the scale of experiments, training data requirements, and comparisons with related benchmarks?
5. How does the reviewer suggest improving the proposal, such as comparing it to DiskANN and co-learning the decision boundaries of clustering-based ANN and the embedding model itself?
Summary Of The Paper
Background: Clustering-based algorithms are widely used for high-dimensional approximate nearest neighbor search. In these methods, points to be indexed are packed into some clustering-based data structure -- each point is assigned to one or more centroids in a clustering scheme. At search time, the query is compared to clusters, and then to all the index points assigned to a subset of these clusters. While such indices are scalable (up to 1B points per machine, as in FAISS IVFPQ or ScaNN), they suffer from a common problem -- the nearest neighbor of the query does not necessarily belong to the cluster corresponding to the centroids that are closest to the query. This is especially true in high dimensions, and it limits the recall vs compute/IO efficiency of these algorithms. One such work in the area is SPANN, which copes with this problem by sending index points on cluster boundaries to multiple clusters. This is used to answer queries at ~90% recall on billion-scale indices hosted on SSD with a limited number of probes to SSD.
This paper proposes a way of improving the Pareto-optimal frontier of recall vs IO in this clustering class of algorithms using simple 3-layer neural networks to predict which centroid to "expand" for each query. The network is trained with lots of sample query data. The main result is that on million-scale datasets, the paper demonstrates significant improvement in IO to achieve a recall target over SPANN. No data is presented for billion-scale datasets, and a discussion of the data requirements for training the neural network is lacking.
Strengths And Weaknesses
Strengths:
The paper correctly identifies the problem holding back clustering-based ANNS algorithms and attempts to solve it.
The proposal for addressing the problem seems reasonable, although there are some related ideas that have been tried previously (some of which are cited in the paper). The improvement over the chosen baseline, if not the best baseline, is non-trivial.
Exposition is clear.
Weaknesses:
Scale of experiments: the authors set up baselines from a method aimed at billion-scale data, but show their evaluation only at million scale. This is really weird. Further, at million scale the baseline they choose is really weak. See below for more details.
Discussion on the amount of training data required for the method to work, and on the relation between the magnitude of improvement and the amount of training data, is lacking.
In general, comparisons with related benchmarks are weak.
Clarity, Quality, Novelty And Reproducibility
Areas for improvement:
List the prediction time of the 3-layer neural network on both CPU and GPU and compare it to the cluster prediction time of IVF and SPANN.
"Although they compensate the deterioration of recall due to lossy compression by combining reranking using full precision vector data, the recall-latency performance is inferior to SPANN" -- its unclear that this statement holds in a multi-threaded environment or if DiskANN is well tuned or at high recalls. In recent billion-scale ANN challenges (big-ann-benchmarks.com), DiskANN graphs have been a strong baseline for both standard and specialized hardware tracks (https://www.intel.com/content/www/us/en/developer/articles/technical/winning-neurips-billion-scale-ann-search-challenge.html#gs.g3f11m). While the SPANN paper claims better VQ than DiskANN in their abstract, Fig 7 in SPANN paper plotting VQ does not even evaluate DiskANN. Further, the latency of DiskANN in-memory is better than HNSW (see ann-benchmarks) on SIFT dataset that you measure. Therefore, it would be good to analyze the IO of your proposal against DiskANN.
There has been literature on co-learning the decision boundaries of clustering-based ANN and the embedding model itself. Please consider citing and comparing. The paper you cite in this area (BLISS) does not compare to baselines appropriately (e.g., they under-report QPS for HNSW by a large margin), so it would be good to find other, more suitable representatives.
Areas of concern:
Please analyze the sample complexity required to achieve gains over distance-based cluster prediction. In your evaluation you use almost as many sample "query points" as index points. Is this reasonable at billion scale?
If you can learn to predict the decision boundary given enough query samples, why does the alignment of the query and index distributions matter? (This question concerns your comments in Section 5.)
At million scale, 10K comparisons is quite bad for 90% recall. Vamana or HNSW, for example, need about 500 to 1000 distance comparisons to achieve 90% recall@1 and recall@10, respectively. So it's unclear what value your method adds at this scale.
Please do extend your methods to billion scale using the datasets at big-ann-benchmarks.com (there are 6 datasets, some with a large number of samples).
ICLR | Title
On Storage Neural Network Augmented Approximate Nearest Neighbor Search
Abstract
Large-scale approximate nearest neighbor search (ANN) has been gaining attention along with the latest machine learning researches employing ANNs. If the data is too large to fit in memory, it is necessary to search for the most similar vectors to a given query vector from the data stored in storage devices, not from that in memory. The storage device such as NAND flash memory has larger capacity than the memory device such as DRAM, but they also have larger latency to read data. Therefore, ANN methods for storage require completely different approaches from conventional in-memory ANN methods. Since the approximation that the time required for search is determined only by the amount of data fetched from storage holds under reasonable assumptions, our goal is to minimize it while maximizing recall. For partitioning-based ANNs, vectors are partitioned into clusters in the index building phase. In the search phase, some of the clusters are chosen, the vectors in the chosen clusters are fetched from storage, and the nearest vector is retrieved from the fetched vectors. Thus, the key point is to accurately select the clusters containing the ground truth nearest neighbor vectors. We accomplish this by proposing a method to predict the correct clusters by means of a neural network that is gradually refined by alternating supervised learning and duplicated cluster assignment. Compared to state-of-the-art SPANN and an exhaustive method using k-means clustering and linear search, the proposed method achieves 90% recall on SIFT1M with 80% and 58% less data fetched from storage, respectively.
1 INTRODUCTION
Large-scale Approximate Nearest Neighbor searches (ANNs) for high-dimensional data are receiving growing attentions because of their appearance in emerging directions of deep learning research. For example, in natural language processing, methods leveraging relevant documents retrieval by similar dense vector search have significantly improved the scores of open-domain question-answering tasks (Karpukhin et al., 2020). Also for language modeling tasks, Borgeaud et al. (2021) showed a model augmented by retrieval from 2 trillion tokens performs as well as 25 times larger models. In computer vision, Nakata et al. (2022) showed that image classification using ANNs has potential to alleviate catastrophic forgetting and improves accuracy in continual learning scenarios. In reinforcement learning, exploiting past experiences stored in external memory for an agent to make better decisions has been explored (Blundell et al., 2016; Pritzel et al., 2017). Recently, Goyal et al. (2022) and Humphreys et al. (2022) attempted to scale up the capacity of memory with the help of ANN and showed promising results.
ANNs are algorithms to find one or k key vectors that are the nearest to a given query vector among a large number of key vectors. Strict search is not required but higher recall with lower latency is demanded. In order to achieve this, an index is generally build by data-dependent preprocessing. Thus, an ANN method consists of the index building phase and the search phase. As the number of key vectors increases, storing all of them in memory (e.g. DRAM) becomes very expensive, and it is forced to store the vectors in storage devices such as NAND flash memory. In general, a storage device has much larger capacity per cost, but also its latency is much larger than memory. When all the key vectors are stored in memory, as seen in the most papers regarding ANN, it is effective to reduce the number of calculating distance between vectors by employing graph-based (Malkov & Yashunin, 2018; Fu et al., 2019) or partitioning-based methods (Dong et al., 2019) and/or to
reduce the time for each distance calculation by employing quantization techniques such as Product Quantization (Jégou et al., 2011a). On the other hand, when the key vectors are stored in storage rather than memory, the latency for fetching data from the storage becomes the dominant contributor in the total search latency. Therefore, the ANN method for the latter case (ANN method for storage) needs a completely different approach from the ANN method for the former case (all-in-memory ANN method). In this paper, we identify the most fundamental challenges of the ANN method for storage, and explore ways to solve them.
Since the latency for fetching data is approximately proportional to the amount of fetched data, it should be good strategy to reduce the number of vectors to be fetched during search as much as possible, while achieving high recall. Although SPANN (Chen et al., 2021), which is the state-ofthe-art ANN method for storage, is also designed with the same strategy, our investigation reveals that the characteristics of its index are still suboptimal from the viewpoint of the number of fetched vectors under a given recall. Moreover, perhaps surprisingly, an exhaustive method combining simple k-means clustering and linear search can perform better than SPANN on some dataset in terms of this metrics.
Another consideration worth noting is the exploitation of the efficient, high-throughput computation of GPUs. It is reasonable to assume that an ANN algorithm used in a deep learning application runs on the same system as the deep learning algorithm, and most deep learning algorithms are designed to run on systems equipped with GPUs. Therefore, the ANN algorithm will also run on a system with GPUs.
In partitioning-based ANN methods, key vectors are partitioned into clusters, referred to as posting lists in the SPANN paper, usually based on their proximity in the index building phase. In the search phase, first, some clusters that would contain the desired vector with high probability are chosen according to the distance between the query vector and the representative vector of each cluster. Second, the vectors in the chosen clusters are fetched from storage. Since the page size of storage devices is relatively large (e.g., 4KB), it is efficient to make a page contain vectors of the same cluster, as other ANN methods for storage (Chen et al., 2021; Jayaram Subramanya et al., 2019) also do. Third, by computing distances between the query and the fetched vectors, the k closest vectors are identified and output. To achieve high recall with low latency, we need to increase the accuracy of choosing the correct cluster containing the ground truth nearest vector in the first step of the search phase.
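A minimal sketch of this three-step search phase is given below, assuming numpy arrays and a hypothetical fetch_cluster routine that reads one cluster's vectors from storage; the names are illustrative, not the SPANN implementation.

import numpy as np

def search(query, centroids, fetch_cluster, nprobe=4, k=1):
    # Step 1: choose the nprobe clusters whose representative vectors are closest.
    chosen = np.argsort(np.linalg.norm(centroids - query, axis=1))[:nprobe]
    # Step 2: fetch the vectors of the chosen clusters from storage (dominant cost).
    ids, vecs = [], []
    for c in chosen:
        cluster_ids, cluster_vecs = fetch_cluster(c)  # assumed storage read
        ids.append(cluster_ids)
        vecs.append(cluster_vecs)
    ids, vecs = np.concatenate(ids), np.vstack(vecs)
    # Step 3: rank the fetched vectors by distance to the query.
    order = np.argsort(np.linalg.norm(vecs - query, axis=1))[:k]
    return ids[order]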
If clustering is done by the k-means algorithm and a query vector is picked from the existing key vectors, the cluster whose centroid vector is closest to the query always contains the nearest neighbor key vector, by the definition of k-means. However, when a query is not exactly equal to any key vector, which is the common case for ANN, the cluster whose representative vector is closest to the query often does not contain the nearest neighbor vector, and this limits recall. To the best of our knowledge, this problem has not been explicitly addressed in the literature. Our intuition for tackling this problem is that the border lines that determine which cluster contains the nearest neighbor vector of a query are different from, and more complicated than, the border lines that determine the assignment of clusters to key vectors. We employ neural networks trained on given clustered key vectors to predict the correct cluster among the clusters defined by these complicated border lines. Assigning multiple clusters to a key vector is a conventional technique to improve the accuracy of choosing the correct cluster. Combining this duplication technique with our method can provide the additional effect of relaxing the demands on the neural network by simplifying the border lines. We demonstrate how our method works with a visualization on 2-D toy data, and then empirically show that it is effective for realistic data as well. Our contributions include:
• We clarify that we need to reduce the amount of data fetched from storage when key vectors sit on storage devices, because the latency for fetching data is the dominant part of the search latency.
• We are the first to explicitly point out and address the problem that, in partitioning-based ANN methods, the nearest cluster often does not contain the nearest neighbor vector of a query vector, which limits the recall-latency performance.
• We propose a new ANN method combining cluster prediction with neural networks and duplicated cluster assignment, and show empirically that the proposed method improves the performance on two realistic million-scale datasets.
2 RELATED WORKS
ANN for storage. DiskANN (Jayaram Subramanya et al., 2019) is a graph-based ANN method for storage. The connection information defining the graph structure and the full-precision vectors are stored on storage, while vectors compressed by Product Quantization (Jégou et al., 2011a) are stored in memory. The algorithm traverses the graph by reading only the connection information on the path from storage and computes distances between a query and the compressed vectors in memory. Although the deterioration of recall due to lossy compression is compensated by reranking with the full-precision vectors, the recall-latency performance is inferior to SPANN (Chen et al., 2021). SPANN is another method dedicated to ANN for storage and exhibits state-of-the-art performance. It employs a partitioning-based approach. By increasing the number of clusters as much as possible, it reduces the number of vectors fetched from storage under a given recall. To keep the latency of choosing clusters low during search even when the number of clusters is large, it employs the SPTAG algorithm, which combines tree-based and graph-based ANNs. It also proposes an efficient duplication method aimed at increasing the probability that a chosen cluster contains the ground truth key vector. However, our investigation in Section 3.2.2 shows that its performance can be worse than a naive exhaustive method on some datasets.
ANN with GPU. FAISS (Johnson et al., 2019) supports many ANN algorithms accelerated by the GPU's massively parallel computing; on-storage search is also discussed in their project page. SONG (Zhao et al., 2020) optimized graph-based ANN for GPUs, modifying the algorithm so that distance computations can be parallelized as much as possible and showing significant speedups. However, it assumes only all-in-memory scenarios.
ANN with neural networks. DSI (Tay et al., 2022) predicts the indices of the nearest neighbor key vectors directly from the query vectors with a neural network; it is also targeted at all-in-memory ANN. We instead use neural networks to predict the clusters containing the nearest neighbor vector rather than the vector indices themselves. BLISS (Gupta et al., 2022) and NeuralLSH (Dong et al., 2019) are methods that improve the partitioning rule using neural networks, and they apply the same rule to a query for choosing clusters. As depicted in Section 4.1, when the rule for partitioning keys is employed to choose clusters, the chosen clusters often do not contain the ground truth key vector. Our method, where a neural network is trained to predict the correct cluster for a given query, is orthogonal and can be combined with these methods.
3 PRELIMINARIES
3.1 SYSTEM ENVIRONMENT
In this paper, we assume that the system on which our ANN algorithm runs has GPUs and storage devices in addition to CPUs and memory. The GPU provides high-throughput computing through massively parallel processing. The storage can hold a larger amount of data at lower cost, but has larger read latency than memory devices. Data that are commonly used for all queries, e.g., the representative vectors of all clusters, are loaded in advance into memory, from which the CPU or GPU can read with low latency, and stay there throughout the search. On the other hand, all the key vectors reside on the storage devices, and for simplicity we assume that data fetched from storage for one query are not cached in memory for reuse by another query.
3.2 METRICS
3.2.1 THE NUMBER OF FETCHED VECTORS AS A PROXY METRIC OF LATENCY
Our goal is to minimize the average search time per query for nearest neighbor search, which we refer to as mean latency, and simultaneously to maximize the recall. Without loss of generality, the mean latency T in the systems described in the previous subsection is expressed by the following equation,
T = Ta + Tb + Tc,
where Ta is the latency for computations using data that always sit in memory, Tb is the latency required for fetching data from storage, and Tc is the latency for computations using data fetched from storage for each query. For example, in a partitioning-based ANN method such as SPANN, Ta is the latency of determining the clusters (called posting lists in SPANN) to be fetched from storage, Tb is the latency of fetching the vectors in the chosen clusters from storage, and Tc is the latency of the computations that find the nearest neighbor vectors among the fetched vectors. In this paper, for simplicity, assuming that Ta ≪ Tb and Tc ≪ Tb, we employ the approximation T ≈ Tb. Then, since Tb is roughly proportional to the number of fetched vectors, the number of fetched vectors is an effective metric for evaluating the mean latency.
The above assumptions are reasonable in a realistic setting. For Tc, the computing performance of CPUs equipped with vector arithmetic units and of GPUs capable of massively parallel operations ranges from several hundred GFLOPS to more than a TFLOPS. On the other hand, the read bandwidth of storage devices is at best a few GB/s even when high-speed NVMe is used. This means the data fetched during Tb can be processed in less than Tb/100. Note that if most of the processing for Tc is executed in parallel with that for Tb, for example by performing distance calculations in the background of asynchronous storage accesses, the effective Tc becomes almost zero. For Ta, when an exhaustive linear search is used to choose the clusters, i.e., calculating the distance between the query and the representative vectors of all clusters to find the closest ones, a 10-TFLOPS GPU can process 10 million 100-dimensional representative vectors within a time much shorter than Tb = 1 ms. Also in the SPANN case without GPUs, since a fast algorithm combining tree-based and graph-based methods is applied to choose the clusters, Ta is quite short even when the number of posting lists is as large as a few hundred million. As a typical example, Table 1 shows the measured Ta, Tb, and Tc on the SIFT1M dataset.
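Under the approximation T ≈ Tb, the mean latency can be estimated directly from the number of fetched vectors; a back-of-the-envelope sketch follows, where the bandwidth, dimension, and precision values are assumptions for illustration only.

def estimated_latency_s(num_fetched, dim=128, bytes_per_value=4,
                        read_bandwidth_bytes_per_s=2e9):
    # T ~= Tb: bytes fetched from storage divided by storage read bandwidth
    return num_fetched * dim * bytes_per_value / read_bandwidth_bytes_per_s

# e.g., 30,000 fetched 128-d float32 vectors at 2 GB/s -> about 7.7 ms
print(estimated_latency_s(30_000))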
3.2.2 MEMORY USAGE
Since memory usage greatly affects the latency of in-memory ANNs, VQ (Vector-Query), a measure of throughput normalized by memory usage, was introduced in GRIP (Zhang & He, 2019) to compare algorithms with different memory usage as fairly as possible, and is also utilized in SPANN (Chen et al., 2021). SPANN claims superior capacity in large vector search scenarios because its VQ value is greater than that of other algorithms. Here, we consider whether VQ is really a fair metric for comparing methods with different memory usage. In a partitioning-based ANN method for storage, memory capacity limits the number of clusters, since the representative vectors of all clusters must be kept in memory during search. We therefore investigate how the number of clusters affects the recall-latency and recall-VQ curves. Figure 1(a) shows the recall versus the number of fetched vectors when the simplest k-means and linear search method (IVFFlatIndex of FAISS) is utilized; clearly, the number of fetched vectors decreases significantly as the number of clusters increases. As shown in Figure 1(b), even under the VQ metric, the VQ value at recall@1=90% varies greatly depending on the number of clusters. This indicates that VQ is not a suitable metric for comparing algorithms with different memory usage. Based on the above discussion, this paper evaluates ANN methods for storage by the number of fetched vectors under a given recall and a given number of clusters.
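The cluster-count experiment above can be reproduced with FAISS along the following lines; this is a sketch assuming float32 numpy arrays keys, queries, and ground-truth ids gt, with the fetched-vector count approximated by nprobe times the average cluster size.

import faiss
import numpy as np

def fetched_at_recall(keys, queries, gt, nlist, target_recall=0.9):
    d = keys.shape[1]
    quantizer = faiss.IndexFlatL2(d)
    index = faiss.IndexIVFFlat(quantizer, d, nlist)
    index.train(keys)
    index.add(keys)
    for nprobe in range(1, nlist + 1):
        index.nprobe = nprobe
        _, labels = index.search(queries, 1)
        if np.mean(labels[:, 0] == gt) >= target_recall:
            # rough proxy: nprobe clusters of average size N/nlist are fetched
            return nprobe * keys.shape[0] / nlist
    return float('inf')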
Another finding of this investigation is that the number of fetched vectors under a given recall of SPANN is not always smaller than that of the exhaustive method, as shown in Figure 1(a).
4 PROPOSED METHOD
4.1 VISUALIZATION WITH 2-DIMENSIONAL TOY DATA
In this section, we explain the intuition behind our proposed method and visually demonstrate how it improves the accuracy of choosing the correct cluster. One hundred key vectors are uniformly sampled from the 2-dimensional space [−1, 1]². In the index building phase, we divide the key vectors into four clusters. The representative vectors of the clusters are placed at (x, y) = (1/2, 1/2), (−1/2, 1/2), (−1/2, −1/2), (1/2, −1/2). We then assign one cluster to each key vector according to Euclidean distance, i.e., the distance of a key vector to the representative vector of its assigned cluster is smaller than its distance to the others. Figure 2(a) shows the key vectors colored by assigned cluster. In a conventional method, when a query is given in the search phase, we choose the cluster whose representative vector is closest to the query among the four clusters, in the same manner as clusters were assigned to key vectors in the index building phase. Figure 2(b) shows 10000 queries colored by the cluster chosen for each query; the clusters are clearly divided into the four quadrants, as expected. On the other hand, Figure 2(c) shows the queries colored by the correct cluster, i.e., the one containing the nearest neighbor key vector of each query. We can see that the nearest neighbor of a query in the first quadrant, x > 0, y > 0, can be contained in the green cluster whose representative vector is in the second quadrant, x < 0, y > 0. As a result, the true border lines of the clusters for queries are quite complex. In Figure 2(d), queries for which the wrong cluster is chosen are shown in gray. When the chosen cluster does not contain the nearest neighbor vector, we need to fetch vectors from another cluster to increase recall. This phenomenon increases the number of fetched vectors under a given recall and deteriorates the recall-latency tradeoff.
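The mismatch between the assigned and the correct clusters in this toy setup can be measured in a few lines; a sketch under the setup described above (the random seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
keys = rng.uniform(-1, 1, size=(100, 2))
centroids = np.array([[.5, .5], [-.5, .5], [-.5, -.5], [.5, -.5]])
assign = np.argmin(np.linalg.norm(keys[:, None] - centroids[None], axis=2), axis=1)

queries = rng.uniform(-1, 1, size=(10000, 2))
# cluster chosen by the conventional nearest-centroid rule
chosen = np.argmin(np.linalg.norm(queries[:, None] - centroids[None], axis=2), axis=1)
# cluster that actually contains each query's nearest neighbor key
nearest_key = np.argmin(np.linalg.norm(queries[:, None] - keys[None], axis=2), axis=1)
print("wrong-cluster rate:", np.mean(chosen != assign[nearest_key]))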
Therefore, improving the accuracy of choosing the correct cluster is the fundamental challenge. We attempt to accurately predict the complex border lines with a neural network trained with the objective of choosing the correct cluster. We use a simple three-layer MLP: the input dimension equals the dimension of the query and key vectors, the output dimension equals the number of clusters, and the dimension of the hidden layer is set to 128 in this experiment. The query vectors for training are sampled independently from the same distribution every epoch. The ground truth cluster is found by exhaustive search for each training query. Using these pairs of query and ground truth cluster as training data, we train the neural network with the cross-entropy loss in a supervised manner. Note that this is a data-dependent method, because we look into the clustered key vectors when generating the ground truth labels. Figure 3 shows that the prediction of the neural network approaches the correct border lines of the clusters as training proceeds. These results indicate that we can improve the accuracy of choosing the correct cluster by employing a data-dependently trained neural network.
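A sketch of this cluster predictor in PyTorch is shown below; the hidden size of 128 and the per-epoch resampling of training queries follow the text, the optimizer, epoch count, and batch size follow Section 4.2, and sample_queries and true_cluster_of are assumed helpers (the latter performing the exhaustive ground-truth search).

import torch
import torch.nn as nn

def make_predictor(dim, num_clusters, hidden=128):
    # three-layer MLP: dim -> hidden -> hidden -> num_clusters
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, num_clusters))

model = make_predictor(dim=128, num_clusters=1000)
opt = torch.optim.AdamW(model.parameters())
loss_fn = nn.CrossEntropyLoss()

for epoch in range(150):
    q = sample_queries()    # fresh training queries each epoch (assumed helper);
                            # per-iteration Gaussian noise can be added here
    y = true_cluster_of(q)  # ground-truth cluster ids (assumed helper)
    for i in range(0, len(q), 1000):
        loss = loss_fn(model(q[i:i + 1000]), y[i:i + 1000])
        opt.zero_grad()
        loss.backward()
        opt.step()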
4.2 EXPERIMENT RESULTS
In this section, we describe experiments using SIFT (Jégou et al., 2011a;b) and CLIP (Radford et al., 2021) data to demonstrate that our proposed method is useful for realistic data.
Dataset. For SIFT1M, we use the one million 128-dimensional SIFT1M base vectors as key vectors. Another one million vectors are sampled from the SIFT1B base data and used as query vectors for training, and the SIFT1B query data are used as query vectors for testing. Euclidean distance is employed as the metric. For CLIP, we extracted feature vectors from the 1.28 million ImageNet (Deng et al., 2009) training images with the ViT B/16 model (Dosovitskiy et al., 2021). Although the feature dimension of the model is 512, we use the first 128 dimensions in our experiment. We split the data into 0.63 million, 0.64 million, and 0.01 million vectors for keys, training queries, and test queries, respectively. Cosine similarity is employed as the metric.
Comparison with conventional methods. We compare our method with two conventional methods. The first is the exhaustive method, where key vectors are partitioned by k-means in the index building phase and, in the search phase, the distances from a query to the representative vectors of all clusters are calculated and the cluster with the closest representative vector is chosen. The second is SPANN (Chen et al., 2021). For SPANN, we build the index with the algorithm implemented in SPANN, which includes the partitioning process. Since SPANN proposes multiple cluster assignment for improving recall, we set the ReplicaCount to its default value 8, which means one key vector is contained in at most 8 clusters. As described in Section 3.2.2, we set the number of clusters to 1000 for all methods, including our proposed method, for fair comparison.
Neural network structure. We employ a three-layer MLP. For fair comparison, we carefully design the neural network so that it requires no more memory than the conventional methods. As described in Section 3.2.2, if we employed a larger memory budget, we could significantly improve recall simply by increasing the number of clusters with that budget. Concretely, we set the hidden layer size to 128, the same as the vector dimension. The number of parameters of the output layer then dominates the three layers, at 128 × 1000, where 1000 is the number of clusters. The memory usage of this output layer equals that of the representative vectors of all clusters, which the conventional methods require anyway. Regarding Tc for the neural network inference, the computation in the largest, last layer is very close to that of the distance calculations between query vectors and all representative vectors, and its latency is much smaller than the latency for fetching vectors, as shown in Table 1. Thus Tb remains dominant even when the neural network is employed for search.
Training overview. We train the neural network for 150 epochs with the AdamW (Loshchilov & Hutter, 2019) optimizer, with a batch size of 1000. To avoid overfitting, at every iteration we add noise sampled from a normal distribution to the training data. Every 50 epochs, some key vectors are added to another existing cluster; we call this process duplication.
Detail of duplication process. For a training query vector, if none of the top-kd clusters predicted by the neural network contains the ground truth key vector, the pair of the top-1 cluster and the ground truth key vector is marked as a candidate pair for duplication. After checking all training query vectors, we additionally put the key vector into the corresponding cluster for the most frequently marked rd% of pairs. kd and rd are hyperparameters, set to 4 and 20 by default, respectively.
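One duplication round as described above can be sketched as follows, with the default kd = 4 and rd = 20%; clusters (a dict from cluster id to a set of key ids) and predict_topk (the network's ranked cluster prediction) are assumed helpers.

from collections import Counter

def duplication_round(train_queries, gt_keys, clusters, predict_topk, kd=4, rd=0.2):
    marks = Counter()
    for q, key in zip(train_queries, gt_keys):
        topk = predict_topk(q, kd)                 # kd clusters ranked by the network
        if not any(key in clusters[c] for c in topk):
            marks[(topk[0], key)] += 1             # candidate: (top-1 cluster, gt key)
    # duplicate the key into the cluster for the most frequently marked rd fraction
    for (c, key), _ in marks.most_common(int(rd * len(marks))):
        clusters[c].add(key)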
Loss function. We compare three loss functions. The first is the naive cross-entropy loss (CE). Although multiple clusters can contain the nearest key vector after duplication, CE cannot handle multiple positive labels, so we must pick only one cluster as ground truth; we always use the initial ground truth cluster as the single positive. The second is a modified version of the cross-entropy loss (MCE) that picks, among the positive clusters, the one to which the neural network gives the largest score. The third is the binary cross-entropy (BCE) loss, which can handle multiple positive labels.
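The three losses differ only in how the positives enter the objective. A sketch of the MCE variant follows: among the positive clusters it selects the one the network currently scores highest and applies cross-entropy against it; pos_mask is an assumed 0/1 matrix marking, per query, the clusters that contain the ground truth key.

import torch
import torch.nn.functional as F

def mce_loss(logits, pos_mask):
    # mask out non-positive clusters, then take the best-scoring positive as target
    masked = logits.masked_fill(pos_mask == 0, float('-inf'))
    target = masked.argmax(dim=1)
    return F.cross_entropy(logits, target)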
Results. The results are shown in Figure 4, Table 2, and Table 3. The figure includes the results of 10 trials each; the tables show the averages and standard deviations over the 10 trials. Our proposed method achieves the smallest number of vectors fetched from storage at every recall value, which means that it will provide the smallest mean latency when the latency of storage access is dominant. At 90% recall on the SIFT data, the number of vectors read from storage is 58% and 80% smaller than that of the exhaustive method and SPANN, respectively. For the CLIP data, a steady improvement is also obtained.
4.3 ABLATION STUDY
4.3.1 EFFECT OF EACH INGREDIENT
Table 4 shows how much each ingredient of our proposed method improves the metric. We report the average and standard deviation of the number of fetched vectors across 10 trials for each condition. Rows (a) to (c) compare the loss functions for training. Both MCE and CE show good performance, but CE is better at R@1 ≤ 0.95 and MCE is better at R@1 = 0.99. The BCE loss deteriorates the performance, and an increase in variance is observed. Row (d) is the conventional exhaustive method using linear search for choosing clusters, which employs neither neural networks nor duplication. Row (e) employs only the neural network and (f) only duplication. Comparing (a, d, e, f), both the neural network and duplication contribute to improving performance. Row (g) shows the effect of the hyperparameter kd explained in Section 4.2: increasing kd decreases the number of fetched vectors. In (h), we execute duplication only once, after the 150-epoch training is completed. The result is worse than in the default setting, where duplication is executed every 50 epochs. This indicates that executing the duplication process during training can reduce the complexity of the cluster border lines and help the neural network fit them. Row (i) shows the result when we use the clustering obtained after 150 epochs of training and duplication in setting (a), but choose the clusters to fetch by linear search over the updated centroid vectors of all clusters; this shows that searching with the neural network inference is advantageous. From this experiment, we can see that although each ingredient of our proposed method alone can significantly reduce the number of fetched vectors under a given recall compared to the conventional method, further improvement is obtained by combining them.
4.3.2 BUILDING INDEX BY SPANN
In our experiments, we use k-means for partitioning, but it is an exhaustive method and can take too much time for larger datasets. To confirm that k-means is not a necessary component of our method, we apply the partitioning algorithm of SPANN instead. As a result, the number of fetched vectors at 90% recall significantly improves from 30729±1749 to 7372±162. This indicates that we may utilize a fast algorithm such as SPANN for clustering, instead of exhaustive k-means, when applying our proposed method to larger datasets.
5 LIMITATIONS AND FUTURE WORKS
Since the discussion in this paper assumes that the mean search latency is determined by the storage access time, it may become invalid when this condition is not satisfied.
For the CLIP data, the improvement over the exhaustive method is steady but marginal, as shown in Figure 4. The difference in the amount of improvement between SIFT and CLIP may come from how well the training data reproduce the query distribution. This means the effectiveness of the proposed method could be limited when the query distribution is close to uniform and not predictable. Although this is a common issue for almost all ANN methods, addressing such difficult use cases remains future work.
Our proposed method has a couple of hyperparameters. Although we show some of their effects in Section 4.3, thorough optimization is future work; the best values may depend on the data distribution and the required recall. However, it is not difficult to find acceptable hyperparameter values that provide at least better performance than the exhaustive method.
Another obvious piece of future work is to apply the proposed method to larger datasets, such as billion-scale or trillion-scale ones. We believe that the findings and directions we present in this paper will be useful for them as well.
6 CONCLUSION
We investigated what is required to improve the recall-latency tradeoff of large-scale approximate nearest neighbor search under the condition that the key vectors are stored on storage devices with large capacity and large read latency. We pointed out that, to achieve this, we need to reduce the number of vectors fetched from the storage devices during search, and thus to choose the correct clusters containing the nearest neighbor key vector of a given query with high accuracy. We proposed to use neural networks to predict the correct cluster. With the proposed method, we reduced the number of vectors read from storage by more than 58% at 90% recall on the SIFT1M data compared to the conventional methods. | 1. What is the focus of the paper regarding approximate nearest neighbor search?
2. What are the strengths of the proposed approach, particularly in combining cluster prediction and neural networks?
3. What are the weaknesses of the paper, especially regarding the experimental settings and scalability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the method's ability to handle high-dimensional data?
6. Is there a need to include other competing methods, such as BLISS, for comparison?
7. Does the reviewer have any doubts about the significance of the problem addressed by the authors? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors study the approximate nearest neighbor search problem under the condition that the key vectors are stored on storage devices with large capacity and large read latency. They point out that the critical point is to accurately select the clusters containing the ground truth nearest neighbor vectors, and show with a vivid example that the true border lines of the clusters for a query are quite complex. They propose a method to predict the correct clusters by means of a neural network that is gradually refined by alternating supervised learning and duplicated cluster assignment. Experimental results confirm the superior performance of the proposed method.
Strengths And Weaknesses
The paper has the following strengths:
S1. They point out, with a vivid example, that in partitioning-based ANN methods the nearest cluster often does not contain the nearest neighbor vector of a query, and that this problem limits the recall-latency performance.
S2. They propose an ANN method combining cluster prediction with neural networks and duplicated cluster assignment, and show empirically that the proposed method improves the performance on two realistic million-scale datasets.
The paper has the following weaknesses:
W1. The paper studies the problem of storage-based ANN search. However, the two datasets used in the experiments are only million-scale in cardinality, which does not seem to fit the problem setting. On the other hand, the size of the training corpus used in the experiments almost matches the size of the key vectors. It seems that the method requires a large training corpus and does not scale well to large datasets. This also contradicts the setting of storage ANN search, where the key vectors are too large to fit in memory.
W2. It is not clear to me whether the data dimension has any relation to the size of the hidden layer in the neural network. For CLIP, it is unclear why only 128 out of 512 dimensions are used, and whether there is any accuracy loss due to this setting. Can the method scale well with high dimensionality? As an ANN search method based on neural networks, I think it is also necessary to include BLISS as one of the competitors and show a comparison of the training cost.
W3. The problem that the nearest cluster does not always contain the nearest neighbor vector is quite straightforward; that is why most ANN methods retrieve a few clusters instead of just one. As for the trade-off between recall and latency, I wonder if it could be solved with a finer granularity of clusters, i.e., more clusters. And I suspect the proposed method also requires searching more than one cluster to achieve a certain recall level. Thus, I am not convinced that pointing this problem out is of great contribution.
Clarity, Quality, Novelty And Reproducibility
Based on the concerns about the weaknesses, it seems to me that the clarity can be improved, and the novelty is limited. |
ICLR | Title
Empirical Risk Landscape Analysis for Understanding Deep Neural Networks
Abstract
This work aims to provide a comprehensive landscape analysis of the empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points, and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an l-layer linear neural network consisting of d_i neurons in the i-th layer, we prove that the gradient of its empirical risk uniformly converges to that of its population risk at the rate of O(r^{2l} √l √(max_i d_i · s log(d/l)/n)). Here d is the total weight dimension, s is the number of nonzero entries of all the weights, and the magnitude of the weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide a convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions, proving similar results for the convergence behavior of their empirical risk gradients, non-degenerate stationary points, and the empirical risk itself. To our best knowledge, this work is the first to theoretically characterize the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding of how the network depth l, the layer width d_i, the network size d, the sparsity in weights, and the parameter magnitude r determine the neural network landscape.
1 INTRODUCTION
Deep learning has achieved remarkable success in many fields, such as computer vision (Hinton et al., 2006; Szegedy et al., 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bakshi & Stephanopoulos, 1993), and speech recognition (Hinton et al., 2012; Graves et al., 2013). However, theoretical understanding of the properties of deep learning models still lags behind their practical achievements (Shalev-Shwartz et al., 2017; Kawaguchi, 2016) due to their high non-convexity and internal complexity. In practice, the parameters of deep learning models are learned by minimizing the empirical risk via (stochastic) gradient descent. Therefore, some recent works (Bartlett & Maass, 2003; Neyshabur et al., 2015) analyzed the convergence of the empirical risk to the population risk, which is however still far from a full understanding of the landscape of the empirical risk in deep learning models. Beyond the convergence of the empirical risk itself, the convergence and distribution properties of its gradient and stationary points are also essential in landscape analysis. A comprehensive landscape analysis can reveal important information on the optimization behavior and practical performance of deep neural networks, and will be helpful for designing better network architectures. Thus, in this work we aim to provide a comprehensive landscape analysis by looking into the gradients and stationary points of the empirical risk.
Formally, we consider a DNN model f(w; x, y) : R^{d_0} × R^{d_l} → R parameterized by w ∈ R^d, consisting of l layers (l ≥ 2), that is trained by minimizing the commonly used squared loss function over sample pairs {(x, y)} ⊂ R^{d_0} × R^{d_l} from an unknown distribution D, where y is the target output for the sample x. Ideally, the model can find its optimal parameter w* by minimizing the population risk through (stochastic) gradient descent with backpropagation:
min_w J(w) := E_{(x,y)∼D} f(w; x, y),
where f(w; x, y) = (1/2)‖v^(l) − y‖₂² is the squared loss associated with the sample (x, y) ∼ D, in which v^(l) is the output of the l-th layer. In practice, as the sample distribution D is usually unknown and only finite training samples {(x_(i), y_(i))}_{i=1}^n i.i.d. drawn from D are provided, the network model is usually trained by minimizing the empirical risk:

min_w Ĵ_n(w) := (1/n) Σ_{i=1}^n f(w; x_(i), y_(i)).    (1)
Understanding the convergence behavior of Ĵ_n(w) to J(w) is critical for statistical machine learning algorithms. In this work, we aim to go further and characterize the landscape of the empirical risk Ĵ_n(w) of deep learning models by analyzing the convergence behavior of its gradient and stationary points to their population counterparts. We provide analysis for both multi-layer linear and nonlinear neural networks. In particular, we obtain the following new results.
• We establish the uniform convergence of the empirical gradient ∇_w Ĵ_n(w) to its population counterpart ∇_w J(w). Specifically, when the sample size n is not less than O(max(l³r²/(ε²s log(d/l)), s log(d/l)/l)), with probability at least 1 − ε the convergence rate is O(r^{2l} √l √(max_i d_i · s log(d/l)/n)), where there are s nonzero entries in the parameter w, the output dimension of the i-th layer is d_i, and the magnitude of the weight parameters of each layer is upper bounded by r. This result implies that as long as the training sample size n is sufficiently large, any stationary point of Ĵ_n(w) is also a stationary point of J(w) and vice versa, although both Ĵ_n(w) and J(w) are very complex.
• We then prove the exact correspondence of non-degenerate stationary points between Ĵ_n(w) and J(w). Indeed, the corresponding non-degenerate stationary points also uniformly converge to each other, at the same convergence rate as revealed above with an extra factor 2/ζ. Here ζ > 0 accounts for the geometric topology of non-degenerate stationary points (see Definition 1).
Based on the above two new results, we also derive the uniform convergence of the empirical risk Ĵ_n(w) to its population risk J(w), which helps understand the generalization error of deep learning models and the stability of their empirical risk. These analyses reveal the role of the depth l of a neural network model in determining its convergence behavior and performance. The results also show that the width factor √(max_i d_i), the number s of nonzero weight entries, and the total network size d are critical to convergence and performance. In addition, controlling the magnitudes of the parameters (weights) in DNNs is shown to be important for performance. To our best knowledge, this work is the first to theoretically characterize the uniform convergence of the empirical gradient and stationary points in both deep linear and nonlinear neural networks.
2 RELATED WORK
To date, only a few theories have been developed for understanding DNNs, which can be roughly divided into the following three categories. The first category aims to analyze the training error of DNNs. Baum (1988) pointed out that zero training error can be obtained when the last layer of a neural network has more units than training samples. Later, Soudry & Carmon (2016) proved that for DNNs with leaky rectified linear units (ReLU) and a single output, the training error achieves zero at any of their local minima as long as the product of the numbers of units in the last two layers is larger than the training sample size.
The second category of works (Dauphin et al., 2014; Choromanska et al., 2015a; Kawaguchi, 2016; Tian, 2017) focuses on analyzing the loss surfaces of DNNs, e.g., how the stationary points are distributed. Such results are helpful for understanding the performance difference between large- and small-size networks (Choromanska et al., 2015b). Among them, Dauphin et al. (2014) experimentally verified that a large number of saddle points indeed exist in DNNs. Under strong assumptions, Choromanska et al. (2015a) connected the loss function of a deep ReLU network with the spherical spin-glass model and described the locations of the local minima. Later, Kawaguchi (2016) proved the existence of degenerate saddle points for deep linear neural networks with the squared loss function, and showed that any local minimum is also a global minimum. Utilizing techniques from dynamical systems analysis, Tian (2017) guaranteed that for two-layer bias-free networks with ReLUs, gradient descent with certain symmetric weight initialization converges globally to the ground-truth weights if the inputs follow a Gaussian distribution. Recently, Nguyen & Hein (2017) proved that for a fully connected network with squared loss and analytic activation functions, almost all local minima are globally optimal if one hidden layer has more units than training samples and the network structure after this layer is pyramidal. Besides, some recent works, e.g., (Zhang et al., 2016; 2017), tried to alleviate analysis difficulties by relaxing the involved highly nonconvex functions into easier ones.
In addition, some existing works (Bartlett & Maass, 2003; Neyshabur et al., 2015) analyze the generalization performance of a DNN model. Based on the Vapnik–Chervonenkis (VC) theory, Bartlett & Maass (2003) proved that for a feedforward neural network with one-dimensional output, the best convergence rate of the empirical risk to its population risk on the sample distribution can be bounded by its fat-shattering dimension. Recently, Neyshabur et al. (2015) adopted Rademacher complexity to analyze the learning capacity of a fully-connected neural network model with ReLU activation functions and bounded inputs.
However, although gradient descent with backpropagation is the most common optimization technique for DNNs, no existing work analyzes the convergence properties of the gradient and stationary points of the DNN empirical risk. For single-layer optimization problems, some previous works analyze the empirical risk but differ essentially from our analysis. For example, Negahban et al. (2009) proved that for a regularized convex program, the minimum of the empirical risk uniformly converges to the true minimum of the population risk under certain conditions. Gonen & Shalev-Shwartz (2017) proved that for nonconvex problems without degenerate saddle points, the difference between the empirical and population risks can be bounded. Unfortunately, the loss of DNNs is highly nonconvex and has degenerate saddle points (Fyodorov & Williams, 2007; Dauphin et al., 2014; Kawaguchi, 2016), so their results are not applicable. Mei et al. (2017) analyzed the convergence behavior of the empirical risk for nonconvex problems, but they only considered single-layer nonconvex problems, and their analysis demands strong sub-Gaussian and sub-exponential assumptions on the gradient and Hessian of the empirical risk, respectively. Their analysis also assumes a linearity property of the gradient which is difficult to hold or verify. In contrast, our analysis requires much milder assumptions. Besides, we prove that for deep networks, which are highly nonconvex, the non-degenerate stationary points of the empirical risk uniformly converge to the corresponding stationary points of the population risk at the rate of O(√(s/n)), which is faster than the rate O(√(d/n)) for single-layer optimization problems in (Mei et al., 2017). Moreover, Mei et al. (2017) did not analyze the convergence rate of the empirical risk, or the stability and generalization error of DNNs, as this work does.
3 PRELIMINARIES
Throughout the paper, we denote matrices by boldface capital letters, e.g., A, vectors by boldface lowercase letters, e.g., a, and scalars by lowercase letters, e.g., a. We define the r-radius ball as B^d(r) := {z ∈ R^d : ‖z‖₂ ≤ r}. To explain the results, we also need the vectorization operation vec(·), defined as vec(A) = (A(:, 1); … ; A(:, t)) ∈ R^{st}, which vectorizes A ∈ R^{s×t} along its columns. We use d = Σ_{j=1}^l d_j d_{j−1} to denote the total dimension of the weight parameters, where d_j denotes the output dimension of the j-th layer.
In this work, we consider both linear and nonlinear DNNs. Suppose both networks consist of l layers. We use u^(j) and v^(j) to respectively denote the input and output of the j-th layer, ∀j = 1, …, l.
Deep linear neural networks: The function of the j-th layer is formulated as

u^(j) := W^(j) v^(j−1) ∈ R^{d_j},  v^(j) := u^(j) ∈ R^{d_j},  ∀j = 1, …, l,

where v^(0) = x is the input and W^(j) ∈ R^{d_j × d_{j−1}} is the weight matrix of the j-th layer.
Deep nonlinear neural networks: We adopt the sigmoid function as the nonlinear activation function. The function within the j-th layer can be written as

u^(j) := W^(j) v^(j−1) ∈ R^{d_j},  v^(j) := h_j(u^(j)) = (σ(u^(j)_1); … ; σ(u^(j)_{d_j})) ∈ R^{d_j},  ∀j = 1, …, l,

where u^(j)_i denotes the i-th entry of u^(j) and σ(·) is the sigmoid function, i.e., σ(a) = 1/(1 + e^{−a}).
Following common practice, both DNN models adopt the squared loss function f(w; x, y) = (1/2)‖v^(l) − y‖₂², where w = (w^(1); … ; w^(l)) ∈ R^d contains all the weight parameters and w^(j) = vec(W^(j)) ∈ R^{d_j d_{j−1}}. The empirical risk Ĵ_n(w) is then

Ĵ_n(w) = (1/n) Σ_{i=1}^n f(w; x_(i), y_(i)) = (1/(2n)) Σ_{i=1}^n ‖v^(l)_(i) − y_(i)‖₂²,

where v^(l)_(i) is the network's output for x_(i).
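A minimal numpy sketch of the two forward passes and the empirical risk Ĵ_n(w) may help fix the notation; it is illustrative only, with X of shape (d_0, n) holding the samples column-wise.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(Ws, X, nonlinear=False):
    # Ws: list of weight matrices W^(1), ..., W^(l); X: inputs v^(0), one per column
    V = X
    for W in Ws:
        V = W @ V                  # u^(j) = W^(j) v^(j-1)
        if nonlinear:
            V = sigmoid(V)         # v^(j) = sigma(u^(j)) entrywise
    return V                       # v^(l)

def empirical_risk(Ws, X, Y, nonlinear=False):
    # J_n(w) = (1/(2n)) * sum_i || v^(l)_(i) - y_(i) ||_2^2
    V = forward(Ws, X, nonlinear)
    return 0.5 * np.sum((V - Y) ** 2) / X.shape[1]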
4 RESULTS FOR DEEP LINEAR NEURAL NETWORKS
We first analyze linear neural network models and present the following new results: (1) the uniform convergence of the empirical risk gradient to its population counterpart, and (2) the convergence properties of non-degenerate stationary points of the empirical risk. As a corollary, we also derive the uniform convergence of the empirical risk to the population one, which further gives stability and generalization bounds. In the next section, we extend the analysis to nonlinear neural network models.
We assume the input datum x is τ²-sub-Gaussian and has bounded magnitude, as formally stated in Assumption 1.
Assumption 1. The input datum x ∈ R^{d_0} has zero mean and is τ²-sub-Gaussian, i.e.,

E[exp(⟨λ, x⟩)] ≤ exp((τ²/2)‖λ‖₂²),  ∀λ ∈ R^{d_0}.

Besides, the magnitude of x is bounded as ‖x‖₂ ≤ r_x, where r_x is a positive universal constant.
Note that any random vector z consisting of independent entries with bounded magnitude is sub-Gaussian and satisfies Assumption 1 (Vershynin, 2012). Moreover, for such a random z, we have τ = ‖z‖_∞ ≤ ‖z‖₂ ≤ r_x. The bounded-magnitude assumption generally holds for natural data, e.g., images and speech signals. Besides, we assume the weight parameters w^(j) of each layer are bounded, i.e., w ∈ Ω = {w | w^(j) ∈ B^{d_j d_{j−1}}(r_j), ∀j = 1, …, l}, where r_j is a constant. For notational simplicity, we let r = max_j r_j. Such an assumption is common (Xu & Mannor, 2012). We also assume the entries of y fall in [0, 1]; any bounded target output y can be scaled to satisfy this requirement.
The results presented for linear neural networks here can be generalized to deep ReLU neural networks by applying the results from Choromanska et al. (2015a) and Kawaguchi (2016), which transform deep ReLU neural networks into deep linear neural networks under proper assumptions.
4.1 UNIFORM CONVERGENCE OF EMPIRICAL RISK GRADIENT
We first analyze the convergence of gradients for the DNN empirical and population risks. To our best knowledge, these are the first results giving guarantees on gradient convergence, which helps to better understand the landscape of DNNs and their optimization behavior. The results are stated below.
Theorem 1. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there exist two universal constants c_g′ and c_g such that if n ≥ c_g′ max(l³r²r_x⁴/(c_q s log(d/l) ε²τ⁴ log(1/ε)), s log(d/l)/(lτ²)), where c_q = √(max_{0≤i≤l} d_i), then

sup_{w∈Ω} ‖∇Ĵ_n(w) − ∇J(w)‖₂ ≤ ε_g := c_g τ ω_g √(l c_q) √((s log(dn/l) + log(12/ε))/n)

holds with probability at least 1 − ε, where s denotes the number of nonzero entries of all weight parameters and ω_g = max(τ r^{2l−1}, r^{2l−1}, r^{l−1}).
From Theorem 1, one can observe that with an increasingly large sample size n, the difference between the empirical and population risk gradients decreases monotonically at the rate of O(1/√n) (up to a log factor). Theorem 1 also characterizes how the depth l contributes to the difference between the empirical and population risk gradients: a deeper neural network needs more training samples to keep the difference small. Also, due to the factor d, training a larger network by gradient descent requires more training samples. We further observe a factor of √(max_i d_i) (i.e., c_q), which prefers a DNN architecture with balanced layer sizes (without extremely wide layers). This result matches the trend and empirical performance in deep learning applications advocating deep but thin networks (He et al., 2016; Szegedy et al., 2015).
By observing Theorem 1, imposing certain regularizations on the weight parameters is useful. For example, reducing the number of nonzero entries s encourages a sparsity regularization like ‖w‖₁. The results also suggest not choosing large-magnitude weights w, in order to keep the factor r small, by adopting a regularization like ‖w‖₂². Theorem 1 further shows that, from an optimization viewpoint, the empirical and population risks have similar properties when the sample size n is sufficiently large. For example, an ϵ/2-stationary point w̃ of Ĵ_n(w) is also an ϵ-stationary point of J(w) with probability 1 − ε if n ≥ c(τω_g/ϵ)² l c_q s log(d/l) with c a constant. Here an ϵ-stationary point of a function F means a point w satisfying ‖∇_w F‖₂ ≤ ϵ. Understanding such properties is useful, since in practice one usually computes an ϵ-stationary point of Ĵ_n(w); these results guarantee that the computed point is at most a 2ϵ-stationary point of J(w) and is thus close to the optimum.
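The uniform gradient convergence in Theorem 1 can be probed numerically by comparing the empirical gradient on n samples against a Monte Carlo surrogate of the population gradient computed on a much larger held-out sample; a PyTorch sketch follows (sample is an assumed data generator, and all sizes are illustrative).

import torch

def risk(Ws, X, Y):
    # squared loss of a deep linear network, samples stored column-wise
    V = X
    for W in Ws:
        V = W @ V
    return 0.5 * ((V - Y) ** 2).sum() / X.shape[1]

def grad_gap(Ws, sample, n_small=1_000, n_large=200_000):
    # Ws: list of weight tensors with requires_grad=True
    Xs, Ys = sample(n_small)       # empirical risk gradient
    Xl, Yl = sample(n_large)       # Monte Carlo proxy for the population gradient
    gs = torch.autograd.grad(risk(Ws, Xs, Ys), Ws)
    gl = torch.autograd.grad(risk(Ws, Xl, Yl), Ws)
    return torch.cat([(a - b).reshape(-1) for a, b in zip(gs, gl)]).norm()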
4.2 UNIFORM CONVERGENCE OF STATIONARY POINTS
We then proceed to analyze the distribution and convergence properties of stationary points of the DNN empirical risk. Here we consider non-degenerate stationary points, which are geometrically isolated and thus unique in local regions. Since degenerate stationary points are not unique in a local region, we cannot expect to establish a one-to-one correspondence (see below) between them in the empirical and population risks.
Definition 1. (Non-degenerate stationary points) (Gromoll & Meyer, 1969) A stationary point w of J(w) is said to be non-degenerate if it satisfies

inf_i |λ_i(∇²J(w))| ≥ ζ,

where λ_i(∇²J(w)) denotes the i-th eigenvalue of the Hessian ∇²J(w) and ζ is a positive constant.
Non-degenerate stationary points include local minima/maxima and non-degenerate saddle points, while degenerate stationary points refer to degenerate saddle points. We then introduce the index of non-degenerate stationary points, which characterizes their geometric properties.
Definition 2. (Index of non-degenerate stationary points) (Dubrovin et al., 2012) The index of a symmetric non-degenerate matrix is the number of its negative eigenvalues, and the index of a non-degenerate stationary point w of a smooth function F is simply the index of its Hessian ∇²F(w).
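Both definitions can be checked numerically at a candidate stationary point: compute the Hessian's eigenvalues, verify min_i |λ_i| ≥ ζ, and count the negative ones. A sketch using PyTorch's Hessian utility on a scalar risk function (the tolerance ζ is an illustrative value):

import torch

def nondegeneracy_and_index(risk_fn, w, zeta=1e-3):
    # risk_fn: maps a 1-D parameter tensor w to a scalar risk value
    H = torch.autograd.functional.hessian(risk_fn, w)   # d x d Hessian at w
    eig = torch.linalg.eigvalsh(H)                      # eigenvalues, ascending
    nondegenerate = bool(eig.abs().min() >= zeta)       # Definition 1
    index = int((eig < 0).sum())                        # Definition 2
    return nondegenerate, index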
Suppose that J(w) has m non-degenerate stationary points, denoted {w^(1), w^(2), …, w^(m)}. We prove the following convergence behavior of these stationary points.
Theorem 2. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. If n ≥ c_h max(l³r²r_x⁴/(c_q s log(d/l) ε²τ⁴ log(1/ε)), s log(d/l)/ζ²), where c_h is a constant, then for each k ∈ {1, …, m} there exists a non-degenerate stationary point w_n^(k) of Ĵ_n(w) which corresponds to the non-degenerate stationary point w^(k) of J(w) with probability at least 1 − ε. In addition, w_n^(k) and w^(k) have the same non-degenerate index and they satisfy

‖w_n^(k) − w^(k)‖₂ ≤ (2c_g τ ω_g/ζ) √(l c_q) √((s log(dn/l) + log(12/ε))/n),   (k = 1, …, m)

with probability at least 1 − ε, where the parameters c_q, ω_g, and c_g are given in Theorem 1.
Theorem 2 guarantees the one-to-one correspondence between the non-degenerate stationary points of the empirical risk Ĵ_n(w) and the population risk J(w). The distances of the corresponding pairs become smaller as n increases. In addition, the corresponding pairs have the same non-degenerate index, which implies that the corresponding stationary points have the same geometric properties, such as whether they are saddle points. Accordingly, we can develop more efficient algorithms, e.g., for escaping saddle points (Ge et al., 2015), since Dauphin et al. (2014) empirically showed that saddle points are usually surrounded by high error plateaus. Also, when n is sufficiently large, the stationary points of Ĵ_n(w) behave like those of the population risk J(w), in the sense that they have exactly matching local minima/maxima and non-degenerate saddle points. Comparing Theorems 1 and 2, we find the sample-size requirement in Theorem 2 more restrictive, since establishing the exact one-to-one correspondence between the non-degenerate stationary points of Ĵ_n(w) and J(w) and bounding their uniform convergence rate to each other is more challenging. From Theorems 1 and 2, we also notice that the uniform convergence rate of non-degenerate stationary points has an extra factor 1/ζ. This is because bounding stationary points requires accessing not only the gradient itself but also the Hessian matrix; see the proof for details.
Kawaguchi (2016) pointed out that degenerate stationary points indeed exist in DNNs. However, since degenerate stationary points are not isolated, e.g., they can form flat regions, it is hard to establish a unique correspondence for them as for non-degenerate ones. Fortunately, by Theorem 1, the gradients of Ĵ_n(w) and J(w) at these points are close. This implies that a degenerate stationary point of J(w) also gives a near-zero gradient of Ĵ_n(w), i.e., it is also a stationary point of Ĵ_n(w).
In the proof, we work with the essential multi-layer architecture of the deep linear network, rather than transforming it into a linear regression model and directly applying existing results (see Loh & Wainwright (2015) and Negahban et al. (2011)). This is because we care more about deep ReLU networks, which cannot be reduced in this way. Our proof technique is better suited to analyzing multi-layer neural networks and paves the way for analyzing deep ReLU networks. Such an analysis also reveals the role of the network parameters (dimension, norm, etc.) of each weight matrix in the results, which may benefit network design. Besides, the obtained results are more consistent with those for deep nonlinear networks (see Sec. 5).
4.3 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK
Based on the above results, we can easily derive the uniform convergence of the empirical risk to the population risk. In this subsection, we first give the uniform convergence rate of the empirical risk for deep linear neural networks in Theorem 3, and then use this result to derive the stability and generalization bounds for DNNs in Corollary 1.
Theorem 3. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. Then there exist two universal constants c_f′ and c_f such that if n ≥ c_f′ max(l³r_x⁴/(d_l s log(d/l) ε²τ⁴ log(1/ε)), s log(d/l)/(τ²d_l)), then

sup_{w∈Ω} |Ĵ_n(w) − J(w)| ≤ ε_f := c_f τ max(√(d_l) τ r^{2l}, r^l) √((s log(dn/l) + log(8/ε))/n)    (2)

holds with probability at least 1 − ε. Here l is the number of layers in the neural network, n is the sample size, and d_l is the dimension of the final layer.
From Theorem 3, when n → +∞, we have |Ĵ_n(w) − J(w)| → 0. According to the definition of uniform convergence (Vapnik & Vapnik, 1998; Shalev-Shwartz et al., 2010), under the distribution D, the empirical risk of a deep linear neural network converges to its population risk uniformly at the rate of O(1/√n). Theorem 3 also explains the roles of the depth l, the network size d, and the number of nonzero weight parameters s in a DNN model.
Based on VC-dimension techniques, Bartlett & Maass (2003) proved that for a feedforward neural network with polynomial activation functions and one-dimensional output, with probability at least 1 − ε the convergence bound satisfies |Ĵ_n(w) − inf_f J(w)| ≤ O(√((γ log²(n) + log(1/ε))/n)). Here γ is the shattering parameter and can be as large as the VC-dimension of the network model, i.e., of the order O(ld log(d) + l²d) (Bartlett & Maass, 2003). Note that Bartlett & Maass (2003) did not reveal the role of the weight magnitude in their results. In contrast, our uniform convergence bound is sup_{w∈Ω} |Ĵ_n(w) − J(w)| ≤ O(√((s log(dn/l) + log(1/ε))/n)), so our convergence rate is tighter.
Neyshabur et al. (2015) proved that the Rademacher complexity of a fully-connected neural network model with ReLU activation functions and one-dimensional output is O(r^l/√n) (see Corollary 2 in (Neyshabur et al., 2015)). Then, by a Rademacher-complexity-based argument (Shalev-Shwartz & Ben-David, 2014a), we have |sup_f (Ĵ_n(w) − J(w))| ≤ O((r^l + √(log(1/ε)))/√n) with probability at least 1 − ε, where the loss function is the training error g = 1(v^(l) ≠ y) in which v^(l) is the output of the l-th layer of the network model f(w; x, y). The convergence rate in our theorem is O(r^{2l} √((s log(d/l) + log(1/ε))/n)) and has the same convergence speed O(1/√n) w.r.t. the sample number n. Note that our rate involves r^{2l} since we use the squared loss instead of the training error in (Neyshabur et al., 2015). The extra parameters s and d appear because we consider the parameter space rather than the function hypothesis f in (Neyshabur et al., 2015), which helps to more transparently understand the roles of the network parameters. Besides, Rademacher complexity cannot be applied to analyzing the convergence properties of the empirical risk gradient and stationary points, as our techniques can.
Based on Theorem 3, we proceed to analyze the stability of the empirical risk and the convergence rate of the generalization error in expectation. Let S = {(x_(1), y_(1)), …, (x_(n), y_(n))} denote the sample set, in which the samples are i.i.d. drawn from D. When the optimal solution w^n of problem (1) is computed by a deterministic algorithm, the generalization error is defined as Ĵ_n(w^n) − J(w^n). But one usually employs randomized algorithms, e.g., stochastic gradient descent (SGD), for computing w^n. In this case, the stability and generalization error in expectation defined in Definition 3 are more applicable.
Definition 3. (Stability and generalization in expectation) (Vapnik & Vapnik, 1998; Shalev-Shwartz et al., 2010; Gonen & Shalev-Shwartz, 2017) Assume a randomized algorithm A is employed, ((x′_(1), y′_(1)), …, (x′_(n), y′_(n))) ∼ D, and w^n = argmin_w Ĵ_n(w) is the empirical risk minimizer (ERM). For every j ∈ [n], suppose w^j_* = argmin_w (1/(n−1)) Σ_{i≠j} f_i(w; x_(i), y_(i)). We say that the ERM is on-average stable with stability rate ε_k under distribution D if

| E_{S∼D, A, (x′_(j), y′_(j))∼D} (1/n) Σ_{j=1}^n [ f_j(w^j_*; x′_(j), y′_(j)) − f_j(w^n; x′_(j), y′_(j)) ] | ≤ ε_k.

The ERM is said to have generalization error with convergence rate ε_{k′} under distribution D if

| E_{S∼D, A} (J(w^n) − Ĵ_n(w^n)) | ≤ ε_{k′}.
Stability measures the sensitivity of the empirical risk to the input, and the generalization error measures the effectiveness of the ERM on new data. Generalization error in expectation is especially important for applying DNNs, considering their internal randomness, e.g., from SGD optimization. We now present the results on the stability and generalization performance of deep linear neural networks.
Corollary 1. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. Then with probability at least 1 − ε, both the stability rate and the generalization error rate of the ERM of a deep linear neural network are at most εf:
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big(f_j^*-f_j\big)\Big| \le \epsilon_f \quad\text{and}\quad \Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat{J}_n(w^n)\big)\Big| \le \epsilon_f,$$
where f_j^* and f_j respectively denote f_j(w^j_*;x′(j),y′(j)) and f_j(w^n;x′(j),y′(j)), and εf is defined in Eqn. (2).
According to Corollary 1, both the stability rate and the convergence rate of the generalization error are O(εf). This result indicates that the deep learning empirical risk is stable and its output is robust to small perturbations of the training data. When n is sufficiently large, a small generalization error of DNNs is guaranteed.
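As a quick empirical illustration of the O(εf) = O(1/√n) rate, the following minimal sketch (our own, not an experiment from the paper: the layer sizes, the bounded-input sampling, the teacher weights, and the use of a large held-out sample as a surrogate for J(w) are all illustrative assumptions) measures the gap |Ĵn(w) − J(w)| at a fixed w for growing n:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [6, 5, 4, 3]                                       # d_0, d_1, d_2, d_l for l = 3 linear layers
Ws = [0.5 * rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
teacher = [0.5 * rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]

def risk(Ws, X, Y):
    """Squared-loss risk: (1/n) sum_i 0.5 * ||v^(l)_i - y_i||^2."""
    V = X
    for W in Ws:
        V = V @ W.T
    return 0.5 * np.mean(np.sum((V - Y) ** 2, axis=1))

def sample(n):
    X = rng.uniform(-1, 1, size=(n, dims[0]))             # bounded inputs, hence sub-Gaussian (Assumption 1)
    Y = X
    for W in teacher:
        Y = Y @ W.T
    return X, Y

J = risk(Ws, *sample(1_000_000))                          # large sample as a stand-in for the population risk J(w)
for n in [100, 1_000, 10_000, 100_000]:
    gap = np.mean([abs(risk(Ws, *sample(n)) - J) for _ in range(20)])
    print(f"n = {n:>6}:  |J_n - J| = {gap:.5f},  sqrt(n) * |J_n - J| = {np.sqrt(n) * gap:.3f}")
```

The √n-scaled gap stays roughly constant across n, which is the O(1/√n) behavior; Theorem 3 asserts the much stronger statement that the gap is small uniformly over all w ∈ Ω.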
5 RESULTS FOR DEEP NONLINEAR NEURAL NETWORKS
In the above section, we analyzed the empirical risk optimization landscape for deep linear neural network models. In this section, we extend our analysis to deep nonlinear neural networks that adopt the sigmoid activation function. Our analysis techniques are also applicable to other third-order differentiable activation functions, e.g., the tanh function, with correspondingly different convergence rates. Here we assume the input data are i.i.d. Gaussian variables.
Assumption 2. The input datum x is a vector of i.i.d. Gaussian variables from N (0, τ2).
Since the sigmoid function maps any input to the range [0, 1], we do not require the input x to have bounded magnitude. Such an assumption is common. For instance, Tian (2017) and Soudry & Hoffer (2017) both assumed that the entries of the input vector follow a Gaussian distribution. We also assume w ∈ Ω as in (Xu & Mannor, 2012), and that each entry of the target output y falls in [0, 1]. As in the analysis of deep linear neural networks, we aim to characterize the empirical risk gradient, the stationary points, and the empirical risk itself for deep nonlinear neural networks.
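To fix notation, here is a minimal sketch (ours; the layer sizes, τ, and weight ranges are illustrative assumptions, not values from the paper) of the sigmoid network and squared loss analyzed in this section:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(Ws, x):
    """v^(j) = sigmoid(W^(j) v^(j-1)) for j = 1..l, with v^(0) = x."""
    v = x
    for W in Ws:
        v = sigmoid(W @ v)
    return v

def loss(Ws, x, y):
    return 0.5 * np.sum((forward(Ws, x) - y) ** 2)

rng = np.random.default_rng(1)
dims, tau = [8, 6, 4, 2], 0.7
Ws = [rng.uniform(-0.5, 0.5, size=(dims[i + 1], dims[i])) for i in range(3)]
x = tau * rng.normal(size=dims[0])          # Assumption 2: i.i.d. N(0, tau^2) entries
y = rng.uniform(0, 1, size=dims[-1])        # target entries in [0, 1]
print(loss(Ws, x, y))
```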
5.1 UNIFORM CONVERGENCE OF GRADIENT AND STATIONARY POINTS
Here we analyze convergence properties of gradients of the empirical risk for deep nonlinear neural networks.
Theorem 4. Assume the input sample x obeys Assumption 2 and the activation functions in the deep neural network are sigmoid functions. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there are two universal constants c_y and c_{y′} such that if n ≥ c_{y′} c_d l³r²/(s log(d)τ²ε² log(1/ε)), where c_d = max_{0≤i≤l} d_i, then with probability at least 1 − ε,
$$\sup_{w\in\Omega}\big\|\nabla\hat{J}_n(w)-\nabla J(w)\big\|_2 \le \epsilon_l \triangleq \tau\sqrt{\frac{512}{729}\,c_y\,l(l+2)(lc_r+1)\,c_d\,c_r}\ \sqrt{\frac{s\log(dn/l)+\log(4/\varepsilon)}{n}},$$
where c_r = max(r²/16, (r²/16)^{l−1}) and s denotes the number of nonzero entries of all weights.
Similar to deep linear neural networks, the layer number l, the width d_i, the number s of nonzero parameter entries, the network size d and the magnitude of the weights are all critical to the convergence rate. Also, since the rate contains a factor max_i d_i, it is better to avoid choosing an extremely wide layer. Interestingly, when analyzing the representation ability of deep learning, Eldan & Shamir (2016) also suggested avoiding extremely wide layers, though their conclusion was derived from a different perspective. By comparing Theorems 1 and 4, one can observe that there is a factor (1/16)^{l−1} in the convergence rate of Theorem 4. This is because the rate depends on the Lipschitz constant, and when bounding it, the sigmoid activation function contributes a factor of 1/16 per layer.
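One natural reading of the per-layer factor, consistent with the remark above, is that it comes from the sigmoid derivative: sup_u σ′(u) = σ′(0) = 1/4, so squared-gradient terms pick up at most (1/4)² = 1/16 per layer. A quick numeric check (ours, not from the paper):

```python
import numpy as np

u = np.linspace(-20, 20, 2_000_001)
s = 1.0 / (1.0 + np.exp(-u))
ds = s * (1 - s)                       # sigma'(u) = sigma(u) * (1 - sigma(u))
print(ds.max())                        # ~0.25, attained at u = 0
print(ds.max() ** 2)                   # ~0.0625 = 1/16, the per-layer factor
```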
Now we analyze the non-degenerate stationary points of the empirical risk for deep nonlinear neural networks. Here we also assume that the population risk has m non-degenerate stationary points, denoted by {w^(1), w^(2), · · · , w^(m)}.
Theorem 5. Assume the input sample x obeys Assumption 2 and the activation functions in the deep neural network are sigmoid functions. If n ≥ c_s max(c_d l³r²/(s log(d)τ²ε² log(1/ε)), s log(d/l)/ζ²), where c_s is a constant, then for each k ∈ {1, · · · , m} there exists a non-degenerate stationary point w_n^(k) of Ĵn(w) which corresponds to the non-degenerate stationary point w^(k) of J(w) with probability at least 1 − ε. Moreover, w_n^(k) and w^(k) have the same non-degenerate index and they obey
$$\big\|w_n^{(k)}-w^{(k)}\big\|_2 \le \frac{2\tau}{\zeta}\sqrt{\frac{512}{729}\,c_y\,l(l+2)(lc_r+1)\,c_d\,c_r}\ \sqrt{\frac{s\log(dn/l)+\log(4/\varepsilon)}{n}}, \qquad (k=1,\cdots,m)$$
with probability at least 1 − ε, where c_y, c_d and c_r are the same parameters as in Theorem 4.
According to Theorem 5, there is a one-to-one correspondence between the non-degenerate stationary points of Ĵn(w) and J(w). Each corresponding pair has the same non-degenerate index, implying exactly matching local minima/maxima and non-degenerate saddle points. When n is sufficiently large, the non-degenerate stationary point w_n^(k) of Ĵn(w) is very close to its corresponding non-degenerate stationary point w^(k) of J(w). As for the degenerate stationary points, Theorem 4 guarantees that the gradients of J(w) and Ĵn(w) at these points are very close to each other.
5.2 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK
Here we first give the uniform convergence analysis of the empirical risk and then analyze its stability and generalization.
Theorem 6. Assume the input sample x obeys Assumption 2 and the activation functions in the deep neural network are sigmoid functions. If n ≥ 18l²r²/(s log(d)τ²ε² log(1/ε)), then
$$\sup_{w\in\Omega}\big|\hat{J}_n(w)-J(w)\big| \le \epsilon_n \triangleq \tau\sqrt{\frac{9}{8}\,c_y\,c_d\,(1+c_r(l-1))}\ \sqrt{\frac{s\log(nd/l)+\log(4/\varepsilon)}{n}} \tag{3}$$
holds with probability at least 1 − ε, where c_y, c_d and c_r are given in Theorem 4.
From Theorem 6, we obtain that under the distribution D, the empirical risk of a deep nonlinear neural network converges at the rate of O(1/√n) (up to a log factor). Theorem 6 also gives results similar to Theorem 3, including the inclination toward regularization penalties on the weights and the suggestion to avoid extremely wide layers. Similar to linear networks, our risk convergence rate is tighter than the rate for networks with polynomial activation functions and one-dimensional output in (Bartlett & Maass, 2003), since ours is of the order O(√((l−1)(s log(dn/l) + log(1/ε))/n)), while the latter is O(√((γ log²(n) + log(1/ε))/n)) where γ is of the order O(ld log(d) + l²d) (Bartlett & Maass, 2003).
We then establish the stability property and the generalization error of the empirical risk for nonlinear neural networks. By Theorem 6, we can obtain the following results.
Corollary 2. Assume the input sample x obeys Assumption 2 and the activation functions in the deep neural network are sigmoid functions. Then with probability at least 1 − ε, we have
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big(f_j^*-f_j\big)\Big| \le \epsilon_n \quad\text{and}\quad \Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat{J}_n(w^n)\big)\Big| \le \epsilon_n,$$
where εn is defined in Eqn. (3). The notations f_j^* and f_j are the same as in Corollary 1.
By Corollary 2, we know that both the stability rate and the convergence rate of the generalization error are O(1/√n). This result accords with Theorems 8 and 9 in (Shalev-Shwartz et al., 2010), which imply that O(1/√n) is the bottleneck of the stability and generalization convergence rates for generic learning algorithms. From this result, we see that if n is sufficiently large, the empirical risk can be expected to be very stable. This also dispels misgivings about the random selection of training samples in practice. Such a result indicates that a deep nonlinear neural network can offer good performance on testing data if it achieves a small training error.
6 PROOF ROADMAP
Here we briefly introduce our proof roadmap. Due to space limitations, all the proofs of Theorems 1–6 and Corollaries 1 and 2, as well as the technical lemmas, are deferred to the supplementary material. The proofs of Theorems 1 and 4 are similar but essentially differ in some techniques for bounding probabilities, due to their different assumptions. For simplicity of explanation, we define four events:
E = {sup_{w∈Ω} ‖∇Ĵn(w) − ∇J(w)‖₂ > t},
E1 = {sup_{w∈Ω} ‖(1/n) Σ_{i=1}^n (∇f(w,x(i)) − ∇f(w_{kw},x(i)))‖₂ > t/3},
E2 = {sup_{w^i_{kw}∈N_i, i∈[l]} ‖(1/n) Σ_{i=1}^n ∇f(w_{kw},x(i)) − E∇f(w_{kw},x)‖₂ > t/3}, and
E3 = {sup_{w∈Ω} ‖E∇f(w_{kw},x) − E∇f(w,x)‖₂ > t/3},
where w_{kw} = [w¹_{kw}; w²_{kw}; · · · ; w^l_{kw}] is constructed by selecting w^i_{kw} ∈ R^{d_i d_{i−1}} from the (ε d_i d_{i−1}/d)-net N_i such that ‖w − w_{kw}‖₂ ≤ ε. Note that in Theorems 1 and 4, t is set to εg and εl, respectively. Then we have P(E) ≤ P(E1) + P(E2) + P(E3), so we only need to bound P(E1), P(E2) and P(E3) separately. For P(E1) and P(E3), we use the gradient Lipschitz constant and the properties of the ε-net to prove P(E1) ≤ ε/2 and P(E3) = 0, while bounding P(E2) needs more effort. Based on the assumptions, we prove that P(E2) has a sub-exponential tail associated with the sample number n and the network parameters, and that it satisfies P(E2) ≤ ε/2 under proper conditions. Finally, combining the bounds of the three terms, we obtain the desired results.
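A rough sanity check on where the s log(d/l)-type terms originate (our back-of-envelope reconstruction, not a passage from the paper): a standard volumetric bound gives |N_i| ≤ (3r/ε_net)^{d_i d_{i−1}} for an ε_net-net of the radius-r ball in R^{d_i d_{i−1}}, so a union bound over all layers' nets costs
$$\log\prod_{i=1}^{l}|\mathcal{N}_i| \;\le\; \sum_{i=1}^{l} d_i d_{i-1}\log\frac{3r}{\epsilon_{\mathrm{net}}} \;=\; d\log\frac{3r}{\epsilon_{\mathrm{net}}},$$
and restricting to s-sparse weights replaces the dimension d by (roughly) s plus a log-combinatorial term for the choice of support. This is why the bounds of Theorems 1 and 4 carry factors linear in d or s inside the square root; the paper's proofs track the exact constants and the d/l bookkeeping.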
To prove Theorems 2 and 5, we first prove the uniform convergence of the empirical Hessian to the population Hessian. Then, we define the set D = {w ∈ Ω : ‖∇J(w)‖₂ < ε and inf_i |λ_i(∇²J(w))| ≥ ζ}. In this way, D can be decomposed into countably many components, each containing either exactly one or zero non-degenerate stationary points. For each component, the uniform convergence of the gradient and results from differential topology guarantee that if J(w) has no stationary points, then Ĵn(w) also has no stationary points, and vice versa. Similarly, for each component, the uniform convergence of the Hessian and results from differential topology guarantee that if J(w) has a unique non-degenerate stationary point, then Ĵn(w) also has a unique non-degenerate stationary point with the same index. After establishing the exact correspondence between the non-degenerate stationary points of the empirical and population risks, we use the uniform convergence of the gradient and Hessian to bound the distance between the corresponding pairs.
We adopt a similar strategy to prove Theorems 3 and 6. Specifically, we divide the event sup_{w∈Ω} |Ĵn(w) − J(w)| > t into E1, E2 and E3, which have the same forms as their counterparts in the proof of Theorem 1 with the gradient replaced by the loss function. To prove P(E1) ≤ ε/2 and P(E3) = 0, we use the Lipschitz constant of the loss function and the ε-net properties. It remains to bound P(E2). We prove that it also has a sub-exponential tail associated with the sample number n and the network parameters, and that it obeys P(E2) ≤ ε/2 under proper conditions. Then we utilize the uniform convergence of Ĵn(w) to prove the stability and generalization bounds of Ĵn(w) (i.e., Corollaries 1 and 2).
7 CONCLUSION
In this work, we provided theoretical analysis of the landscape of the empirical risk of deep linear/nonlinear neural networks trained with (stochastic) gradient descent, including the convergence properties of the gradient and stationary points of the empirical risk as well as the uniform convergence, stability, and generalization of the empirical risk itself. To our best knowledge, most of these results are new to the deep learning community. The results reveal that the depth l, the number s of nonzero weight entries, the network size d and the network width are critical to the convergence rates. We also prove that the magnitude of the weight parameters is important to the convergence rate; indeed, weights of small magnitude are suggested. All the results are consistent with the network architectures widely used in practice.
ACKNOWLEDGMENT
This work is partially supported by National University of Singapore startup grant R-263-000-C08133, Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112, NUS IDS R-263-000-C67-646 and ECRA R-263-000-C87-133.
REFERENCES
A. Rinaldo. Lecture notes of Advanced Statistical Theory I, CMU. http://www.stat.cmu.edu/~arinaldo/36755/F16/Scribed_Lectures/LEC0914.pdf, 2016.
B. Bakshi and G. Stephanopoulos. Wave-net: A multiresolution, hierarchical neural network with localized learning. AIChE Journal, 39(1):57–81, 1993.
P. Bartlett and W. Maass. Vapnik–Chervonenkis dimension of neural nets. The handbook of brain theory and neural networks, pp. 1188–1192, 2003.
E. Baum. On the capabilities of multilayer perceptrons. Journal of complexity, 4(3):193–215, 1988.
A. Choromanska, M. Henaff, M. Mathieu, G. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015a.
A. Choromanska, Y. LeCun, and G. Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In COLT, pp. 1756–1760, 2015b.
R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, pp. 160–167, 2008.
Y. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pp. 2933–2941, 2014.
B. Dubrovin, A. Fomenko, and S. Novikov. Modern geometry—methods and applications: Part II: The geometry and topology of manifolds, volume 104. Springer Science & Business Media, 2012.
R. Eldan and O. Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907–940, 2016.
Y. Fyodorov and I. Williams. Replica symmetry breaking condition exposed by random matrix calculation of landscape complexity. Journal of Statistical Physics, 129(5-6):1081–1116, 2007.
R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In COLT, pp. 797–842, 2015.
A. Gonen and S. Shalev-Shwartz. Fast rates for empirical risk minimization of strict saddle problems. COLT, 2017.
A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645–6649, 2013.
D. Gromoll and W. Meyer. On differentiable functions with isolated critical points. Topology, 8(4):361–369, 1969.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7): 1527–1554, 2006.
G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
K. Kawaguchi. Deep learning without poor local minima. In NIPS, pp. 1097–1105, 2016.
P. Loh and M. J. Wainwright. Regularized m-estimators with nonconvexity: Statistical and algorithmic theory for local optima. JMLR, 16(Mar):559–616, 2015.
S. Mei, Y. Bai, and A. Montanari. The landscape of empirical risk for non-convex losses. Annals of Statistics, 2017.
S. Negahban, B. Yu, M. Wainwright, and P. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In NIPS, pp. 1348–1356, 2009.
S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In NIPS, 2011.
B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In COLT, pp. 1376–1401, 2015.
Q. Nguyen and M. Hein. The loss surface of deep and wide neural networks. In ICML, 2017.
P. Rigollet. Statistics S997 lecture notes, MIT Mathematics. MIT OpenCourseWare, pp. 23–24, 2015.
M. Rudelson and R. Vershynin. Hanson-wright inequality and sub-gaussian concentration. Electronic Communications in Probability, 18(82):1–9, 2013.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge Univ. Press, Cambridge, pp. 375–382, 2014a.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014b.
S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. JMLR, 11:2635–2670, 2010.
S. Shalev-Shwartz, O. Shamir, and S. Shammah. Failures of deep learning. ICML, 2017.
D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.
D. Soudry and E. Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015.
Y. Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. ICML, 2017.
V. N. Vapnik and V. Vapnik. Statistical learning theory, volume 1. Wiley New York, 1998.
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices, compressed sensing. Cambridge Univ. Press, Cambridge, pp. 210–268, 2012.
H. Xu and S. Mannor. Robustness and generalization. Machine Learning, 86(3):391–423, 2012.
Y. Zhang, J. Lee, and M. Jordan. ℓ1-regularized neural networks are improperly learnable in polynomial time. In ICML, pp. 993–1001, 2016.
Y. Zhang, P. Liang, and M. Wainwright. Convexified convolutional neural networks. ICML, 2017.
SUPPLEMENTARY MATERIAL OF EMPIRICAL RISK LANDSCAPE ANALYSIS FOR UNDERSTANDING DEEP NEURAL NETWORKS
A STRUCTURE OF THIS DOCUMENT
This document gives some other necessary notations and preliminaries for our analysis in Sec. B. Then we prove Theorems 1∼ 3 and Corollary 1 for deep linear neural networks in Sec. C. Then we present the proofs of Theorems 4 ∼ 6 and Corollary 2 for deep nonlinear neural networks in Sec. D. In both Sec. C and D, we first present the technical lemmas for proving our final results and subsequently present the proofs of these lemmas. Then we utilize these technical lemmas to prove our desired results. Finally, we give the proofs of other auxiliary lemmas.
B NOTATIONS AND PRELIMINARY TOOLS
Beyond the notations introduced in the manuscript, we need some other notations used in this document. Then we introduce several lemmas that will be used later.
B.1 NOTATIONS
Throughout this document, we use 〈·, ·〉 to denote the inner product. A ⊗ C denotes the Kronecker product between A and C; note that A and C in A ⊗ C can be matrices or vectors. For a matrix A ∈ R^{n₁×n₂}, we use ‖A‖_F = √(Σ_{i,j} A_{ij}²) to denote its Frobenius norm, where A_{ij} is the (i, j)-th entry of A. We use ‖A‖_op = max_i |λ_i(A)| to denote the operator norm of a matrix A ∈ R^{n₁×n₁}, where λ_i(A) denotes the i-th eigenvalue of A. For a 3-way tensor A ∈ R^{n₁×n₂×n₃}, its operator norm is computed as
$$\|\mathcal{A}\|_{op} = \sup_{\|\lambda\|_2\le 1}\big\langle \lambda^{\otimes 3}, \mathcal{A}\big\rangle = \sup_{\|\lambda\|_2\le 1}\sum_{i,j,k}\mathcal{A}_{ijk}\,\lambda_i\lambda_j\lambda_k,$$
where A_{ijk} denotes the (i, j, k)-th entry of A. Also, we denote the vectorization of W^(j) (the weight matrix of the j-th layer) as w^(j) = vec(W^(j)) ∈ R^{d_j d_{j−1}}.
We denote Ik as the identity matrix of size k × k.
For notational simplicity, we further define e ≜ v^(l) − y as the output error vector. Then the squared loss is f(w;x,y) = ½‖e‖₂², where w = (w^(1); · · · ; w^(l)) ∈ R^d contains all the weight parameters.
B.2 TECHNICAL LEMMAS
We first introduce Lemmas 1 and 2, which are used for bounding the ℓ2-norm of a vector and the operator norm of a matrix, respectively. Then we introduce Lemmas 3 and 4, which discuss the topology of functions. In Lemma 5, we give the relationship between the stability and generalization of the empirical risk.
Lemma 1. (Vershynin, 2012) For any vector x ∈ R^d, its ℓ2-norm can be bounded as
$$\|x\|_2 \le \frac{1}{1-\epsilon}\,\sup_{\lambda\in\lambda_\epsilon}\langle\lambda, x\rangle,$$
where λ_ε = {λ₁, . . . , λ_{k_w}} is an ε-covering net of B^d(1).
Lemma 2. (Vershynin, 2012) For any symmetric matrix X ∈ R^{d×d}, its operator norm can be bounded as
$$\|X\|_{op} \le \frac{1}{1-2\epsilon}\,\sup_{\lambda\in\lambda_\epsilon}|\langle\lambda, X\lambda\rangle|,$$
where λ_ε = {λ₁, . . . , λ_{k_w}} is an ε-covering net of B^d(1).
Lemma 3. (Mei et al., 2017) Let D ⊆ R^d be a compact set with a C² boundary ∂D, and let f, g : A → R be C² functions defined on an open set A with D ⊆ A. Assume that for all w ∈ ∂D and all t ∈ [0, 1], t∇f(w) + (1 − t)∇g(w) ≠ 0. Finally, assume that the Hessian ∇²f(w) is non-degenerate and has index equal to r for all w ∈ D. Then the following properties hold:
(1) If g has no critical point in D, then f has no critical point in D.
(2) If g has a unique critical point w in D that is non-degenerate with index r, then f also has a unique critical point w′ in D, with index equal to r.
Lemma 4. (Mei et al., 2017) Suppose that F(w) : Θ → R is a C² function, where w ∈ Θ. Assume that {w^(1), . . . , w^(m)} are its non-degenerate critical points and let D = {w ∈ Θ : ‖∇F(w)‖₂ < ε and inf_i |λ_i(∇²F(w))| ≥ ζ}. Then D can be decomposed into (at most) countably many components, with each component containing either exactly one critical point or no critical point. Concretely, there exist disjoint open sets {D_k}_{k∈N}, with D_k possibly empty for k ≥ m + 1, such that
$$\mathcal{D} = \cup_{k=1}^{\infty}\mathcal{D}_k.$$
Furthermore, w^(k) ∈ D_k for 1 ≤ k ≤ m, and each D_k with k ≥ m + 1 contains no stationary points.
Lemma 5. (Shalev-Shwartz & Ben-David, 2014b; Gonen & Shalev-Shwartz, 2017) Assume that D is a sample distribution and a randomized algorithm A is employed for optimization. Suppose that ((x′(1),y′(1)), · · · , (x′(n),y′(n))) ∼ D and w^n = argmin_w Ĵn(w). For every j ∈ {1, · · · , n}, suppose w^j_* = argmin_w (1/(n−1)) Σ_{i≠j} f_i(w;x(i),y(i)). For an arbitrary distribution D, we have
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big(f_j^*-f_j\big)\Big| = \Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat{J}_n(w^n)\big)\Big|,$$
where f_j^* and f_j respectively denote f_j(w^j_*;x′(j),y′(j)) and f_j(w^n;x′(j),y′(j)).
C PROOFS FOR DEEP LINEAR NEURAL NETWORKS
In this section, we first present the technical lemmas in Sec. C.1 and then we give the proofs of these lemmas in Sec. C.2. Next, we utilize these lemmas to prove the results in Theorems 1∼ 3 and Corollary 1 in Sec. C.3. Finally, we give the proofs of other lemmas in Sec. C.4.
C.1 TECHNICAL LEMMAS
Here we present the technical lemmas for proving our desired results. For brevity, we define B_{s:t} as follows:
$$B_{s:t} \triangleq W^{(s)}W^{(s-1)}\cdots W^{(t)} \in \mathbb{R}^{d_s\times d_{t-1}}\ \ (s \ge t); \qquad B_{s:t} \triangleq I\ \ (s < t). \tag{4}$$
Lemma 6. Assume that the activation functions in the deep neural network f(w,x) are linear. Then the gradient of f(w,x) with respect to w^(j) can be written as
$$\nabla_{w^{(j)}} f(w,x) = \big((B_{j-1:1}x)\otimes B_{l:j+1}^T\big)\,e, \qquad (j = 1,\cdots,l),$$
where ⊗ denotes the Kronecker product. The Hessian matrix then has the block form
$$\nabla^2 f(w,x) = \begin{pmatrix} Q_{11} & \cdots & Q_{1l}\\ \vdots & \ddots & \vdots\\ Q_{l1} & \cdots & Q_{ll} \end{pmatrix}, \qquad Q_{st} \triangleq \nabla_{w^{(s)}}\big(\nabla_{w^{(t)}} f(w,x)\big),$$
where
$$Q_{st} = \begin{cases} \big(B_{t-1:s+1}^T\big)\otimes\big(B_{s-1:1}x e^T B_{l:t+1}^T\big) + \big(B_{s-1:1}x x^T B_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^T B_{l:t+1}\big), & s < t,\\[2pt] \big(B_{s-1:1}x x^T B_{s-1:1}^T\big)\otimes\big(B_{l:s+1}^T B_{l:s+1}\big), & s = t,\\[2pt] \big(B_{l:s+1}^T e x^T B_{t-1:1}^T\big)\otimes B_{s-1:t+1} + \big(B_{s-1:1}x x^T B_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^T B_{l:t+1}\big), & s > t. \end{cases}$$
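As a sanity check on Lemma 6 (our own, not part of the paper's proof): the stated gradient is the column-major vectorization of the matrix form ∇_{W^(j)} f = B_{l:j+1}^T e (B_{j−1:1}x)^T, since ((B_{j−1:1}x) ⊗ B_{l:j+1}^T) e = vec(B_{l:j+1}^T e (B_{j−1:1}x)^T). A finite-difference check of this identity on a random tiny network (arbitrary dimensions and seed):

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [4, 5, 3, 2]                              # d_0, ..., d_l with l = 3
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
x, y = rng.normal(size=dims[0]), rng.normal(size=dims[-1])

def loss(Ws):
    v = x
    for W in Ws:
        v = W @ v
    return 0.5 * np.sum((v - y) ** 2)

def B(s, t):
    """B_{s:t} = W^(s) ... W^(t), layers 1-indexed (identity when s < t)."""
    M = np.eye(dims[t - 1])
    for W in Ws[t - 1:s]:
        M = W @ M
    return M

j = 2                                            # check the formula for the j-th layer
e = B(3, 1) @ x - y                              # e = v^(l) - y
analytic = np.outer(B(3, j + 1).T @ e, B(j - 1, 1) @ x)   # B_{l:j+1}^T e (B_{j-1:1} x)^T

h, num = 1e-6, np.zeros_like(Ws[j - 1])
for idx in np.ndindex(*num.shape):
    Wp = [W.copy() for W in Ws]; Wp[j - 1][idx] += h
    Wm = [W.copy() for W in Ws]; Wm[j - 1][idx] -= h
    num[idx] = (loss(Wp) - loss(Wm)) / (2 * h)

print(np.max(np.abs(analytic - num)))            # ~1e-9: the analytic formula matches
```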
Lemma 7. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any t > 0, the objective f(w,x) obeys
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(f(w,x_{(i)})-\mathbb{E} f(w,x_{(i)})\big) > t\Big) \le 2\exp\Big(-c_{f'}\,n\min\Big(\frac{t^2}{\omega_f^2\max(d_l\omega_f^2\tau^4,\,\tau^2)},\ \frac{t}{\omega_f^2\tau^2}\Big)\Big),$$
where c_{f′} is a positive constant and ω_f = r^l.
Lemma 8. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any t > 0 and an arbitrary unit vector λ ∈ S^{d−1}, the gradient ∇f(w,x) obeys
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big\langle\lambda,\ \nabla_w f(w,x_{(i)})-\mathbb{E}\nabla_w f(w,x_{(i)})\big\rangle > t\Big) \le 3\exp\Big(-c_{g'}\,n\min\Big(\frac{t^2}{l\max(\omega_g\tau^2,\,\omega_g\tau^4,\,\omega_{g'}\tau^2)},\ \frac{t}{\sqrt{l}\,\omega_g\max(\tau,\tau^2)}\Big)\Big),$$
where c_{g′} is a constant, ω_g = c_q r^{2(2l−1)} and ω_{g′} = c_q r^{2(l−1)}, in which c_q = √(max_{0≤i≤l} d_i).
Lemma 9. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any t > 0 and an arbitrary unit vector λ ∈ S^{d−1}, the Hessian ∇²f(w,x) obeys
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big\langle\lambda,\ \big(\nabla^2_w f(w,x_{(i)})-\mathbb{E}\nabla^2_w f(w,x_{(i)})\big)\lambda\big\rangle > t\Big) \le 5\exp\Big(-c_{h'}\,n\min\Big(\frac{t^2}{\tau^2 l^2\max(\omega_g,\,\omega_g\tau^2,\,\omega_h)},\ \frac{t}{\sqrt{\omega_g}\,l\max(\tau,\tau^2)}\Big)\Big),$$
where ω_g = r^{4(l−1)} and ω_h = r^{2(l−2)}.
Lemma 10. Suppose the activation functions in the deep neural network are linear. Then for any w ∈ B^d(r) and x ∈ B^{d₀}(r_x), we have
$$\|\nabla_w f(w,x)\|_2 \le \sqrt{\alpha_g}, \qquad \text{where } \alpha_g = c_t\, l\, r_x^4\, r^{4l-2}$$
and c_t is a constant. Further, for any w ∈ B^d(r) and x ∈ B^{d₀}(r_x), we also have
$$\|\nabla^2 f(w,x)\|_{op} \le \|\nabla^2 f(w,x)\|_F \le l\sqrt{\alpha_l}, \qquad \text{where } \alpha_l \triangleq c_{t'}\, r_x^4\, r^{4l-2}$$
and c_{t′} is a constant. Under the same conditions, we can also bound the operator norm of ∇³f(w,x): there exists a universal constant α_p such that
$$\|\nabla^3 f(w,x)\|_{op} \le \alpha_p.$$
Lemma 11. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then the sample Hessian converges uniformly to the population Hessian in operator norm. Specifically, there exist two universal constants c_{h1} and c_{h2} such that if n ≥ c_{h2} max(α_p²r²/(τ²l²ω_h²ε²s log(d/l)), s log(d/l)/(lτ²)), then
$$\sup_{w\in\Omega}\big\|\nabla^2\hat{J}_n(w)-\nabla^2 J(w)\big\|_{op} \le c_{h1}\,\tau\, l\,\omega_h\sqrt{\frac{d\log(nl)+\log(20/\varepsilon)}{n}}$$
holds with probability at least 1 − ε, where ω_h = max(τr^{2(l−1)}, r^{2(l−2)}, r^{l−2}).
C.2 PROOFS OF TECHNICAL LEMMAS
To prove the above lemmas, we first introduce some useful results.
Lemma 12. (Rudelson & Vershynin, 2013) Assume that x = (x₁; x₂; · · · ; x_k) ∈ R^k is a random vector whose components x_i are independent, zero-mean, τ_i²-sub-Gaussian variables with max_i τ_i² ≤ τ². Let A be a k × k matrix. Then we have
$$\mathbb{E}\exp\Big(\lambda\Big(\sum_{i,j:i\ne j} A_{ij}x_ix_j - \mathbb{E}\sum_{i,j:i\ne j} A_{ij}x_ix_j\Big)\Big) \le \exp\big(2\tau^2\lambda^2\|A\|_F^2\big), \qquad |\lambda| \le \frac{1}{2\tau\|A\|_2}.$$
Lemma 13. Assume that x = (x₁; x₂; · · · ; x_k) ∈ R^k is a random vector whose components x_i are independent, zero-mean, τ_i²-sub-Gaussian variables with max_i τ_i² ≤ τ². Let a be a k-dimensional vector. Then we have
$$\mathbb{E}\exp\Big(\lambda\Big(\sum_{i=1}^k a_i x_i^2 - \mathbb{E}\sum_{i=1}^k a_i x_i^2\Big)\Big) \le \exp\Big(128\lambda^2\tau^4\sum_{i=1}^k a_i^2\Big), \qquad |\lambda| \le \frac{1}{\tau^2\max_i a_i}.$$
Lemma 14. For B_{s:t} defined in Eqn. (4), we have the following properties:
$$\|B_{s:t}\|_{op} \le \|B_{s:t}\|_F \le \omega_r \quad\text{and}\quad \|B_{l:1}\|_{op} \le \|B_{l:1}\|_F \le \omega_f,$$
where ω_r = r^{s−t+1} ≤ max(r, r^l) and ω_f = r^l.
Lemma 13 is useful for bounding probability. The two inequalities in Lemma 14 can be obtained by using ‖w(j)‖2 ≤ r (∀j = 1, · · · , l). We defer the proofs of Lemmas 13 and 14 to Sec. C.4.2.
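Lemma 14 is just submultiplicativity of the Frobenius norm, ‖AB‖_F ≤ ‖A‖_F ‖B‖_F, applied l − 1 times with ‖W^(j)‖_F = ‖w^(j)‖₂ ≤ r. A direct numeric check (ours; the dimensions and r are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
r, dims = 0.9, [5, 4, 6, 3]
Ws = []
for i in range(3):
    W = rng.normal(size=(dims[i + 1], dims[i]))
    Ws.append(r * W / np.linalg.norm(W))        # scale so ||W^(j)||_F = r

B = Ws[2] @ Ws[1] @ Ws[0]                        # B_{3:1} = W^(3) W^(2) W^(1)
print(np.linalg.norm(B, 2), np.linalg.norm(B), r ** 3)
# operator norm <= Frobenius norm <= r^l, as Lemma 14 states
```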
C.2.1 PROOF OF LEMMA 6
Proof. When the activation functions are linear, we can directly compute the gradient of f(w,x) with respect to w^(j):
$$\nabla_{w^{(j)}} f(w,x) = \big((B_{j-1:1}x)\otimes B_{l:j+1}^T\big)e, \qquad (j=1,\cdots,l),$$
where ⊗ denotes the Kronecker product. Now we consider the Hessian. For brevity, let Q_s = (B_{s−1:1}x) ⊗ B_{l:s+1}^T. Then we can compute ∇²_{w^(s)} f(w,x) as follows:
$$\nabla^2_{w^{(s)}} f(w,x) = \frac{\partial\,\mathrm{vec}(Q_s e)}{\partial w_{(s)}^T} = \frac{\partial\,\mathrm{vec}\big(Q_s B_{l:s+1}W^{(s)}B_{s-1:1}x\big)}{\partial w_{(s)}^T} = \frac{\partial\big((B_{s-1:1}x)^T\otimes(Q_s B_{l:s+1})\big)\mathrm{vec}\big(W^{(s)}\big)}{\partial w_{(s)}^T}$$
$$= (B_{s-1:1}x)^T\otimes\big(\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)B_{l:s+1}\big) \overset{\text{(i)}}{=} (B_{s-1:1}x)^T\otimes\big((B_{s-1:1}x)\otimes\big(B_{l:s+1}^T B_{l:s+1}\big)\big)$$
$$\overset{\text{(ii)}}{=}\big((B_{s-1:1}x)^T\otimes(B_{s-1:1}x)\big)\otimes\big(B_{l:s+1}^T B_{l:s+1}\big) \overset{\text{(iii)}}{=}\big((B_{s-1:1}x)(B_{s-1:1}x)^T\big)\otimes\big(B_{l:s+1}^T B_{l:s+1}\big),$$
where (i) holds since B_{s−1:1}x is a vector and for any vector x we have (x ⊗ A)B = x ⊗ (AB); (ii) holds because for any three matrices Z₁, Z₂, Z₃ of proper sizes, (Z₁ ⊗ Z₂) ⊗ Z₃ = Z₁ ⊗ (Z₂ ⊗ Z₃); and (iii) holds because for any two vectors z₁, z₂ of proper sizes, z₁z₂^T = z₁ ⊗ z₂^T = z₂^T ⊗ z₁. Then, we consider the case s > t:
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}} f(w,x)\big) = \frac{\partial\,\mathrm{vec}(Q_s e)}{\partial w_{(t)}^T} = \frac{\partial\,\mathrm{vec}\big(Q_s B_{l:t+1}W^{(t)}B_{t-1:1}x\big)}{\partial w_{(t)}^T} + \frac{\partial\,\mathrm{vec}\big(\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)e\big)}{\partial w_{(t)}^T}.$$
Note that in the first term we treat Q_s as a constant matrix not depending on W^(t), and similarly in the second term we treat e as a constant vector. Since
$$\frac{\partial\,\mathrm{vec}\big(Q_s B_{l:t+1}W^{(t)}B_{t-1:1}x\big)}{\partial w_{(t)}^T} = \big(B_{s-1:1}xx^T B_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^T B_{l:t+1}\big),$$
we only need to consider
$$\frac{\partial\,\mathrm{vec}\big(\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)e\big)}{\partial w_{(t)}^T} = \frac{\partial\,\mathrm{vec}\big((B_{s-1:1}x)\big(B_{l:s+1}^T e\big)^T\big)}{\partial w_{(t)}^T} = \frac{\partial\,\mathrm{vec}\big(B_{s-1:t+1}W^{(t)}\big(B_{t-1:1}x e^T B_{l:s+1}\big)\big)}{\partial w_{(t)}^T}$$
$$= \frac{\partial\big(\big(B_{t-1:1}x e^T B_{l:s+1}\big)^T\otimes B_{s-1:t+1}\big)\mathrm{vec}\big(W^{(t)}\big)}{\partial w_{(t)}^T} = \big(B_{t-1:1}x e^T B_{l:s+1}\big)^T\otimes B_{s-1:t+1}.$$
Therefore, for s > t, combining the above two terms gives
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}} f(w,x)\big) = \big(B_{l:s+1}^T e x^T B_{t-1:1}^T\big)\otimes B_{s-1:t+1} + \big(B_{s-1:1}xx^T B_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^T B_{l:t+1}\big).$$
By a similar method, we can compute the Hessian blocks for the case s < t:
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}} f(w,x)\big) = \big(B_{t-1:s+1}^T\big)\otimes\big(B_{s-1:1}x e^T B_{l:t+1}^T\big) + \big(B_{s-1:1}xx^T B_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^T B_{l:t+1}\big).$$
The proof is completed.
C.2.2 PROOF OF LEMMA 7
Proof. We first prove that v^(l), which is defined in Eqn. (5), is sub-Gaussian:
$$v^{(l)} = W^{(l)}\cdots W^{(1)}x = B_{l:1}x. \tag{5}$$
By the convexity in λ of exp(λt) and Lemma 14, we can obtain
$$\mathbb{E}\exp\big(\big\langle\lambda,\,v^{(l)}-\mathbb{E}v^{(l)}\big\rangle\big) = \mathbb{E}\exp\big(\langle\lambda,\,B_{l:1}x-\mathbb{E}B_{l:1}x\rangle\big) \le \mathbb{E}\exp\big(\langle B_{l:1}^T\lambda,\,x\rangle\big) \le \exp\Big(\frac{\|B_{l:1}^T\lambda\|_2^2\tau^2}{2}\Big) \overset{\text{(i)}}{\le} \exp\Big(\frac{\omega_f^2\tau^2\|\lambda\|_2^2}{2}\Big), \tag{6}$$
where (i) uses the conclusion ‖B_{l:1}‖_op ≤ ‖B_{l:1}‖_F ≤ ω_f from Lemma 14. This means that v^(l) is centered and ω_f²τ²-sub-Gaussian. Accordingly, the k-th entry of v^(l) is z_kτ²-sub-Gaussian, where z_k is a universal positive constant and max_k z_k ≤ ω_f². Let v^(l)_{(i)} denote the output for the i-th sample x_{(i)}. By Lemma 13, we have for s > 0:
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(\|v^{(l)}_{(i)}\|_2^2-\mathbb{E}\|v^{(l)}_{(i)}\|_2^2\big)>\frac{t}{2}\Big) = \mathbb{P}\Big(s\sum_{i=1}^n\big(\|v^{(l)}_{(i)}\|_2^2-\mathbb{E}\|v^{(l)}_{(i)}\|_2^2\big)>\frac{nst}{2}\Big)$$
$$\overset{\text{(i)}}{\le}\exp\Big(-\frac{snt}{2}\Big)\,\mathbb{E}\exp\Big(s\sum_{i=1}^n\big(\|v^{(l)}_{(i)}\|_2^2-\mathbb{E}\|v^{(l)}_{(i)}\|_2^2\big)\Big) \overset{\text{(ii)}}{=}\exp\Big(-\frac{snt}{2}\Big)\prod_{i=1}^n\mathbb{E}\exp\Big(s\big(\|v^{(l)}_{(i)}\|_2^2-\mathbb{E}\|v^{(l)}_{(i)}\|_2^2\big)\Big)$$
$$\overset{\text{(iii)}}{\le}\exp\Big(-\frac{snt}{2}\Big)\prod_{i=1}^n\exp\big(128\,d_l\,s^2\omega_f^4\tau^4\big)\ \ \Big(|s|\le\frac{1}{\omega_f^2\tau^2}\Big) \overset{\text{(iv)}}{\le}\exp\Big(-c'n\min\Big(\frac{t^2}{d_l\omega_f^4\tau^4},\ \frac{t}{\omega_f^2\tau^2}\Big)\Big).$$
Here (i) holds by Markov's inequality applied to the exponential (the Chernoff bound), (ii) holds since the x_{(i)} are independent, (iii) is established by applying Lemma 13, and (iv) follows by optimizing over s. Since v^(l) is sub-Gaussian, we have
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(y^Tv^{(l)}_{(i)}-\mathbb{E}y^Tv^{(l)}_{(i)}\big)>\frac{t}{2}\Big) \le \exp\Big(-\frac{nst}{2}\Big)\prod_{i=1}^n\mathbb{E}\exp\Big(s\big(y^Tv^{(l)}_{(i)}-\mathbb{E}y^Tv^{(l)}_{(i)}\big)\Big)$$
$$\overset{\text{(i)}}{\le}\exp\Big(-\frac{nst}{2}\Big)\prod_{i=1}^n\exp\Big(\frac{\omega_f^2\tau^2 s^2\|y\|_2^2}{2}\Big) \overset{\text{(ii)}}{\le}\exp\Big(-\frac{nt^2}{8\,\omega_f^2\tau^2\|y\|_2^2}\Big),$$
where (i) holds because of Eqn. (6) and (ii) follows by optimizing over s.
Since the loss function is f(w,x) = ½‖v^(l) − y‖₂², we have
$$f(w,x)-\mathbb{E}f(w,x) = \frac{1}{2}\big(\|v^{(l)}\|_2^2-\mathbb{E}\|v^{(l)}\|_2^2\big)-\big(y^Tv^{(l)}-\mathbb{E}y^Tv^{(l)}\big).$$
Therefore, we have
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(f(w,x_{(i)})-\mathbb{E}f(w,x_{(i)})\big)>t\Big) \le \mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(\|v^{(l)}_{(i)}\|_2^2-\mathbb{E}\|v^{(l)}_{(i)}\|_2^2\big)\ \cdots$$

1. What is the focus of the paper regarding deep neural networks?
2. What are the puzzling aspects of the results for deep linear neural networks?
3. What is the assumption made by the authors regarding the samples?
4. Can the results for linear regression be compared to the results in Section 4? | Review | Review
This paper studies empirical risk in deep neural networks. Results are provided in Section 4 for linear networks and in Section 5 for nonlinear networks.
Results for deep linear neural networks are puzzling. Whatever the number of layers, a deep linear NN is simply a matrix multiplication, and minimizing the MSE is simply a linear regression. So the results in Section 4 are just results for linear regression, and I do not understand why the number of layers comes into play.
Also, although this is never explicitly mentioned in the paper, I guess the authors make an assumption that the samples (x_i, y_i) are drawn i.i.d. from a given distribution D. In such a case, I am sure results on population risk minimization can be found for linear regression and should be compared to the results in Section 4.
ICLR | Title
Empirical Risk Landscape Analysis for Understanding Deep Neural Networks
Abstract
This work aims to provide a comprehensive landscape analysis of the empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an l-layer linear neural network consisting of d_i neurons in the i-th layer, we prove that the gradient of its empirical risk uniformly converges to that of its population risk at the rate of O(r^{2l}√l √(max_i d_i · s log(d/l)/n)). Here d is the total weight dimension, s is the number of nonzero entries of all the weights and the magnitude of the weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide a convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions. We prove similar results for the convergence behavior of their empirical risk gradients, non-degenerate stationary points and the empirical risk itself. To our best knowledge, this work is the first to theoretically characterize the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding of how the network depth l, the layer width d_i, the network size d, the sparsity in weight and the parameter magnitude r determine the neural network landscape.
1 INTRODUCTION
Deep learning has achieved remarkable success in many fields, such as computer vision (Hinton et al., 2006; Szegedy et al., 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bakshi & Stephanopoulos, 1993), and speech recognition (Hinton et al., 2012; Graves et al., 2013). However, theoretical understanding of the properties of deep learning models still lags behind their practical achievements (Shalev-Shwartz et al., 2017; Kawaguchi, 2016) due to their high non-convexity and internal complexity. In practice, the parameters of deep learning models are learned by minimizing the empirical risk via (stochastic) gradient descent. Therefore, some recent works (Bartlett & Maass, 2003; Neyshabur et al., 2015) analyzed the convergence of the empirical risk to the population risk; such results are, however, still far from fully characterizing the landscape of the empirical risk of deep learning models. Beyond the convergence properties of the empirical risk itself, the convergence and distribution properties of its gradient and stationary points are also essential to landscape analysis. A comprehensive landscape analysis can reveal important information about the optimization behavior and practical performance of deep neural networks, and will be helpful for designing better network architectures. Thus, in this work we aim to provide a comprehensive landscape analysis by looking into the gradients and stationary points of the empirical risk.
Formally, we consider a DNN model f(w;x,y) : Rd0 × Rdl → R parameterized by w ∈ Rd consisting of l layers (l ≥ 2) that is trained by minimizing the commonly used squared loss function
over sample pairs {(x,y)} ⊂ Rd0 × Rdl from an unknown distribution D, where y is the target output for the sample x. Ideally, the model can find its optimal parameter w∗ by minimizing the population risk through (stochastic-)gradient descent by backpropagation:
min w J(w) , E(x,y)∼D f(w;x,y),
where f(w;x,y) = 12‖v (l) − y‖22 is the squared loss associated to the sample (x,y) ∼ D in which v(l) is the output of the l-th layer. In practice, as the sample distribution D is usually unknown and only finite training samples { (x(i),y(i)) }n i=1
i.i.d. drawn from D are provided, the network model is usually trained by minimizing the empirical risk:
min w Ĵn(w) ,
1
n n∑ i=1 f(w;x(i),y(i)). (1)
Understanding the convergence behavior of Ĵn(w) to J(w) is critical to statistical machine learning algorithms. In this work, we aim to go further and characterize the landscape of the empirical risk Ĵn(w) of deep learning models by analyzing the convergence behavior of its gradient and stationary points to their corresponding population counterparts. We provide analysis for both multi-layer linear and nonlinear neural networks. In particular, we obtain following new results.
• We establish the uniform convergence of the empirical gradient ∇_w Ĵn(w) to its population counterpart ∇_w J(w). Specifically, when the sample size n is not less than O(max(l³r²/(ε²s log(d/l)), s log(d/l)/l)), with probability at least 1 − ε the convergence rate is O(r^{2l}√l √(max_i d_i · s log(d/l)/n)), where there are s nonzero entries in the parameter w, the output dimension of the i-th layer is d_i and the magnitude of the weight parameter of each layer is upper bounded by r. This result implies that as long as the training sample size n is sufficiently large, any stationary point of Ĵn(w) is also a stationary point of J(w) and vice versa, although both Ĵn(w) and J(w) are very complex.
• We then prove the exact correspondence of non-degenerate stationary points between Ĵn(w) and J(w). Indeed, the corresponding non-degenerate stationary points also uniformly converge to each other at the same convergence rate as the one revealed above with an extra factor 2/ζ. Here ζ > 0 accounts for the geometric topology of non-degenerate stationary points (see Definition 1).
Based on the above two new results, we also derive the uniform convergence of the empirical risk Ĵn(w) to its population risk J(w), which helps understand the generalization error of deep learning models and the stability of their empirical risk. These analyses reveal the role of the depth l of a neural network model in determining its convergence behavior and performance. The results also show that the width factor √(max_i d_i), the number s of nonzero weight entries, and the total network size d are critical to the convergence and performance. In addition, controlling the magnitudes of the parameters (weights) in DNNs is demonstrated to be important for performance. To our best knowledge, this work is the first to theoretically characterize the uniform convergence of the empirical gradient and stationary points in both deep linear and nonlinear neural networks.
2 RELATED WORK
To date, only a few theories have been developed for understanding DNNs which can be roughly divided into following three categories. The first category aims to analyze training error of DNNs. Baum (1988) pointed out that zero training error can be obtained when the last layer of a neural network has more units than training samples. Later, Soudry & Carmon (2016) proved that for DNNs with leaky rectified linear units (ReLU) and a single output, the training error achieves zero at any of their local minima as long as the product of the number of units in the last two layers is larger than the training sample size.
The second category of works (Dauphin et al., 2014; Choromanska et al., 2015a; Kawaguchi, 2016; Tian, 2017) focuses on analyzing the loss surfaces of DNNs, e.g., how the stationary points are distributed. Those results are helpful for understanding the performance difference between large- and small-size networks (Choromanska et al., 2015b). Among them, Dauphin et al. (2014) experimentally verified that a large number of saddle points indeed exist in DNNs. Under strong assumptions, Choromanska et al. (2015a) connected the loss function of a deep ReLU network with the spherical spin-glass model and described the locations of the local minima. Later, Kawaguchi (2016) proved the existence of degenerate saddle points for deep linear neural networks with the squared loss function; they also showed that any local minimum is also a global minimum. By utilizing techniques from dynamical systems analysis, Tian (2017) gave guarantees that for two-layer bias-free networks with ReLUs, the gradient descent algorithm with certain symmetric weight initializations converges globally to the ground-truth weights if the inputs follow a Gaussian distribution. Recently, Nguyen & Hein (2017) proved that for a fully connected network with squared loss and analytic activation functions, almost all local minima are globally optimal if one hidden layer has more units than training samples and the network structure after this layer is pyramidal. Besides, some recent works, e.g., (Zhang et al., 2016; 2017), tried to alleviate the analysis difficulties by relaxing the involved highly nonconvex functions into easier ones.
In addition, some existing works (Bartlett & Maass, 2003; Neyshabur et al., 2015) analyze the generalization performance of a DNN model. Based on the Vapnik–Chervonenkis (VC) theory, Bartlett & Maass (2003) proved that for a feedforward neural network with one-dimensional output, the best convergence rate of the empirical risk to its population risk on the sample distribution can be bounded by its fat-shattering dimension. Recently, Neyshabur et al. (2015) adopted Rademacher complexity to analyze learning capacity of a fully-connected neural network model with ReLU activation functions and bounded inputs.
However, although gradient descent with backpropagation is the most common optimization technique for DNNs, none of the existing works analyzes the convergence properties of the gradient and stationary points of the DNN empirical risk. For single-layer optimization problems, some previous works analyze the empirical risk but essentially differ from our analysis method. For example, Negahban et al. (2009) proved that for a regularized convex program, the minimum of the empirical risk uniformly converges to the true minimum of the population risk under certain conditions. Gonen & Shalev-Shwartz (2017) proved that for nonconvex problems without degenerate saddle points, the difference between the empirical risk and the population risk can be bounded. Unfortunately, the loss of DNNs is highly nonconvex and has degenerate saddle points (Fyodorov & Williams, 2007; Dauphin et al., 2014; Kawaguchi, 2016), so their analysis results are not applicable. Mei et al. (2017) analyzed the convergence behavior of the empirical risk for nonconvex problems, but they only considered single-layer nonconvex problems, and their analysis demands strong sub-Gaussian and sub-exponential assumptions on the gradient and Hessian of the empirical risk, respectively. Their analysis also assumes a linearity property of the gradient which is difficult to satisfy or verify. In contrast, our analysis requires much milder assumptions. Besides, we prove that for deep networks, which are highly nonconvex, the non-degenerate stationary points of the empirical risk uniformly converge to the corresponding stationary points of the population risk at the rate of O(√(s/n)), which is faster than the rate O(√(d/n)) for single-layer optimization problems in (Mei et al., 2017). Also, Mei et al. (2017) did not analyze the convergence rate of the empirical risk, or the stability or generalization error of DNNs, as this work does.
3 PRELIMINARIES
Throughout the paper, we denote matrices by boldface capital letters, e.g. A, vectors by boldface lowercase letters, e.g. a, and scalars by lowercase letters, e.g. a. We define the r-radius ball as B^d(r) ≜ {z ∈ R^d | ‖z‖₂ ≤ r}. To explain the results, we also need the vectorization operation vec(·), defined as vec(A) = (A(:,1); · · · ; A(:,t)) ∈ R^{st}, which vectorizes A ∈ R^{s×t} along its columns. We use d = Σ_{j=1}^l d_j d_{j−1} to denote the total dimension of the weight parameters, where d_j denotes the output dimension of the j-th layer.
In this work, we consider both linear and nonlinear DNNs. Suppose both networks consist of l layers. We use u^(j) and v^(j) to denote the input and output of the j-th layer, respectively, ∀j = 1, . . . , l.
Deep linear neural networks: The function of the j-th layer is formulated as
$$u^{(j)} \triangleq W^{(j)}v^{(j-1)} \in \mathbb{R}^{d_j}, \qquad v^{(j)} \triangleq u^{(j)} \in \mathbb{R}^{d_j}, \qquad \forall j = 1,\cdots,l,$$
where v^(0) = x is the input and W^(j) ∈ R^{d_j×d_{j−1}} is the weight matrix of the j-th layer.
Deep nonlinear neural networks: We adopt the sigmoid function as the nonlinear activation function. The function within the j-th layer can be written as
$$u^{(j)} \triangleq W^{(j)}v^{(j-1)} \in \mathbb{R}^{d_j}, \qquad v^{(j)} \triangleq h_j(u^{(j)}) = \big(\sigma(u^{(j)}_1);\cdots;\sigma(u^{(j)}_{d_j})\big) \in \mathbb{R}^{d_j}, \qquad \forall j = 1,\cdots,l,$$
where u^(j)_i denotes the i-th entry of u^(j) and σ(·) is the sigmoid function, i.e., σ(a) = 1/(1 + e^{−a}).
Following common practice, both DNN models adopt the squared loss function f(w;x,y) = ½‖v^(l) − y‖₂², where w = (w^(1); · · · ; w^(l)) ∈ R^d contains all the weight parameters and w^(j) = vec(W^(j)) ∈ R^{d_j d_{j−1}}. The empirical risk Ĵn(w) is then
$$\hat{J}_n(w) = \frac{1}{n}\sum_{i=1}^n f(w;x_{(i)},y_{(i)}) = \frac{1}{2n}\sum_{i=1}^n\big\|v^{(l)}_{(i)}-y_{(i)}\big\|_2^2,$$
where v^(l)_{(i)} is the network's output for x_{(i)}.
4 RESULTS FOR DEEP LINEAR NEURAL NETWORKS
We first analyze linear neural network models and present following new results: (1) the uniform convergence of the empirical risk gradient to its population counterpart and (2) the convergence properties of non-degenerate stationary points of the empirical risk. As a corollary, we also derive the uniform convergence of the empirical risk to the population one, which further gives stability and generalization bounds. In the next section, we extend the analysis to non-linear neural network models.
We assume the input datum x is τ²-sub-Gaussian and has bounded magnitude, as formally stated in Assumption 1.
Assumption 1. The input datum x ∈ R^{d₀} has zero mean and is τ²-sub-Gaussian, i.e.,
$$\mathbb{E}[\exp(\langle\lambda, x\rangle)] \le \exp\Big(\frac{1}{2}\tau^2\|\lambda\|_2^2\Big), \qquad \forall\lambda\in\mathbb{R}^{d_0}.$$
Besides, the magnitude of x is bounded as ‖x‖₂ ≤ r_x, where r_x is a positive universal constant.
Note that any random vector z consisting of independent entries with bounded magnitude is sub-Gaussian and satisfies Assumption 1 (Vershynin, 2012). Moreover, for such a random z, we have τ = ‖z‖∞ ≤ ‖z‖₂ ≤ r_x. Such an assumption on bounded magnitude generally holds for natural data, e.g., images and speech signals. Besides, we assume the weight parameters w^(j) of each layer are bounded as w ∈ Ω = {w | w^(j) ∈ B^{d_j d_{j−1}}(r_j), ∀j = 1, · · · , l}, where r_j is a constant. For notational simplicity, we let r = max_j r_j. Such an assumption is common (Xu & Mannor, 2012). Here we assume each entry value of y falls in [0, 1]; for any bounded target output y, we can always scale it to satisfy such a requirement.
The results presented for linear neural networks here can be generalized to deep ReLU neural networks by applying the results from Choromanska et al. (2015a) and Kawaguchi (2016), which transform deep ReLU neural networks into deep linear neural networks under proper assumptions.
4.1 UNIFORM CONVERGENCE OF EMPIRICAL RISK GRADIENT
We first analyze the convergence of the gradients of the DNN empirical and population risks. To our best knowledge, these results are the first guarantees on gradient convergence, which help better understand the landscape of DNNs and their optimization behavior. The results are stated below.
Theorem 1. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there exist two universal constants c_{g′} and c_g such that if n ≥ c_{g′} max(l³r²r_x⁴/(c_q s log(d/l)ε²τ⁴ log(1/ε)), s log(d/l)/(lτ²)), where c_q = √(max_{0≤i≤l} d_i), then
$$\sup_{w\in\Omega}\big\|\nabla\hat{J}_n(w)-\nabla J(w)\big\|_2 \le \epsilon_g \triangleq c_g\,\tau\,\omega_g\sqrt{l}\,c_q\,\sqrt{\frac{s\log(dn/l)+\log(12/\varepsilon)}{n}}$$
holds with probability at least 1 − ε, where s denotes the number of nonzero entries of all weight parameters and ω_g = max(τr^{2l−1}, r^{2l−1}, r^{l−1}).
From Theorem 1, one can observe that with an increasingly larger sample size n, the difference between the empirical and population risk gradients decreases monotonically at the rate of O(1/√n) (up to a log factor). Theorem 1 also characterizes how the depth l affects the difference between the empirical and population risk gradients: a deeper neural network needs more training samples to mitigate the difference. Also, due to the factor d, training a network of larger size using gradient descent requires more training samples. We observe a factor of √(max_i d_i) (i.e. c_q), which prefers a DNN architecture of balanced layer sizes (without extremely wide layers). This also matches the trend and empirical performance in deep learning applications advocating deep but thin networks (He et al., 2016; Szegedy et al., 2015).
By observing Theorem 1, imposing certain regularization on the weight parameters is useful. For example, reducing the number of nonzero entries s corresponds to sparsity regularization like ‖w‖₁, and the results also suggest avoiding large-magnitude weights w, in order to obtain a smaller factor r, by adopting regularization like ‖w‖₂². Theorem 1 further implies that the empirical and population risks have similar properties when the sample size n is sufficiently large. For example, an ε/2-stationary point w̃ of Ĵn(w) is also an ε-stationary point of J(w) with probability 1 − ε if n ≥ c(τω_g/ε)² l c_q s log(d/l), with c a constant. Here an ε-stationary point of a function F is a point w satisfying ‖∇_w F‖₂ ≤ ε. Understanding such properties is useful, since in practice one usually computes an ε-stationary point of Ĵn(w); these results guarantee that the computed point is at most a 2ε-stationary point of J(w) and is thus close to an exact stationary point.
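The stationarity-transfer claim is a one-line triangle-inequality argument, spelled out here for completeness: for any ε/2-stationary point w̃ of Ĵn(w),
$$\|\nabla J(\tilde{w})\|_2 \le \|\nabla\hat{J}_n(\tilde{w})\|_2 + \sup_{w\in\Omega}\big\|\nabla\hat{J}_n(w)-\nabla J(w)\big\|_2 \le \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon,$$
once n is large enough that the uniform gap εg of Theorem 1 is at most ε/2.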
4.2 UNIFORM CONVERGENCE OF STATIONARY POINTS
We then proceed to analyze the distribution and convergence properties of the stationary points of the DNN empirical risk. Here we consider non-degenerate stationary points, which are geometrically isolated and thus unique within local regions. Since degenerate stationary points are not unique within a local region, we cannot expect to establish a one-to-one correspondence (see below) between those of the empirical risk and those of the population risk.
Definition 1. (Non-degenerate stationary points) (Gromoll & Meyer, 1969) A stationary point w of J(w) is said to be non-degenerate if it satisfies
$$\inf_i\big|\lambda_i\big(\nabla^2 J(w)\big)\big| \ge \zeta,$$
where λ_i(∇²J(w)) denotes the i-th eigenvalue of the Hessian ∇²J(w) and ζ is a positive constant.
Non-degenerate stationary points include local minima/maxima and non-degenerate saddle points, while degenerate stationary points refer to degenerate saddle points. We next introduce the index of a non-degenerate stationary point, which characterizes its geometric properties.
Definition 2. (Index of non-degenerate stationary points) (Dubrovin et al., 2012) The index of a symmetric non-degenerate matrix is the number of its negative eigenvalues, and the index of a non-degenerate stationary point w of a smooth function F is simply the index of its Hessian ∇²F(w).
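To make Definitions 1 and 2 concrete, the following sketch (ours; the tiny two-layer model, sample size, and ζ are illustrative assumptions) computes a finite-difference Hessian of an empirical risk, then checks non-degeneracy (min_i |λ_i| ≥ ζ) and reads off the index (the count of negative eigenvalues). At a true stationary point one would first verify ∇J(w) ≈ 0; here we simply evaluate the Hessian-based definitions at an arbitrary w:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
Y = X @ rng.normal(size=3)                     # scalar targets from a random linear teacher

def J(w):
    """Empirical risk of a tiny 2-layer linear net: prediction = w[3] * (w[:3] @ x)."""
    pred = (X @ w[:3]) * w[3]
    return 0.5 * np.mean((pred - Y) ** 2)

def hessian(F, w, h=1e-4):
    d = len(w)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            wa = w.copy(); wa[i] += h; wa[j] += h
            wb = w.copy(); wb[i] += h; wb[j] -= h
            wc = w.copy(); wc[i] -= h; wc[j] += h
            wd = w.copy(); wd[i] -= h; wd[j] -= h
            H[i, j] = (F(wa) - F(wb) - F(wc) + F(wd)) / (4 * h * h)
    return H

w = rng.normal(size=4)
eigs = np.linalg.eigvalsh(hessian(J, w))
zeta = 1e-3
print("eigenvalues:", eigs)
print("non-degenerate (Definition 1):", bool(np.min(np.abs(eigs)) >= zeta))
print("index (Definition 2):", int(np.sum(eigs < 0)))
```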
Suppose that J(w) has m non-degenerate stationary points, denoted {w^(1), w^(2), · · · , w^(m)}. We prove the following convergence behavior of these stationary points.
Theorem 2. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. If n ≥ c_h max(l³r²r_x⁴/(c_q s log(d/l)ε²τ⁴ log(1/ε)), s log(d/l)/ζ²), where c_h is a constant, then for each k ∈ {1, · · · , m} there exists a non-degenerate stationary point w_n^(k) of Ĵn(w) which corresponds to the non-degenerate stationary point w^(k) of J(w) with probability at least 1 − ε. In addition, w_n^(k) and w^(k) have the same non-degenerate index and they satisfy
$$\big\|w_n^{(k)}-w^{(k)}\big\|_2 \le \frac{2c_g\tau\omega_g}{\zeta}\sqrt{l}\,c_q\,\sqrt{\frac{s\log(dn/l)+\log(12/\varepsilon)}{n}}, \qquad (k=1,\cdots,m)$$
with probability at least 1 − ε, where the parameters c_q, ω_g and c_g are given in Theorem 1.
Theorem 2 guarantees the one-to-one correspondence between the non-degenerate stationary points of the empirical risk Ĵn(w) and the population risk J(w). The distances between the corresponding pairs
become smaller as n increases. In addition, the corresponding pairs have the same non-degenerate index, implying that the corresponding stationary points have the same geometric properties, such as whether they are saddle points. Accordingly, we can develop more efficient algorithms, e.g. for escaping saddle points (Ge et al., 2015), since Dauphin et al. (2014) empirically showed that saddle points are usually surrounded by high-error plateaus. Also, when n is sufficiently large, the stationary points of Ĵn(w) behave similarly to those of the population risk J(w), in the sense that they have exactly matching local minima/maxima and non-degenerate saddle points. By comparing Theorems 1 and 2, we find that the sample-size requirement in Theorem 2 is more restrictive, since establishing an exact one-to-one correspondence between the non-degenerate stationary points of Ĵn(w) and J(w) and bounding their uniform convergence rate to each other are more challenging. From Theorems 1 and 2, we also notice that the uniform convergence rate of the non-degenerate stationary points has an extra factor 1/ζ. This is because bounding stationary points requires access not only to the gradient itself but also to the Hessian matrix; see the proof for more details.
Kawaguchi (2016) pointed out that degenerate stationary points indeed exist for DNNs. However, since degenerate stationary points are not isolated, such as forming flat regions, it is hard to establish the unique correspondence for them as for non-degenerate ones. Fortunately, by Theorem 1, the gradients at these points of Ĵn(w) and J(w) are close. This implies that a degenerate stationary point of J(w) will also give a near-zero gradient for Ĵn(w), i.e., it is also a stationary point for Ĵn(w).
In the proof, we consider the essential multi-layer architecture of the deep linear network, and do not transform it into a linear regression model and directly apply existing results (see Loh & Wainwright (2015) and Negahban et al. (2011)). This is because we care more about deep ReLU networks which cannot be reduced in this way. Our proof technique is more suitable for analyzing the multi-layer neural networks which paves a way for analyzing deep ReLU networks. Also such an analysis technique can reveal the role of network parameters (dimension, norm, etc.) of each weight matrix in the results which may benefit the design of networks. Besides, the obtained results are more consistent with those for deep nonlinear networks (see Sec. 5).
4.3 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK
Based on the above results, we can easily derive the uniform convergence of the empirical risk to the population risk. In this subsection, we first give the uniform convergence rate of the empirical risk for deep linear neural networks in Theorem 3, and then use this result to derive the stability and generalization bounds for DNNs in Corollary 1.
Theorem 3. Suppose Assumption 1 on the input datum x holds and the activation functions in the deep neural network are linear. Then there exist two universal constants c_{f′} and c_f such that if n ≥ c_{f′} max(l³r_x⁴/(d_l s log(d/l)ε²τ⁴ log(1/ε)), s log(d/l)/(τ²d_l)), then
$$\sup_{w\in\Omega}\big|\hat{J}_n(w)-J(w)\big| \le \epsilon_f \triangleq c_f\,\tau\max\big(\sqrt{d_l}\,\tau r^{2l},\ r^l\big)\sqrt{\frac{s\log(dn/l)+\log(8/\varepsilon)}{n}} \tag{2}$$
holds with probability at least 1 − ε. Here l is the number of layers in the neural network, n is the sample size and d_l is the dimension of the final layer.
From Theorem 3, when n→ +∞, we have |Ĵn(w)− J(w)| → 0. According to the definition of uniform convergence (Vapnik & Vapnik, 1998; Shalev-Shwartz et al., 2010), under the distribution D, the empirical risk of a deep linear neural network converges to its population risk uniformly at the rate of O(1/ √ n). Theorem 3 also explains the roles of the depth l, the network size d, and the number of nonzero weight parameters s in a DNN model.
Based on VC-dimension techniques, Bartlett & Maass (2003) proved that for a feedforward neural network with polynomial activation functions and one-dimensional output, with probability at least 1 − ε the convergence bound satisfies |Ĵn(w) − inf_f J(w)| ≤ O(√((γ log²(n) + log(1/ε))/n)). Here γ is the shattering parameter and can be as large as the VC-dimension of the network model, i.e. of the order O(ld log(d) + l²d) (Bartlett & Maass, 2003). Note that Bartlett & Maass (2003) did not reveal the role of the weight magnitude in their results. In contrast, our uniform convergence bound is sup_{w∈Ω} |Ĵn(w) − J(w)| ≤ O(√((s log(dn/l) + log(1/ε))/n)), so our convergence rate is tighter.
Neyshabur et al. (2015) proved that the Rademacher complexity of a fully-connected neural network model with ReLU activation functions and one-dimensional output is $O(r^l/\sqrt{n})$ (see Corollary 2 in (Neyshabur et al., 2015)). Then, by a Rademacher-complexity-based argument (Shalev-Shwartz & Ben-David, 2014a), we have $|\sup_f(\hat J_n(w)-J(w))| \le O\big((r^l+\sqrt{\log(1/\epsilon)})/\sqrt{n}\big)$ with probability at least $1-\epsilon$, where the loss function is the training error $g=\mathbf{1}_{(v^{(l)}\ne y)}$ in which $v^{(l)}$ is the output of the $l$-th layer of the network model $f(w;x,y)$. The convergence rate in our theorem is $O\big(r^{2l}\sqrt{(s\log(d/l)+\log(1/\epsilon))/n}\big)$ and has the same convergence speed $O(1/\sqrt{n})$ w.r.t. the sample number $n$. Note that our convergence rate involves $r^{2l}$ since we use the squared loss instead of the training error in (Neyshabur et al., 2015). The extra parameters $s$ and $d$ appear because we consider the parameter space rather than the function hypothesis $f$ as in (Neyshabur et al., 2015), which makes the roles of the network parameters more transparent. Besides, unlike our techniques, Rademacher complexity cannot be applied to analyzing convergence properties of the empirical risk gradient and stationary points.
Based on Theorem 3, we proceed to analyze the stability of the empirical risk and the convergence rate of the generalization error in expectation. Let $S=\{(x_{(1)},y_{(1)}),\cdots,(x_{(n)},y_{(n)})\}$ denote the sample set, in which the samples are i.i.d. drawn from $\mathcal{D}$. When the optimal solution $w^n$ to problem (1) is computed by a deterministic algorithm, the generalization error is defined as $\epsilon_g=\hat J_n(w^n)-J(w^n)$. But one usually employs randomized algorithms, e.g. stochastic gradient descent (SGD), for computing $w^n$. In this case, the stability and generalization error in expectation defined in Definition 3 are more applicable.
Definition 3. (Stability and generalization in expectation) (Vapnik & Vapnik, 1998; Shalev-Shwartz et al., 2010; Gonen & Shalev-Shwartz, 2017) Assume a randomized algorithm $\mathcal{A}$ is employed, $((x'_{(1)},y'_{(1)}),\cdots,(x'_{(n)},y'_{(n)}))\sim\mathcal{D}$, and $w^n=\operatorname{argmin}_w\hat J_n(w)$ is the empirical risk minimizer (ERM). For every $j\in[n]$, suppose $w^j_*=\operatorname{argmin}_w\frac{1}{n-1}\sum_{i\ne j}f_i(w;x_{(i)},y_{(i)})$. We say that the ERM is on average stable with stability rate $\epsilon_k$ under distribution $\mathcal{D}$ if
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big[f_j(w^j_*;x'_{(j)},y'_{(j)})-f_j(w^n;x'_{(j)},y'_{(j)})\big]\Big|\le\epsilon_k.$$
The ERM is said to have generalization error with convergence rate $\epsilon_{k'}$ under distribution $\mathcal{D}$ if $\big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat J_n(w^n)\big)\big|\le\epsilon_{k'}$.
Stability measures the sensitivity of the empirical risk to the input, and the generalization error measures the effectiveness of the ERM on new data. Generalization error in expectation is especially important for DNNs, considering their internal randomness, e.g. from SGD optimization. We now present the results on the stability and generalization performance of deep linear neural networks.
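To make Definition 3 concrete, the sketch below estimates the on-average leave-one-out quantity for a stand-in model whose ERM is closed-form, namely plain least squares; this is our own simplification for illustration, not the paper's DNN setting.

```python
import numpy as np

# Hedged sketch (our stand-in): Definition 3 for a model whose ERM is
# closed-form -- least squares with squared loss f(w; x, y) = 0.5*(x@w - y)^2.
rng = np.random.default_rng(2)
d, n = 5, 200
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

def erm(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]      # empirical risk minimizer

wn = erm(X, y)
gap = 0.0
for j in range(n):                                   # leave-one-out ERMs w_*^j
    mask = np.arange(n) != j
    wj = erm(X[mask], y[mask])
    xp = rng.normal(size=d)                          # fresh sample (x', y') ~ D
    yp = xp @ w_true + 0.1 * rng.normal()
    gap += 0.5 * ((xp @ wj - yp) ** 2 - (xp @ wn - yp) ** 2)
print("one-draw stability estimate:", abs(gap / n))  # small; averaging over many
                                                     # draws estimates the rate in Def. 3
```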
Corollary 1. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then with probability at least $1-\epsilon$, both the stability rate and the generalization error rate of the ERM of a deep linear neural network are at most $\epsilon_f$:
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big(f^*_j-f_j\big)\Big|\le\epsilon_f \quad\text{and}\quad \Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat J_n(w^n)\big)\Big|\le\epsilon_f,$$
where $f^*_j$ and $f_j$ respectively denote $f_j(w^j_*;x'_{(j)},y'_{(j)})$ and $f_j(w^n;x'_{(j)},y'_{(j)})$, and $\epsilon_f$ is defined in Eqn. (2).
According to Corollary 1, both the stability rate and the convergence rate of the generalization error are $O(\epsilon_f)$. This result indicates that the empirical risk of deep linear networks is stable and that the ERM output is robust to small perturbations of the training data. When $n$ is sufficiently large, a small generalization error of DNNs is guaranteed.
5 RESULTS FOR DEEP NONLINEAR NEURAL NETWORKS
In the above section, we analyzed the empirical risk optimization landscape for deep linear neural network models. In this section, we extend our analysis to deep nonlinear neural networks that adopt the sigmoid activation function. Our analysis techniques are also applicable to other third-order differentiable activation functions, e.g. the tanh function, with different convergence rates. Here we assume the input data are i.i.d. Gaussian variables.
Assumption 2. The input datum x is a vector of i.i.d. Gaussian variables from $\mathcal{N}(0,\tau^2)$.
Since the sigmoid function maps any input into the range $[0,1]$, we do not require the input $x$ to have bounded magnitude. Such an assumption is common. For instance, Tian (2017) and Soudry & Hoffer (2017) both assumed that the entries of the input vector follow a Gaussian distribution. We also assume $w\in\Omega$, as in (Xu & Mannor, 2012), and that the entry values of the target output $y$ fall in $[0,1]$. Similarly to the analysis of deep linear neural networks, we aim to characterize the empirical risk gradient, the stationary points, and the empirical risk itself for deep nonlinear neural networks.
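For concreteness, a minimal NumPy sketch (our own, with made-up layer sizes) of the forward map analyzed in this section, matching $u^{(j)}=W^{(j)}v^{(j-1)}$ and $v^{(j)}=\sigma(u^{(j)})$:

```python
import numpy as np

# Minimal sketch of the nonlinear model analyzed here: each layer computes
# u^(j) = W^(j) v^(j-1) followed by an elementwise sigmoid.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(Ws, x):
    v = x
    for W in Ws:                       # l layers: v^(j) = sigmoid(W^(j) v^(j-1))
        v = sigmoid(W @ v)
    return v                           # v^(l); every entry lies in (0, 1)

rng = np.random.default_rng(3)
dims = [6, 5, 4, 3]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
x = rng.normal(size=dims[0])           # Assumption 2: i.i.d. Gaussian entries
v_out = forward(Ws, x)
print(v_out, (0 < v_out).all() and (v_out < 1).all())
# Because sigmoid maps R into (0, 1), the loss 0.5*||v^(l) - y||^2 stays bounded
# even for unbounded inputs -- the point made in the text above.
```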
5.1 UNIFORM CONVERGENCE OF GRADIENT AND STATIONARY POINTS
Here we analyze convergence properties of gradients of the empirical risk for deep nonlinear neural networks.
Theorem 4. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there are two universal constants $c_y$ and $c_{y'}$ such that if $n\ge c_{y'}c_dl^3r^2/(s\log(d)\tau^2\epsilon^2\log(1/\epsilon))$, where $c_d=\max_{0\le i\le l}d_i$, then with probability at least $1-\epsilon$
$$\sup_{w\in\Omega}\Big\|\nabla\hat J_n(w)-\nabla J(w)\Big\|_2 \le \epsilon_l \triangleq \tau\sqrt{\frac{512}{729}\,c_y\,l(l+2)(lc_r+1)\,c_dc_r}\ \sqrt{\frac{s\log(dn/l)+\log(4/\epsilon)}{n}},$$
where $c_r=\max\big(r^2/16,(r^2/16)^{l-1}\big)$ and $s$ denotes the number of nonzero entries of all the weights.
Similarly to deep linear neural networks, the layer number $l$, the widths $d_i$, the number of nonzero parameter entries $s$, the network size $d$, and the magnitude of the weights are all critical to the convergence rate. Also, since the convergence rate contains a factor $\max_i d_i$, it is better to avoid choosing an extremely wide layer. Interestingly, when analyzing the representation ability of deep learning, Eldan & Shamir (2016) also suggested avoiding extremely wide layers, though their conclusion was derived from a different perspective. By comparing Theorems 1 and 4, one can observe a factor $(1/16)^{l-1}$ in the convergence rate of Theorem 4. This is because the convergence rate depends on the Lipschitz constant, and when bounding it, the sigmoid activation function contributes a factor of $1/16$ for each layer.
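To see where a per-layer contraction of this kind comes from, recall the standard bound on the sigmoid derivative:
$$\sigma'(a)=\sigma(a)\big(1-\sigma(a)\big)\le\max_{p\in[0,1]}p(1-p)=\frac{1}{4}\quad\text{for all }a\in\mathbb{R},$$
so each layer multiplies propagated derivatives by at most $1/4$; the specific constant $1/16=(1/4)^2$ is plausibly what this becomes once the bound enters through squared (second-order) quantities, with the exact bookkeeping in the supplementary proofs.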
Now we analyze the non-degenerate stationary points of the empirical risk for deep nonlinear neural networks. Here we also assume that the population risk has $m$ non-degenerate stationary points, denoted $\{w^{(1)},w^{(2)},\cdots,w^{(m)}\}$.
Theorem 5. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. If $n\ge c_s\max\big(c_dl^3r^2/(s\log(d)\tau^2\epsilon^2\log(1/\epsilon)),\ s\log(d/l)/\zeta^2\big)$, where $c_s$ is a constant, then for $k\in\{1,\cdots,m\}$ there exists a non-degenerate stationary point $w^{(k)}_n$ of $\hat J_n(w)$ which corresponds to the non-degenerate stationary point $w^{(k)}$ of $J(w)$ with probability at least $1-\epsilon$. Moreover, $w^{(k)}_n$ and $w^{(k)}$ have the same non-degenerate index and they obey
$$\Big\|w^{(k)}_n-w^{(k)}\Big\|_2 \le \frac{2\tau}{\zeta}\sqrt{\frac{512}{729}\,c_y\,l(l+2)(lc_r+1)\,c_dc_r}\ \sqrt{\frac{s\log(dn/l)+\log(4/\epsilon)}{n}},\quad (k=1,\cdots,m)$$
with probability at least $1-\epsilon$, where $c_y$, $c_d$ and $c_r$ are the same parameters as in Theorem 4.
According to Theorem 5, there is a one-to-one correspondence between the non-degenerate stationary points of $\hat J_n(w)$ and $J(w)$. Also, each corresponding pair has the same non-degenerate index, implying exactly matching local minima/maxima and non-degenerate saddle points. When $n$ is sufficiently large, the non-degenerate stationary point $w^{(k)}_n$ of $\hat J_n(w)$ is very close to its corresponding non-degenerate stationary point $w^{(k)}$ of $J(w)$. As for the degenerate stationary points, Theorem 4 guarantees that the gradients of $J(w)$ and $\hat J_n(w)$ at these points are very close to each other.
5.2 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK
Here we first give the uniform convergence analysis of the empirical risk and then analyze its stability and generalization.
Theorem 6. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. If $n\ge 18l^2r^2/(s\log(d)\tau^2\epsilon^2\log(1/\epsilon))$, then
$$\sup_{w\in\Omega}\Big|\hat J_n(w)-J(w)\Big| \le \epsilon_n \triangleq \tau\sqrt{\frac{9}{8}\,c_yc_d\big(1+c_r(l-1)\big)}\ \sqrt{\frac{s\log(nd/l)+\log(4/\epsilon)}{n}} \tag{3}$$
holds with probability at least $1-\epsilon$, where $c_y$, $c_d$ and $c_r$ are given in Theorem 4.
From Theorem 6, we obtain that under the distribution $\mathcal{D}$, the empirical risk of a deep nonlinear neural network converges at the rate of $O(1/\sqrt{n})$ (up to a log factor). Theorem 6 also gives conclusions similar to Theorem 3, including the preference for regularization penalties on the weights and the suggestion to avoid extremely wide layers. Similarly to linear networks, our risk convergence rate is also tighter than the rate for networks with polynomial activation functions and one-dimensional output in (Bartlett & Maass, 2003), since ours is of order $O\big(\sqrt{(l-1)(s\log(dn/l)+\log(1/\epsilon))/n}\big)$ while the latter is $O\big(\sqrt{(\gamma\log^2(n)+\log(1/\epsilon))/n}\big)$, where $\gamma$ is of order $O(ld\log(d)+l^2d)$ (Bartlett & Maass, 2003).
We then establish the stability property and the generalization error of the empirical risk for nonlinear neural networks. By Theorem 6, we can obtain the following results.
Corollary 2. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. Then with probability at least $1-\epsilon$, we have
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big(f^*_j-f_j\big)\Big|\le\epsilon_n \quad\text{and}\quad \Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat J_n(w^n)\big)\Big|\le\epsilon_n,$$
where $\epsilon_n$ is defined in Eqn. (3), and $f^*_j$ and $f_j$ are the same as in Corollary 1.
By Corollary 2, both the stability rate and the convergence rate of the generalization error are $O(1/\sqrt{n})$. This result accords with Theorems 8 and 9 in (Shalev-Shwartz et al., 2010), which imply that $O(1/\sqrt{n})$ is the bottleneck of the stability and generalization convergence rate for generic learning algorithms. From this result, we have that if $n$ is sufficiently large, the empirical risk can be expected to be very stable. This also dispels misgivings about the random selection of training samples in practice. Such a result indicates that a deep nonlinear neural network can offer good performance on testing data if it achieves a small training error.
6 PROOF ROADMAP
Here we briefly introduce our proof roadmap. Due to space limitations, all the proofs of Theorems 1 ∼ 6 and Corollaries 1 and 2, as well as the technical lemmas, are deferred to the supplementary material. The proofs of Theorems 1 and 4 are similar but essentially differ in some techniques for bounding probabilities, due to their different assumptions. For simplicity of explanation, we define four events:
$$E=\Big\{\sup_{w\in\Omega}\|\nabla\hat J_n(w)-\nabla J(w)\|_2>t\Big\},\quad E_1=\Big\{\sup_{w\in\Omega}\Big\|\frac{1}{n}\sum_{i=1}^n\big(\nabla f(w,x_{(i)})-\nabla f(w_{k_w},x_{(i)})\big)\Big\|_2>\frac{t}{3}\Big\},$$
$$E_2=\Big\{\sup_{w^i_{k_w}\in\mathcal{N}_i,\,i\in[l]}\Big\|\frac{1}{n}\sum_{i=1}^n\nabla f(w_{k_w},x_{(i)})-\mathbb{E}\nabla f(w_{k_w},x)\Big\|_2>\frac{t}{3}\Big\},\quad E_3=\Big\{\sup_{w\in\Omega}\big\|\mathbb{E}\nabla f(w_{k_w},x)-\mathbb{E}\nabla f(w,x)\big\|_2>\frac{t}{3}\Big\},$$
where $w_{k_w}=[w^1_{k_w};w^2_{k_w};\cdots;w^l_{k_w}]$ is constructed by selecting $w^i_{k_w}\in\mathbb{R}^{d_id_{i-1}}$ from the $\epsilon\sqrt{d_id_{i-1}/d}$-net $\mathcal{N}_i$ such that $\|w-w_{k_w}\|_2\le\epsilon$. Note that in Theorems 1 and 4, $t$ is respectively set to $\epsilon_g$ and $\epsilon_l$. Then we have $\mathbb{P}(E)\le\mathbb{P}(E_1)+\mathbb{P}(E_2)+\mathbb{P}(E_3)$, so we only need to bound $\mathbb{P}(E_1)$, $\mathbb{P}(E_2)$ and $\mathbb{P}(E_3)$ separately. For $\mathbb{P}(E_1)$ and $\mathbb{P}(E_3)$, we use the gradient Lipschitz constant and the properties of the $\epsilon$-net to prove $\mathbb{P}(E_1)\le\epsilon/2$ and $\mathbb{P}(E_3)=0$, while bounding $\mathbb{P}(E_2)$ needs more effort. Based on the assumptions, we prove that $\mathbb{P}(E_2)$ has a sub-exponential tail associated with the sample number $n$ and the network parameters, and that it satisfies $\mathbb{P}(E_2)\le\epsilon/2$ under proper conditions. Finally, combining the bounds on the three terms yields the desired results.
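For readability, the decomposition behind this union bound is simply the triangle inequality applied to
$$\nabla\hat J_n(w)-\nabla J(w)=\frac{1}{n}\sum_{i=1}^n\big(\nabla f(w,x_{(i)})-\nabla f(w_{k_w},x_{(i)})\big)+\Big(\frac{1}{n}\sum_{i=1}^n\nabla f(w_{k_w},x_{(i)})-\mathbb{E}\nabla f(w_{k_w},x)\Big)+\big(\mathbb{E}\nabla f(w_{k_w},x)-\mathbb{E}\nabla f(w,x)\big),$$
so $\|\nabla\hat J_n(w)-\nabla J(w)\|_2>t$ forces at least one of the three summands to exceed $t/3$ in norm, i.e. $E\subseteq E_1\cup E_2\cup E_3$.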
To prove Theorems 2 and 5, we first prove the uniform convergence of the empirical Hessian to the population Hessian. Then we define the set $D=\{w\in\Omega:\ \|\nabla J(w)\|_2<\epsilon \text{ and } \inf_i|\lambda_i(\nabla^2J(w))|\ge\zeta\}$. In this way, $D$ can be decomposed into countably many components, each containing either exactly one non-degenerate stationary point or none. For each component, the uniform convergence of the gradient and results from differential topology guarantee that if $J(w)$ has no stationary point there, then $\hat J_n(w)$ has none either, and vice versa. Similarly, for each component, the uniform convergence of the Hessian and results from differential topology guarantee that if $J(w)$ has a unique non-degenerate stationary point there, then $\hat J_n(w)$ also has a unique non-degenerate stationary point with the same index. After establishing the exact correspondence between the non-degenerate stationary points of the empirical risk and the population risk, we use the uniform convergence of the gradient and Hessian to bound the distance between the corresponding pairs.
We adopt a similar strategy to prove Theorems 3 and 6. Specifically, we divide the event $\sup_{w\in\Omega}|\hat J_n(w)-J(w)|>t$ into $E_1$, $E_2$ and $E_3$, which have the same forms as their counterparts in the proof of Theorem 1 with the gradient replaced by the loss function. To prove $\mathbb{P}(E_1)\le\epsilon/2$ and $\mathbb{P}(E_3)=0$, we can use the Lipschitz constant of the loss function and the $\epsilon$-net properties. It remains to bound $\mathbb{P}(E_2)$: we also prove that it has a sub-exponential tail associated with the sample number $n$ and the network parameters, and that it obeys $\mathbb{P}(E_2)\le\epsilon/2$ under proper conditions. We then utilize the uniform convergence of $\hat J_n(w)$ to prove the stability and generalization bounds of $\hat J_n(w)$ (i.e. Corollaries 1 and 2).
7 CONCLUSION
In this work, we provided theoretical analysis of the landscape of the empirical risk optimized in deep linear/nonlinear neural networks with (stochastic) gradient descent, including the convergence behavior of the gradient and stationary points of the empirical risk as well as the uniform convergence, stability, and generalization of the empirical risk itself. To our best knowledge, most of these results are new to the deep learning community. They reveal that the depth $l$, the number of nonzero weight entries $s$, the network size $d$, and the width of a network are critical to the convergence rates. We also prove that the magnitude of the weight parameters is important to the convergence rate; indeed, weights of small magnitude are suggested. All the results are consistent with the network architectures widely used in practice.
ACKNOWLEDGMENT
This work is partially supported by National University of Singapore startup grant R-263-000-C08133, Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112, NUS IDS R-263-000-C67-646 and ECRA R-263-000-C87-133.
REFERENCES
R. Alessandro. Lecture notes of Advanced Statistical Theory I, CMU. http://www.stat.cmu.edu/~arinaldo/36755/F16/Scribed_Lectures/LEC0914.pdf, 2016.
B. Bakshi and G. Stephanopoulos. Wave-net: A multiresolution, hierarchical neural network with localized learning. AIChE Journal, 39(1):57–81, 1993.
P. Bartlett and W. Maass. Vapnik-Chervonenkis dimension of neural nets. The handbook of brain theory and neural networks, pp. 1188–1192, 2003.
E. Baum. On the capabilities of multilayer perceptrons. Journal of complexity, 4(3):193–215, 1988.
A. Choromanska, M. Henaff, M. Mathieu, G. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015a.
A. Choromanska, Y. LeCun, and G. Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In COLT, pp. 1756–1760, 2015b.
R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, pp. 160–167, 2008.
Y. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pp. 2933–2941, 2014.
B. Dubrovin, A. Fomenko, and S. Novikov. Modern geometry—methods and applications: Part II: The geometry and topology of manifolds, volume 104. Springer Science & Business Media, 2012.
R. Eldan and O. Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907–940, 2016.
Y. Fyodorov and I. Williams. Replica symmetry breaking condition exposed by random matrix calculation of landscape complexity. Journal of Statistical Physics, 129(5-6):1081–1116, 2007.
R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In COLT, pp. 797–842, 2015.
A. Gonen and S. Shalev-Shwartz. Fast rates for empirical risk minimization of strict saddle problems. COLT, 2017.
A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645–6649, 2013.
D. Gromoll and W. Meyer. On differentiable functions with isolated critical points. Topology, 8(4):361–369, 1969.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7): 1527–1554, 2006.
G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
K. Kawaguchi. Deep learning without poor local minima. In NIPS, pp. 1097–1105, 2016.
P. Loh and M. J. Wainwright. Regularized m-estimators with nonconvexity: Statistical and algorithmic theory for local optima. JMLR, 16(Mar):559–616, 2015.
S. Mei, Y. Bai, and A. Montanari. The landscape of empirical risk for non-convex losses. Annals of Statistics, 2017.
S. Negahban, B. Yu, M. Wainwright, and P. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In NIPS, pp. 1348–1356, 2009.
S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In NIPS, 2011.
B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In COLT, pp. 1376–1401, 2015.
Q. Nguyen and M. Hein. The loss surface of deep and wide neural networks. In ICML, 2017.
P. Rigollet. Statistics S997 lecture notes, MIT Mathematics. MIT OpenCourseWare, pp. 23–24, 2015.
M. Rudelson and R. Vershynin. Hanson-wright inequality and sub-gaussian concentration. Electronic Communications in Probability, 18(82):1–9, 2013.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge Univ. Press, Cambridge, pp. 375–382, 2014a.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014b.
S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. JMLR, 11:2635–2670, 2010.
S. Shalev-Shwartz, O. Shamir, and S. Shammah. Failures of deep learning. ICML, 2017.
D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.
D. Soudry and E. Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015.
Y. Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. ICML, 2017.
V. N. Vapnik and V. Vapnik. Statistical learning theory, volume 1. Wiley New York, 1998.
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing, Cambridge Univ. Press, Cambridge, pp. 210–268, 2012.
H. Xu and S. Mannor. Robustness and generalization. Machine Learning, 86(3):391–423, 2012.
Y. Zhang, J. Lee, and M. Jordan. ℓ1-regularized neural networks are improperly learnable in polynomial time. In ICML, pp. 993–1001, 2016.
Y. Zhang, P. Liang, and M. Wainwright. Convexified convolutional neural networks. ICML, 2017.
SUPPLEMENTARY MATERIAL OF EMPIRICAL RISK LANDSCAPE ANALYSIS FOR UNDERSTANDING DEEP NEURAL NETWORKS
A STRUCTURE OF THIS DOCUMENT
This document gives some other necessary notations and preliminaries for our analysis in Sec. B. Then we prove Theorems 1∼ 3 and Corollary 1 for deep linear neural networks in Sec. C. Then we present the proofs of Theorems 4 ∼ 6 and Corollary 2 for deep nonlinear neural networks in Sec. D. In both Sec. C and D, we first present the technical lemmas for proving our final results and subsequently present the proofs of these lemmas. Then we utilize these technical lemmas to prove our desired results. Finally, we give the proofs of other auxiliary lemmas.
B NOTATIONS AND PRELIMINARY TOOLS
Beyond the notations introduced in the manuscript, we need some other notations used in this document. Then we introduce several lemmas that will be used later.
B.1 NOTATIONS
Throughout this document, we use $\langle\cdot,\cdot\rangle$ to denote the inner product. $A\otimes C$ denotes the Kronecker product between $A$ and $C$; note that $A$ and $C$ in $A\otimes C$ can be matrices or vectors. For a matrix $A\in\mathbb{R}^{n_1\times n_2}$, we use $\|A\|_F=\sqrt{\sum_{i,j}A_{ij}^2}$ to denote its Frobenius norm, where $A_{ij}$ is the $(i,j)$-th entry of $A$. We use $\|A\|_{op}=\max_i|\lambda_i(A)|$ to denote the operator norm of a matrix $A\in\mathbb{R}^{n_1\times n_1}$, where $\lambda_i(A)$ denotes the $i$-th eigenvalue of $A$. For a 3-way tensor $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$, its operator norm is computed as
$$\|\mathcal{A}\|_{op}=\sup_{\|\lambda\|_2\le 1}\big\langle\lambda^{\otimes 3},\mathcal{A}\big\rangle=\sup_{\|\lambda\|_2\le 1}\sum_{i,j,k}\mathcal{A}_{ijk}\lambda_i\lambda_j\lambda_k,$$
where $\mathcal{A}_{ijk}$ denotes the $(i,j,k)$-th entry of $\mathcal{A}$. Also, we denote the vectorization of $W^{(j)}$ (the weight matrix of the $j$-th layer) as $w^{(j)}=\mathrm{vec}(W^{(j)})\in\mathbb{R}^{d_jd_{j-1}}$.
We denote Ik as the identity matrix of size k × k.
For notational simplicity, we further define $e\triangleq v^{(l)}-y$ as the output error vector. Then the squared loss is defined as $f(w;x,y)=\frac12\|e\|_2^2$, where $w=(w^{(1)};\cdots;w^{(l)})\in\mathbb{R}^d$ contains all the weight parameters.
B.2 TECHNICAL LEMMAS
We first introduce Lemmas 1 and 2, which are respectively used for bounding the $\ell_2$-norm of a vector and the operator norm of a matrix. Then we introduce Lemmas 3 and 4, which discuss the topology of functions. In Lemma 5, we give the relationship between the stability and generalization of the empirical risk.
Lemma 1. (Vershynin, 2012) For any vector $x\in\mathbb{R}^d$, its $\ell_2$-norm can be bounded as
$$\|x\|_2\le\frac{1}{1-\epsilon}\sup_{\lambda\in\lambda_\epsilon}\langle\lambda,x\rangle,$$
where $\lambda_\epsilon=\{\lambda_1,\dots,\lambda_{k_w}\}$ is an $\epsilon$-covering net of $B^d(1)$.
Lemma 2. (Vershynin, 2012) For any symmetric matrix $X\in\mathbb{R}^{d\times d}$, its operator norm can be bounded as
$$\|X\|_{op}\le\frac{1}{1-2\epsilon}\sup_{\lambda\in\lambda_\epsilon}|\langle\lambda,X\lambda\rangle|,$$
where $\lambda_\epsilon=\{\lambda_1,\dots,\lambda_{k_w}\}$ is an $\epsilon$-covering net of $B^d(1)$.
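For intuition, the one-line argument behind Lemma 1 (standard $\epsilon$-net reasoning, included here for completeness): pick $\lambda$ on the unit sphere with $\langle\lambda,x\rangle=\|x\|_2$ and a net point $\lambda'\in\lambda_\epsilon$ with $\|\lambda-\lambda'\|_2\le\epsilon$; then
$$\sup_{\lambda'\in\lambda_\epsilon}\langle\lambda',x\rangle\ge\langle\lambda,x\rangle-\langle\lambda-\lambda',x\rangle\ge\|x\|_2-\epsilon\|x\|_2=(1-\epsilon)\|x\|_2,$$
which rearranges to the stated bound; Lemma 2 follows similarly with two approximations, hence the factor $1-2\epsilon$.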
Lemma 3. (Mei et al., 2017) Let $D\subseteq\mathbb{R}^d$ be a compact set with a $C^2$ boundary $\partial D$, and let $f,g:A\to\mathbb{R}$ be $C^2$ functions defined on an open set $A$ with $D\subseteq A$. Assume that for all $w\in\partial D$ and all $t\in[0,1]$, $t\nabla f(w)+(1-t)\nabla g(w)\ne 0$. Finally, assume that the Hessian $\nabla^2f(w)$ is non-degenerate and has index equal to $r$ for all $w\in D$. Then the following properties hold:
(1) If $g$ has no critical point in $D$, then $f$ has no critical point in $D$.
(2) If $g$ has a unique critical point $w$ in $D$ that is non-degenerate with index $r$, then $f$ also has a unique critical point $w'$ in $D$ with index equal to $r$.
Lemma 4. (Mei et al., 2017) Suppose that $F(w):\Theta\to\mathbb{R}$ is a $C^2$ function, where $w\in\Theta$. Assume that $\{w^{(1)},\dots,w^{(m)}\}$ are its non-degenerate critical points, and let $D=\{w\in\Theta:\ \|\nabla F(w)\|_2<\epsilon \text{ and } \inf_i|\lambda_i(\nabla^2F(w))|\ge\zeta\}$. Then $D$ can be decomposed into (at most) countably many components, each containing either exactly one critical point or no critical point. Concretely, there exist disjoint open sets $\{D_k\}_{k\in\mathbb{N}}$, with $D_k$ possibly empty for $k\ge m+1$, such that
$$D=\cup_{k=1}^{\infty}D_k.$$
Furthermore, $w^{(k)}\in D_k$ for $1\le k\le m$, and each $D_k$ with $k\ge m+1$ contains no stationary point.
Lemma 5. (Shalev-Shwartz & Ben-David, 2014b; Gonen & Shalev-Shwartz, 2017) Assume that $\mathcal{D}$ is a sample distribution and a randomized algorithm $\mathcal{A}$ is employed for optimization. Suppose that $((x'_{(1)},y'_{(1)}),\cdots,(x'_{(n)},y'_{(n)}))\sim\mathcal{D}$ and $w^n=\operatorname{argmin}_w\hat J_n(w)$. For every $j\in\{1,\cdots,n\}$, suppose $w^j_*=\operatorname{argmin}_w\frac{1}{n-1}\sum_{i\ne j}f_i(w;x_{(i)},y_{(i)})$. For an arbitrary distribution $\mathcal{D}$, we have
$$\Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A},\,(x'_{(j)},y'_{(j)})\sim\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n}\big(f^*_j-f_j\big)\Big| = \Big|\mathbb{E}_{S\sim\mathcal{D},\,\mathcal{A}}\big(J(w^n)-\hat J_n(w^n)\big)\Big|,$$
where $f^*_j$ and $f_j$ respectively denote $f_j(w^j_*;x'_{(j)},y'_{(j)})$ and $f_j(w^n;x'_{(j)},y'_{(j)})$.
C PROOFS FOR DEEP LINEAR NEURAL NETWORKS
In this section, we first present the technical lemmas in Sec. C.1 and then we give the proofs of these lemmas in Sec. C.2. Next, we utilize these lemmas to prove the results in Theorems 1∼ 3 and Corollary 1 in Sec. C.3. Finally, we give the proofs of other lemmas in Sec. C.4.
C.1 TECHNICAL LEMMAS
Here we present the technical lemmas used for proving our desired results. For brevity, we define $B_{s:t}$ as follows:
$$B_{s:t}\triangleq W^{(s)}W^{(s-1)}\cdots W^{(t)}\in\mathbb{R}^{d_s\times d_{t-1}},\ (s\ge t);\qquad B_{s:t}\triangleq I,\ (s<t). \tag{4}$$
Lemma 6. Assume that the activation functions in the deep neural network $f(w,x)$ are linear. Then the gradient of $f(w,x)$ with respect to $w^{(j)}$ can be written as
$$\nabla_{w^{(j)}}f(w,x)=\big((B_{j-1:1}x)\otimes B^T_{l:j+1}\big)e,\quad (j=1,\cdots,l),$$
where $\otimes$ denotes the Kronecker product. The Hessian matrix can then be computed as
$$\nabla^2f(w,x)=\begin{pmatrix}\nabla_{w^{(1)}}\big(\nabla_{w^{(1)}}f(w,x)\big)&\cdots&\nabla_{w^{(1)}}\big(\nabla_{w^{(l)}}f(w,x)\big)\\ \vdots&\ddots&\vdots\\ \nabla_{w^{(l)}}\big(\nabla_{w^{(1)}}f(w,x)\big)&\cdots&\nabla_{w^{(l)}}\big(\nabla_{w^{(l)}}f(w,x)\big)\end{pmatrix},$$
where $Q_{st}\triangleq\nabla_{w^{(s)}}\big(\nabla_{w^{(t)}}f(w,x)\big)$ is given by
$$Q_{st}=\begin{cases}\big(B^T_{t-1:s+1}\big)\otimes\big(B_{s-1:1}xe^TB^T_{l:t+1}\big)+\big(B_{s-1:1}xx^TB^T_{t-1:1}\big)\otimes\big(B^T_{l:s+1}B_{l:t+1}\big),& s<t,\\[2pt] \big(B_{s-1:1}xx^TB^T_{s-1:1}\big)\otimes\big(B^T_{l:s+1}B_{l:s+1}\big),& s=t,\\[2pt] \big(B^T_{l:s+1}ex^TB^T_{t-1:1}\big)\otimes B_{s-1:t+1}+\big(B_{s-1:1}xx^TB^T_{t-1:1}\big)\otimes\big(B^T_{l:s+1}B_{l:t+1}\big),& s>t.\end{cases}$$
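The gradient formula is easy to sanity-check numerically. The sketch below is our own script (not from the paper); it assumes column-stacking vec, under which $\mathrm{vec}(ABC)=(C^T\otimes A)\,\mathrm{vec}(B)$, and compares the formula against finite differences for the middle layer of a random 3-layer linear network:

```python
import numpy as np

# Sketch verifying the gradient formula of Lemma 6 by finite differences.
rng = np.random.default_rng(4)
dims = [4, 3, 3, 2]                                  # d0, d1, d2, d3 (l = 3 layers)
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
x = rng.normal(size=dims[0])
y = rng.normal(size=dims[-1])

def loss(Ws):
    v = x
    for W in Ws:
        v = W @ v
    return 0.5 * np.sum((v - y) ** 2)

j = 1                                                # test the middle layer W^(2)
B_below = Ws[0]                                      # B_{j-1:1} = W^(1)
B_above = Ws[2]                                      # B_{l:j+1} = W^(3)
e = Ws[2] @ Ws[1] @ Ws[0] @ x - y                    # output error vector
# ((B_{j-1:1} x) kron B_{l:j+1}^T) e, with B_below@x as a column vector:
g_formula = np.kron((B_below @ x)[:, None], B_above.T) @ e

# Finite-difference gradient w.r.t. vec(W^(2)), column-stacked.
g_fd = np.zeros(Ws[j].size)
for k in range(Ws[j].size):
    Wp = [W.copy() for W in Ws]
    r, c = k % dims[j + 1], k // dims[j + 1]         # column-major index
    Wp[j][r, c] += 1e-6
    g_fd[k] = (loss(Wp) - loss(Ws)) / 1e-6
print(np.max(np.abs(g_formula - g_fd)))              # ~1e-5 or smaller
```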
Lemma 7. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any $t>0$, the objective $f(w,x)$ obeys
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(f(w,x_{(i)})-\mathbb{E}f(w,x_{(i)})\big)>t\Big)\le 2\exp\Big(-c_{f'}n\min\Big(\frac{t^2}{\omega_f^2\max(d_l\omega_f^2\tau^4,\tau^2)},\ \frac{t}{\omega_f^2\tau^2}\Big)\Big),$$
where $c_{f'}$ is a positive constant and $\omega_f=r^l$.
Lemma 8. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any $t>0$ and an arbitrary unit vector $\lambda\in\mathbb{S}^{d-1}$, the gradient $\nabla f(w,x)$ obeys
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big\langle\lambda,\nabla_wf(w,x_{(i)})-\mathbb{E}\nabla_wf(w,x_{(i)})\big\rangle>t\Big)\le 3\exp\Big(-c_{g'}n\min\Big(\frac{t^2}{l\max(\omega_g\tau^2,\omega_g\tau^4,\omega_{g'}\tau^2)},\ \frac{t}{\sqrt{l}\,\omega_g\max(\tau,\tau^2)}\Big)\Big),$$
where $c_{g'}$ is a constant, and $\omega_g=c_qr^{2(2l-1)}$ and $\omega_{g'}=c_qr^{2(l-1)}$ with $c_q=\sqrt{\max_{0\le i\le l}d_i}$.
Lemma 9. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any $t>0$ and an arbitrary unit vector $\lambda\in\mathbb{S}^{d-1}$, the Hessian $\nabla^2f(w,x)$ obeys
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big\langle\lambda,\big(\nabla^2_wf(w,x_{(i)})-\mathbb{E}\nabla^2_wf(w,x_{(i)})\big)\lambda\big\rangle>t\Big)\le 5\exp\Big(-c_{h'}n\min\Big(\frac{t^2}{\tau^2l^2\max(\omega_g,\omega_g\tau^2,\omega_h)},\ \frac{t}{\sqrt{\omega_g}\,l\max(\tau,\tau^2)}\Big)\Big),$$
where $\omega_g=r^{4(l-1)}$ and $\omega_h=r^{2(l-2)}$.
Lemma 10. Suppose the activation functions in the deep neural network are linear. Then for any $w\in B^d(r)$ and $x\in B^{d_0}(r_x)$, we have
$$\|\nabla_wf(w,x)\|_2\le\sqrt{\alpha_g},\quad\text{where }\alpha_g=c_tlr_x^4r^{4l-2}$$
and $c_t$ is a constant. Further, for any $w\in B^d(r)$ and $x\in B^{d_0}(r_x)$, we also have
$$\|\nabla^2f(w,x)\|_{op}\le\|\nabla^2f(w,x)\|_F\le l\sqrt{\alpha_l},\quad\text{where }\alpha_l\triangleq c_{t'}r_x^4r^{4l-2}$$
and $c_{t'}$ is a constant. Under the same conditions, we can also bound the operator norm of $\nabla^3f(w,x)$: there exists a universal constant $\alpha_p$ such that $\|\nabla^3f(w,x)\|_{op}\le\alpha_p$.
Lemma 11. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then the sample Hessian converges uniformly to the population Hessian in operator norm. Specifically, there exist two universal constants $c_{h_1}$ and $c_{h_2}$ such that if $n\ge c_{h_2}\max\big(\alpha_p^2r^2/(\tau^2l^2\omega_h^2\epsilon^2s\log(d/l)),\ s\log(d/l)/(l\tau^2)\big)$, then
$$\sup_{w\in\Omega}\big\|\nabla^2\hat J_n(w)-\nabla^2J(w)\big\|_{op}\le c_{h_1}\tau l\omega_h\sqrt{\frac{d\log(nl)+\log(20/\epsilon)}{n}}$$
holds with probability at least $1-\epsilon$, where $\omega_h=\max\big(\tau r^{2(l-1)},r^{2(l-2)},r^{l-2}\big)$.
C.2 PROOFS OF TECHNICAL LEMMAS
To prove the above lemmas, we first introduce some useful results.
Lemma 12. (Rudelson & Vershynin, 2013) Assume that $x=(x_1;x_2;\cdots;x_k)\in\mathbb{R}^k$ is a random vector with independent, zero-mean, $\tau_i^2$-sub-Gaussian components $x_i$, with $\max_i\tau_i^2\le\tau^2$. Let $A$ be a $k\times k$ matrix. Then we have
$$\mathbb{E}\exp\Big(\lambda\Big(\sum_{i,j:i\ne j}A_{ij}x_ix_j-\mathbb{E}\sum_{i,j:i\ne j}A_{ij}x_ix_j\Big)\Big)\le\exp\big(2\tau^2\lambda^2\|A\|_F^2\big),\quad |\lambda|\le 1/(2\tau\|A\|_2).$$
Lemma 13. Assume that $x=(x_1;x_2;\cdots;x_k)\in\mathbb{R}^k$ is a random vector with independent, zero-mean, $\tau_i^2$-sub-Gaussian components $x_i$, with $\max_i\tau_i^2\le\tau^2$. Let $a$ be a $k$-dimensional vector. Then we have
$$\mathbb{E}\exp\Big(\lambda\Big(\sum_{i=1}^ka_ix_i^2-\mathbb{E}\sum_{i=1}^ka_ix_i^2\Big)\Big)\le\exp\Big(128\lambda^2\tau^4\sum_{i=1}^ka_i^2\Big),\quad |\lambda|\le\frac{1}{\tau^2\max_ia_i}.$$
Lemma 14. For $B_{s:t}$ defined in Eqn. (4), we have the following properties:
$$\|B_{s:t}\|_{op}\le\|B_{s:t}\|_F\le\omega_r\quad\text{and}\quad\|B_{l:1}\|_{op}\le\|B_{l:1}\|_F\le\omega_f,$$
where $\omega_r=r^{s-t+1}\le\max(r,r^l)$ and $\omega_f=r^l$.
Lemma 13 is useful for bounding probabilities. The two inequalities in Lemma 14 can be obtained by using $\|w^{(j)}\|_2\le r$ $(\forall j=1,\cdots,l)$. We defer the proofs of Lemmas 13 and 14 to Sec. C.4.2.
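For completeness, the submultiplicativity argument behind Lemma 14: since $\|AB\|_F\le\|A\|_F\|B\|_F$ and $\|W^{(j)}\|_F=\|w^{(j)}\|_2\le r$,
$$\|B_{s:t}\|_{op}\le\|B_{s:t}\|_F\le\prod_{j=t}^{s}\|W^{(j)}\|_F\le r^{s-t+1}=\omega_r,$$
and taking $s=l$, $t=1$ gives $\|B_{l:1}\|_F\le r^l=\omega_f$.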
C.2.1 PROOF OF LEMMA 6
Proof. When the activation functions are linear, we can easily compute the gradient of $f(w,x)$ with respect to $w^{(j)}$:
$$\nabla_{w^{(j)}}f(w,x)=\big((B_{j-1:1}x)\otimes B^T_{l:j+1}\big)e,\quad (j=1,\cdots,l),$$
where $\otimes$ denotes the Kronecker product. Now we consider the computation of the Hessian matrix. For brevity, let $Q_s=\big((B_{s-1:1}x)\otimes B^T_{l:s+1}\big)$. Then we can compute $\nabla^2_{w^{(s)}}f(w,x)$ as follows:
$$\begin{aligned}\nabla^2_{w^{(s)}}f(w,x)&=\frac{\partial^2f(w,x)}{\partial w_{(s)}^T\partial w_{(s)}}=\frac{\partial(Q_se)}{\partial w_{(s)}^T}=\frac{\partial\,\mathrm{vec}(Q_se)}{\partial w_{(s)}^T}=\frac{\partial\,\mathrm{vec}\big(Q_sB_{l:s+1}W^{(s)}B_{s-1:1}x\big)}{\partial w_{(s)}^T}\\ &=\frac{\partial\big((B_{s-1:1}x)^T\otimes(Q_sB_{l:s+1})\big)\mathrm{vec}\big(W^{(s)}\big)}{\partial w_{(s)}^T}=(B_{s-1:1}x)^T\otimes\Big(\big((B_{s-1:1}x)\otimes B^T_{l:s+1}\big)B_{l:s+1}\Big)\\ &\overset{①}{=}(B_{s-1:1}x)^T\otimes\Big((B_{s-1:1}x)\otimes\big(B^T_{l:s+1}B_{l:s+1}\big)\Big)\overset{②}{=}\big((B_{s-1:1}x)^T\otimes(B_{s-1:1}x)\big)\otimes\big(B^T_{l:s+1}B_{l:s+1}\big)\\ &\overset{③}{=}\big((B_{s-1:1}x)(B_{s-1:1}x)^T\big)\otimes\big(B^T_{l:s+1}B_{l:s+1}\big),\end{aligned}$$
where ① holds since $B_{s-1:1}x$ is a vector and, for any vector $x$, $(x\otimes A)B=x\otimes(AB)$; ② holds because for any matrices $Z_1,Z_2,Z_3$ of proper sizes, $(Z_1\otimes Z_2)\otimes Z_3=Z_1\otimes(Z_2\otimes Z_3)$; and ③ holds because for any two vectors $z_1,z_2$ of proper sizes, $z_1z_2^T=z_1\otimes z_2^T=z_2^T\otimes z_1$. Then we consider the case $s>t$:
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}}f(w,x)\big)=\frac{\partial^2f(w,x)}{\partial w_{(t)}^T\partial w_{(s)}}=\frac{\partial\,\mathrm{vec}(Q_se)}{\partial w_{(t)}^T}=\frac{\partial\,\mathrm{vec}\big(Q_sB_{l:t+1}W^{(t)}B_{t-1:1}x\big)}{\partial w_{(t)}^T}+\frac{\partial\,\mathrm{vec}\big(\big((B_{s-1:1}x)\otimes B^T_{l:s+1}\big)e\big)}{\partial w_{(t)}^T}.$$
Here, in the first term we treat $Q_s$ as a constant matrix not depending on $W^{(t)}$, and similarly, in the second term we treat $e$ as a constant vector. Since
$$\frac{\partial\,\mathrm{vec}\big(Q_sB_{l:t+1}W^{(t)}B_{t-1:1}x\big)}{\partial w_{(t)}^T}=\big(B_{s-1:1}xx^TB^T_{t-1:1}\big)\otimes\big(B^T_{l:s+1}B_{l:t+1}\big),$$
we only need to consider
$$\begin{aligned}\frac{\partial\,\mathrm{vec}\big(\big((B_{s-1:1}x)\otimes B^T_{l:s+1}\big)e\big)}{\partial w_{(t)}^T}&=\frac{\partial\,\mathrm{vec}\big((B_{s-1:1}x)\otimes(B^T_{l:s+1}e)\big)}{\partial w_{(t)}^T}=\frac{\partial\,\mathrm{vec}\big((B_{s-1:1}x)(B^T_{l:s+1}e)^T\big)}{\partial w_{(t)}^T}\\ &=\frac{\partial\,\mathrm{vec}\big(B_{s-1:t+1}W^{(t)}(B_{t-1:1}xe^TB_{l:s+1})\big)}{\partial w_{(t)}^T}=\frac{\partial\big(B_{t-1:1}xe^TB_{l:s+1}\big)^T\otimes B_{s-1:t+1}\,\mathrm{vec}\big(W^{(t)}\big)}{\partial w_{(t)}^T}\\ &=\big(B_{t-1:1}xe^TB_{l:s+1}\big)^T\otimes B_{s-1:t+1}.\end{aligned}$$
Therefore, for $s>t$, combining the above two terms gives
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}}f(w,x)\big)=\big(B^T_{l:s+1}ex^TB^T_{t-1:1}\big)\otimes B_{s-1:t+1}+\big(B_{s-1:1}xx^TB^T_{t-1:1}\big)\otimes\big(B^T_{l:s+1}B_{l:t+1}\big).$$
By a similar method, we can compute the Hessian for the case $s<t$:
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}}f(w,x)\big)=\big(B^T_{t-1:s+1}\big)\otimes\big(B_{s-1:1}xe^TB^T_{l:t+1}\big)+\big(B_{s-1:1}xx^TB^T_{t-1:1}\big)\otimes\big(B^T_{l:s+1}B_{l:t+1}\big).$$
The proof is completed.
C.2.2 PROOF OF LEMMA 7
Proof. We first prove that $v^{(l)}$, defined in Eqn. (5), is sub-Gaussian:
$$v^{(l)}=W^{(l)}\cdots W^{(1)}x=B_{l:1}x. \tag{5}$$
By the convexity of $\exp(\lambda t)$ in $\lambda$ and Lemma 14, we can obtain
$$\mathbb{E}\exp\big(\langle\lambda,v^{(l)}-\mathbb{E}v^{(l)}\rangle\big)=\mathbb{E}\exp\big(\langle\lambda,B_{l:1}x-\mathbb{E}B_{l:1}x\rangle\big)\le\mathbb{E}\exp\big(\langle B^T_{l:1}\lambda,x\rangle\big)\le\exp\Big(\frac{\|B^T_{l:1}\lambda\|_2^2\tau^2}{2}\Big)\overset{①}{\le}\exp\Big(\frac{\omega_f^2\tau^2\|\lambda\|_2^2}{2}\Big), \tag{6}$$
where ① uses the conclusion $\|B_{l:1}\|_{op}\le\|B_{l:1}\|_F\le\omega_f$ from Lemma 14. This means that $v^{(l)}$ is centered and $\omega_f^2\tau^2$-sub-Gaussian. Accordingly, the $k$-th entry of $v^{(l)}$ is $z_k\tau^2$-sub-Gaussian, where $z_k$ is a universal positive constant with $\max_kz_k\le\omega_f^2$. Let $v^{(l)}_i$ denote the output for the $i$-th sample $x_{(i)}$. By Lemma 13, we have that for $s>0$,
$$\begin{aligned}\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(\|v^{(l)}_i\|_2^2-\mathbb{E}\|v^{(l)}_i\|_2^2\big)>\frac{t}{2}\Big)&=\mathbb{P}\Big(s\sum_{i=1}^n\big(\|v^{(l)}_i\|_2^2-\mathbb{E}\|v^{(l)}_i\|_2^2\big)>\frac{nst}{2}\Big)\\ &\overset{①}{\le}\exp\Big(-\frac{snt}{2}\Big)\,\mathbb{E}\exp\Big(s\sum_{i=1}^n\big(\|v^{(l)}_i\|_2^2-\mathbb{E}\|v^{(l)}_i\|_2^2\big)\Big)\\ &\overset{②}{=}\exp\Big(-\frac{snt}{2}\Big)\prod_{i=1}^n\mathbb{E}\exp\Big(s\big(\|v^{(l)}_i\|_2^2-\mathbb{E}\|v^{(l)}_i\|_2^2\big)\Big)\\ &\overset{③}{\le}\exp\Big(-\frac{snt}{2}\Big)\prod_{i=1}^n\exp\big(128d_ls^2\omega_f^4\tau^4\big)\quad\Big(|s|\le\frac{1}{\omega_f^2\tau^2}\Big)\\ &\overset{④}{\le}\exp\Big(-c'n\min\Big(\frac{t^2}{d_l\omega_f^4\tau^4},\ \frac{t}{\omega_f^2\tau^2}\Big)\Big).\end{aligned}$$
Here ① holds by the exponential Markov (Chernoff) bound, ② holds since the $x_{(i)}$ are independent, ③ is established by applying Lemma 13, and ④ follows by optimizing over $s$. Since $v^{(l)}$ is sub-Gaussian, we also have
$$\begin{aligned}\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(y^Tv^{(l)}_i-\mathbb{E}y^Tv^{(l)}_i\big)>\frac{t}{2}\Big)&\le\mathbb{P}\Big(s\sum_{i=1}^n\big(y^Tv^{(l)}_i-\mathbb{E}y^Tv^{(l)}_i\big)>\frac{nst}{2}\Big)\\ &\le\exp\Big(-\frac{nst}{2}\Big)\,\mathbb{E}\exp\Big(s\sum_{i=1}^n\big(y^Tv^{(l)}_i-\mathbb{E}y^Tv^{(l)}_i\big)\Big)\\ &\le\exp\Big(-\frac{nst}{2}\Big)\prod_{i=1}^n\mathbb{E}\exp\Big(s\big(y^Tv^{(l)}_i-\mathbb{E}y^Tv^{(l)}_i\big)\Big)\\ &\overset{①}{\le}\exp\Big(-\frac{nst}{2}\Big)\prod_{i=1}^n\exp\Big(\frac{\omega_f^2\tau^2s^2\|y\|_2^2}{2}\Big)\overset{②}{\le}\exp\Big(-\frac{nt^2}{8\omega_f^2\tau^2\|y\|_2^2}\Big),\end{aligned}$$
where ① holds because of Eqn. (6) and ② follows by optimizing over $s$.
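As an illustration of a step-④-type argument (our own constant bookkeeping, not taken from the paper):
$$\min_{0<s\le 1/(\omega_f^2\tau^2)}\Big\{-\frac{snt}{2}+128\,d_l\,n\,s^2\omega_f^4\tau^4\Big\}\le-n\min\Big(\frac{t^2}{2048\,d_l\omega_f^4\tau^4},\ \frac{t}{4\,\omega_f^2\tau^2}\Big),$$
where the unconstrained minimizer $s^*=t/(512\,d_l\omega_f^4\tau^4)$ gives the first (sub-Gaussian) branch when it is feasible, and otherwise (i.e. when $t>512\,d_l\omega_f^2\tau^2$) taking the boundary value $s=1/(\omega_f^2\tau^2)$ gives the second (sub-exponential) branch; this is exactly the $\min(t^2,t)$ shape appearing in Lemmas 7–9.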
Since the loss function is $f(w,x)=\frac12\|v^{(l)}-y\|_2^2$, and the $\|y\|_2^2$ terms cancel, we have
$$f(w,x)-\mathbb{E}f(w,x)=\frac12\big(\|v^{(l)}\|_2^2-\mathbb{E}\|v^{(l)}\|_2^2\big)-\big(y^Tv^{(l)}-\mathbb{E}y^Tv^{(l)}\big).$$
Therefore,
$$\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(f(w,x_{(i)})-\mathbb{E}f(w,x_{(i)})\big)>t\Big)\le\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(\|v^{(l)}_i\|_2^2-\mathbb{E}\|v^{(l)}_i\|_2^2\big)>t\Big)+\mathbb{P}\Big(\frac{1}{n}\sum_{i=1}^n\big(\mathbb{E}y^Tv^{(l)}_i-y^Tv^{(l)}_i\big)>\frac{t}{2}\Big).$$
By the symmetry of the sub-Gaussian tail, the second probability obeys the same bound as derived above, and combining the two tail bounds (absorbing constants into $c_{f'}$) yields the claimed inequality of Lemma 7.
Review
This paper provides the analysis of empirical risk landscape for GENERAL deep neural networks (DNNs). Assumptions are comparable to existing results for OVERSIMPLIFIED shallow neural networks. The main results analyzed: 1) Correspondence of non-degenerate stationary points between empirical risk and the population counterparts. 2) Uniform convergence of the empirical risk to population risk. 3) Generalization bound based on stability. The theory is first developed for linear DNNs and then generalized to nonlinear DNNs with sigmoid activations.
Here are two detailed comments:
1) For deep linear networks with squared loss, Kawaguchi 2016 has shown that the global optima are the only non-degenerate stationary points. Thus, the obtained non-degenerate stationary deep linear network should be equivalent to the linear regression model Y=XW. Should the risk bound only depend on the dimensions of the matrix W?
2) The comparison with Bartlett & Maass’s (BM) work is a bit unfair, because their result holds for polynomial activations while this paper handles linear activations. Thus, the authors need to refine BM's result for comparison. |
ICLR | Title
Empirical Risk Landscape Analysis for Understanding Deep Neural Networks
Abstract
This work aims to provide comprehensive landscape analysis of empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an l-layer linear neural network consisting of di neurons in the i-th layer, we prove the gradient of its empirical risk uniformly converges to the one of its population risk, at the rate of O(r √ l √ maxi dis log(d/l)/n). Here d is the total weight dimension, s is the number of nonzero entries of all the weights and the magnitude of weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive the stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions. We prove similar results for convergence behavior of their empirical risk gradients, non-degenerate stationary points as well as the empirical risk itself. To our best knowledge, this work is the first one theoretically characterizing the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding on how the neural network depth l, the layer width di, the network size d, the sparsity in weight and the parameter magnitude r determine the neural network landscape.
N/A
√ l √
maxi dis log(d/l)/n). Here d is the total weight dimension, s is the number of nonzero entries of all the weights and the magnitude of weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive the stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions. We prove similar results for convergence behavior of their empirical risk gradients, non-degenerate stationary points as well as the empirical risk itself. To our best knowledge, this work is the first one theoretically characterizing the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding on how the neural network depth l, the layer width di, the network size d, the sparsity in weight and the parameter magnitude r determine the neural network landscape.
1 INTRODUCTION
Deep learning has achieved remarkable success in many fields, such as computer vision (Hinton et al., 2006; Szegedy et al., 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bakshi & Stephanopoulos, 1993), and speech recognition (Hinton et al., 2012; Graves et al., 2013). However, theoretical understanding on the properties of deep learning models still lags behind their practical achievements (Shalev-Shwartz et al., 2017; Kawaguchi, 2016) due to their high non-convexity and internal complexity. In practice, parameters of deep learning models are learned by minimizing the empirical risk via (stochastic-)gradient descent. Therefore, some recent works (Bartlett & Maass, 2003; Neyshabur et al., 2015) analyzed the convergence of the empirical risk to the population risk, which are however still far from fully understanding the landscape of the empirical risk in deep learning models. Beyond the convergence properties of the empirical risk itself, the convergence and distribution properties of its gradient and stationary points are also essential in landscape analysis. A comprehensive landscape analysis can reveal important information on the optimization behavior and practical performance of deep neural networks, and will be helpful to designing better network architectures. Thus, in this work we aim to provide comprehensive landscape analysis by looking into the gradients and stationary points of the empirical risk.
Formally, we consider a DNN model f(w;x,y) : Rd0 × Rdl → R parameterized by w ∈ Rd consisting of l layers (l ≥ 2) that is trained by minimizing the commonly used squared loss function
over sample pairs {(x,y)} ⊂ Rd0 × Rdl from an unknown distribution D, where y is the target output for the sample x. Ideally, the model can find its optimal parameter w∗ by minimizing the population risk through (stochastic-)gradient descent by backpropagation:
min w J(w) , E(x,y)∼D f(w;x,y),
where f(w;x,y) = 12‖v (l) − y‖22 is the squared loss associated to the sample (x,y) ∼ D in which v(l) is the output of the l-th layer. In practice, as the sample distribution D is usually unknown and only finite training samples { (x(i),y(i)) }n i=1
i.i.d. drawn from D are provided, the network model is usually trained by minimizing the empirical risk:
min w Ĵn(w) ,
1
n n∑ i=1 f(w;x(i),y(i)). (1)
Understanding the convergence behavior of Ĵn(w) to J(w) is critical to statistical machine learning algorithms. In this work, we aim to go further and characterize the landscape of the empirical risk Ĵn(w) of deep learning models by analyzing the convergence behavior of its gradient and stationary points to their corresponding population counterparts. We provide analysis for both multi-layer linear and nonlinear neural networks. In particular, we obtain following new results.
• We establish the uniform convergence of empirical gradient ∇wĴn(w) to its population counterpart ∇wJ(w). Specifically, when the sample size n is not less than O ( max(l3r2/(ε2s log(d/l)), s log(d/l)/l) ) , with probability at least 1 − ε the conver-
gence rate is O(r2l √ l √
maxi dis log(d/l)/n), where there are s nonzero entries in the parameterw, the output dimension of the i-th layer is di and the magnitude of the weight parameter of each layer is upper bounded by r. This result implies that as long as the training sample size n is sufficiently large, any stationary point of Ĵn(w) is also a stationary point of J(w) and vise versa, although both Ĵn(w) and J(w) are very complex.
• We then prove the exact correspondence of non-degenerate stationary points between Ĵn(w) and J(w). Indeed, the corresponding non-degenerate stationary points also uniformly converge to each other at the same convergence rate as the one revealed above with an extra factor 2/ζ. Here ζ > 0 accounts for the geometric topology of non-degenerate stationary points (see Definition 1).
Based on the above two new results, we also derive the uniform convergence of the empirical risk Ĵn(w) to its population risk J(w), which helps understand the generalization error of deep learning models and stability of their empirical risk. These analyses reveal the role of the depth l of a neural network model in determining its convergence behavior and performance. Also, the results tell that the width factor √ maxi di, the nonzero entry number s of weights, and the total network size d are also critical to the convergence and performance. In addition, controlling magnitudes of the parameters (weights) in DNNs are demonstrated to be important for performance. To our best knowledge, this work is the first one theoretically characterizing the uniform convergence of empirical gradient and stationary points in both deep linear and nonlinear neural networks.
2 RELATED WORK
To date, only a few theories have been developed for understanding DNNs which can be roughly divided into following three categories. The first category aims to analyze training error of DNNs. Baum (1988) pointed out that zero training error can be obtained when the last layer of a neural network has more units than training samples. Later, Soudry & Carmon (2016) proved that for DNNs with leaky rectified linear units (ReLU) and a single output, the training error achieves zero at any of their local minima as long as the product of the number of units in the last two layers is larger than the training sample size.
The second category of analysis works (Dauphin et al., 2014; Choromanska et al., 2015a; Kawaguchi, 2016; Tian, 2017) focus on analyzing loss surfaces of DNNs, e.g., how the stationary points are distributed. Those results are helpful to understanding performance difference of large- and small-size
networks (Choromanska et al., 2015b). Among them, Dauphin et al. (2014) experimentally verified that a large number of saddle points indeed exist for DNNs. With strong assumptions, Choromanska et al. (2015a) connected the loss function of a deep ReLU network with the spherical spin-class model and described locations of the local minima. Later, Kawaguchi (2016) proved the existence of degenerate saddle points for deep linear neural networks with squared loss function. They also showed that any local minimum is also a global minimum. By utilizing techniques from dynamical system analysis, Tian (2017) gave guarantees that for two-layer bias-free networks with ReLUs, the gradient descent algorithm with certain symmetric weight initialization can converge to the ground-truth weights globally, if the inputs follow Gaussian distribution. Recently, Nguyen & Hein (2017) proved that for a fully connected network with squared loss and analytic activation functions, almost all the local minima are globally optimal if one hidden layer has more units than training samples and the network structure after this layer is pyramidal. Besides, some recent works, e.g., (Zhang et al., 2016; 2017), tried to alleviate analysis difficulties by relaxing the involved highly nonconvex functions into ones easier.
In addition, some existing works (Bartlett & Maass, 2003; Neyshabur et al., 2015) analyze the generalization performance of a DNN model. Based on the Vapnik–Chervonenkis (VC) theory, Bartlett & Maass (2003) proved that for a feedforward neural network with one-dimensional output, the best convergence rate of the empirical risk to its population risk on the sample distribution can be bounded by its fat-shattering dimension. Recently, Neyshabur et al. (2015) adopted Rademacher complexity to analyze learning capacity of a fully-connected neural network model with ReLU activation functions and bounded inputs.
However, although gradient descent with backpropagation is the most common optimization technique for DNNs, none of existing works analyzes convergence properties of gradient and stationary points of the DNN empirical risk. For single-layer optimization problems, some previous works analyze their empirical risk but essentially differ from our analysis method. For example, Negahban et al. (2009) proved that for a regularized convex program, the minimum of the empirical risk uniformly converges to the true minimum of the population risk under certain conditions. Gonen & ShalevShwartz (2017) proved that for nonconvex problems without degenerated saddle points, the difference between empirical risk and population risk can be bounded. Unfortunately, the loss of DNNs is highly nonconvex and has degenerated saddle points (Fyodorov & Williams, 2007; Dauphin et al., 2014; Kawaguchi, 2016), thus their analysis results are not applicable. Mei et al. (2017) analyzed the convergence behavior of the empirical risk for nonconvex problems, but they only considered the single-layer nonconvex problems and their analysis demands strong sub-Gaussian and subexponential assumptions on the gradient and Hessian of the empirical risk respectively. Their analysis also assumes a linearity property on gradient which is difficult to hold or verify. In contrast, our analysis requires much milder assumptions. Besides, we prove that for deep networks which are highly nonconvex, the non-degenerate stationary points of empirical risk can uniformly converge to their corresponding stationary points of population risk at the rate of O( √ s/n) which is faster
than the rate O( √ d/n) for single-layer optimization problems in (Mei et al., 2017). Also, Mei et al. (2017) did not analyze the convergence rate of the empirical risk, stability or generalization error of DNNs as this work.
3 PRELIMINARIES
Throughout the paper, we denote matrices by boldface capital letters, e.g. A. Vectors are denoted by boldface lowercase letters, e.g. a, and scalars are denoted by lowercase letters, e.g. a. We define the r-radius ball as Bd(r) , {z ∈ Rd | ‖z‖2 ≤ r}. To explain the results, we also need the vectorization operation vec(·). It is defined as vec(A) = (A(:, 1); · · · ;A(:, t)) ∈ Rst that vectorizesA ∈ Rs×t along its columns. We use d= ∑l j=1djdj−1 to denote the total dimension of weight parameters, where dj denotes the output dimension of the j-th layer.
In this work, we consider both linear and nonlinear DNNs. Suppose both networks consist of l layers. We use u(j) and v(j) to respectively denote the input and output of the j-th layer, ∀j = 1, . . . , l. Deep linear neural networks: The function of the j-th layer is formulated as
u(j) ,W (j)v(j−1) ∈ Rdj , v(j) , u(j) ∈ Rdj , ∀j = 1, · · · , l,
where v(0) = x is the input andW (j) ∈ Rdj×dj−1 is the weight matrix of the j-th layer. Deep nonlinear neural networks: We adopt the sigmoid function as the non-linear activation function. The function within the j-th layer can be written as
u(j) ,W (j)v(j−1) ∈ Rdj , v(j) , hj(u(j)) = (σ(u(j)1 ); · · · ;σ(u (j) dj )) ∈ Rdj , ∀j = 1, · · · , l,
where u(j)i denotes the i-th entry of u (j) and σ(·) is the sigmoid function, i.e., σ(a) = 1/(1 + e−a).
Following the common practice, both DNN models adopt the squared loss function defined as f(w;x,y) = 12‖v
(l) − y‖22, where w = (w(1); · · · ;w(l)) ∈ Rd contains all the weight parameters and w(j) = vec ( W (j) ) ∈ Rdjdj−1 . Then the empirical risk Ĵn(w) is Ĵn(w) = 1 n ∑n i=1 f(w;x(i),y(i)) = 1 2n ∑n i=1 ‖v (l) (i) − y(i)‖ 2 2, where v (l) (i) is the network’s output of x(i).
4 RESULTS FOR DEEP LINEAR NEURAL NETWORKS
We first analyze linear neural network models and present following new results: (1) the uniform convergence of the empirical risk gradient to its population counterpart and (2) the convergence properties of non-degenerate stationary points of the empirical risk. As a corollary, we also derive the uniform convergence of the empirical risk to the population one, which further gives stability and generalization bounds. In the next section, we extend the analysis to non-linear neural network models.
We assume the input datum x is τ2-sub-Gaussian and has bounded magnitude, as formally stated in Assumption 1. Assumption 1. The input datum x ∈ Rd0 has zero mean and is τ2-sub-Gaussian, i.e.,
E[exp (〈λ,x〉)] ≤ exp ( 1
2 τ2‖λ‖22
) , ∀λ ∈ Rd0 .
Besides, the magnitude x is bounded as ‖x‖2 ≤ rx, where rx is a positive universal constant.
Note that any random vector z consisting of independent entries with bounded magnitude is subGaussian and satisfies Assumption 1 (Vershynin, 2012). Moreover, for such a random z, we have τ = ‖z‖∞ ≤ ‖z‖2 ≤ rx. Such an assumption on bounded magnitude generally holds for natural data, e.g., images and speech signals. Besides, we assume the weight parameters w(j) of each layer are bounded as w ∈ Ω = {w |w(j) ∈ Bdjdj−1(rj), ∀j = 1, · · · , l} where rj is a constant. For notational simplicity, we let r = maxj rj . Such an assumption is common (Xu & Mannor, 2012). Here we assume the entry value of y falls in [0, 1]. For any bounded target output y, we can always scale it to satisfy such a requirement.
The results presented for linear neural networks here can be generalized to deep ReLU neural networks by applying the results from Choromanska et al. (2015a) and Kawaguchi (2016), which transform deep ReLU neural networks into deep linear neural networks under proper assumptions.
4.1 UNIFORM CONVERGENCE OF EMPIRICAL RISK GRADIENT
We first analyze the convergence of gradients for the DNN empirical and population risks. To our best knowledge, these results are the first ones giving guarantees on gradient convergence, which help better understand the landscape of DNNs and their optimization behavior. The results are stated blow. Theorem 1. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there exist two universal constants cg′ and cg such that if n ≥ cg′ max(l3r2r4x/(cqs log(d/l)ε2τ4 log(1/ε)), s log(d/l)/(lτ2)) where cq = √ max0≤i≤l di, then
sup w∈Ω ∥∥∥∇Ĵn(w)−∇J(w)∥∥∥ 2 ≤ g , cgτωg √ lcq √ s log(dn/l) + log(12/ε) n
holds with probability at least 1 − ε, where s denotes the number of nonzero entries of all weight parameters and ωg = max ( τr2l−1, r2l−1, rl−1 ) .
From Theorem 1, one can observe that with an increasingly larger sample size n, the difference between empirical risk and population risk gradients decreases monotonically at the rate of O(1/ √ n) (up to a log factor). Theorem 1 also characterizes how the depth l contributes to obtaining small difference between the empirical and population risk gradients. Specifically, a deeper neural network needs more training samples to mitigate the difference. Also, due to the factor d, training a network of larger size using gradient descent also requires more training samples. We observe a factor of√
maxi di (i.e. cq), which prefers a DNN architecture of balanced layer sizes (without extremely wide layers). This result also matches the trend and empirical performance in deep learning applications advocating deep but thin networks (He et al., 2016; Szegedy et al., 2015).
By observing Theorem 1, imposing certain regularizations on the weight parameters is useful. For example, reducing the number of nonzero entries s encourages sparsity regularization like ‖w‖1. The results also suggest not choosing large-magnitude weights w in order for a smaller factor r by adopting regularization like ‖w‖22. Theorem 1 also reveals the point derived from optimizing that the empirical and population risks have similar properties when the sample size n is sufficiently large. For example, an /2-stationary point w̃ of Ĵn(w) is also an -stationary point of J(w) with probability 1−ε if n ≥ c (τωg/ )2lcqs log(d/l) with c being a constant. Here -stationary point for a function F means the point w satisfying ‖∇wF ‖2 ≤ . Understanding such properties is useful, since in practice one usually computes an -stationary point of Ĵn(w). These results guarantee the computed point is at most a 2 -stationary point of J(w) and is thus close to the optimum.
4.2 UNIFORM CONVERGENCE OF STATIONARY POINTS
We then proceed to analyze the distribution and convergence properties of stationary points of the DNN empirical risk. Here we consider non-degenerate stationary points which are geometrically isolated and thus unique in local regions. Since degenerate stationary points are not unique in a local region, we cannot expect to establish one-to-one corresponding relationship (see below) between them in empirical risk and population risk. Definition 1. (Non-degenerate stationary points) (Gromoll & Meyer, 1969) If a stationary point w is said to be a non-degenerate stationary point of J(w), then it satisfies
inf i ∣∣λi (∇2J(w))∣∣ ≥ ζ, where λi ( ∇2J(w) ) denotes the i-th eigenvalue of the Hessian∇2J(w) and ζ is a positive constant.
Non-degenerate stationary points include local minima/maxima and non-degenerate saddle points, while degenerate stationary points refer to degenerate saddle points. Then we introduce the index of non-degenerate stationary points which can characterize their geometric properties. Definition 2. (Index of non-degenerate stationary points) (Dubrovin et al., 2012) The index of a symmetric non-degenerate matrix is the number of its negative eigenvalues, and the index of a non-degenerate stationary pointw of a smooth function F is simply the index of its Hessian∇2F (w).
Suppose that J(w) has m non-degenerate stationary points that are denoted as {w(1), w(2), · · · ,w(m)}. We prove following convergence behavior of these stationary points. Theorem 2. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then if n ≥ ch max(l3r2r4x/(cqs log(d/l)ε2τ4 log(1/ε)), s log(d/l)/ζ2) where ch is a constant, for k ∈ {1, · · · ,m}, there exists a non-degenerate stationary point w(k)n of Ĵn(w) which corresponds to the non-degenerate stationary point w(k) of J(w) with probability at least 1− ε. In addition, w(k)n and w(k) have the same non-degenerate index and they satisfy
‖w(k)n −w(k)‖2 ≤ 2cgτωg ζ
√ lcq
√ s log(dn/l) + log(12/ε)
n , (k = 1, · · · ,m)
with probability at least 1− ε, where the parameters cq , ωg , and cg are given in Theorem 1.
Theorem 2 guarantees the one-to-one correspondence between the non-degenerate stationary points of the empirical risk Ĵn(w) and the popular risk J(w). The distances of the corresponding pairs
become smaller as n increases. In addition, the corresponding pairs have the same non-degenerate index. This implies that the corresponding stationary points have the same geometric properties, such as whether they are saddle points. Accordingly, we can develop more efficient algorithms, e.g. escaping saddle points (Ge et al., 2015), since Dauphin et al. (2014) empirically proved that saddle points are usually surrounded by high error plateaus. Also when n is sufficiently large, the properties of stationary points of Ĵn(w) are similar to the points of the population risk J(w) in the sense that they have exactly matching local minima/maxima and non-degenerate saddle points. By comparing Theorems 1 and 2, we find that the requirement for sample number in Theorem 2 is more restrict, since establishing exact one-to-one correspondence between the non-degenerate stationary points of Ĵn(w) and J(w) and bounding their uniform convergence rate to each other are more challenging. From Theorems 1 and 2, we also notice that the uniform convergence rate of non-degenerate stationary points has an extra factor 1/ζ. This is because bounding stationary points needs to access not only the gradient itself but also the Hessian matrix. See more details in proof.
Kawaguchi (2016) pointed out that degenerate stationary points do exist for DNNs. However, since degenerate stationary points are not isolated (they may, for example, form flat regions), it is hard to establish the same unique correspondence for them as for non-degenerate ones. Fortunately, by Theorem 1, the gradients of $\hat J_n(w)$ and J(w) at these points are close. This implies that a degenerate stationary point of J(w) also gives a near-zero gradient for $\hat J_n(w)$, i.e., it is approximately a stationary point of $\hat J_n(w)$ as well.
In the proof, we work directly with the essential multi-layer architecture of the deep linear network, rather than transforming it into a linear regression model and directly applying existing results (see Loh & Wainwright (2015) and Negahban et al. (2011)). This is because we care more about deep ReLU networks, which cannot be reduced in this way. Our proof technique is better suited for analyzing multi-layer neural networks and paves the way for analyzing deep ReLU networks. Such an analysis can also reveal the role of the parameters (dimension, norm, etc.) of each weight matrix in the results, which may benefit the design of networks. Besides, the obtained results are more consistent with those for deep nonlinear networks (see Sec. 5).
4.3 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK
Based on the above results, we can easily derive the uniform convergence of the empirical risk to the population risk. In this subsection, we first give the uniform convergence rate of the empirical risk for deep linear neural networks in Theorem 3, and then use this result to derive the stability and generalization bounds for DNNs in Corollary 1. Theorem 3. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then there exist two universal constants $c_{f'}$ and $c_f$ such that if $n \ge c_{f'} \max\big(l^3 r_x^4/(d_l s \log(d/l)\varepsilon^2\tau^4\log(1/\varepsilon)),\ s\log(d/l)/(\tau^2 d_l)\big)$, then
$$\sup_{w\in\Omega}\big|\hat J_n(w) - J(w)\big| \le \epsilon_f \triangleq c_f\tau\max\big(\sqrt{d_l}\,\tau r^{2l},\ r^l\big)\sqrt{\frac{s\log(dn/l)+\log(8/\varepsilon)}{n}} \tag{2}$$
holds with probability at least $1-\varepsilon$. Here l is the number of layers in the neural network, n is the sample size, and $d_l$ is the dimension of the final layer.
From Theorem 3, when $n\to+\infty$, we have $|\hat J_n(w) - J(w)|\to 0$. According to the definition of uniform convergence (Vapnik & Vapnik, 1998; Shalev-Shwartz et al., 2010), under the distribution $\mathcal D$, the empirical risk of a deep linear neural network converges to its population risk uniformly at the rate of $O(1/\sqrt n)$. Theorem 3 also explains the roles of the depth l, the network size d, and the number s of nonzero weight parameters in a DNN model.
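As a rough illustration of this $O(1/\sqrt n)$ behavior, the following sketch estimates the empirical-vs-population risk gap for a tiny two-layer linear network. It assumes a realizable model $y = B^*x$ with Gaussian inputs (a simplification beyond Assumption 1), so that the population risk has the closed form $J(w)=\frac12\tau^2\|B-B^*\|_F^2$; sampling random weights only lower-bounds the supremum over Ω.

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2, tau, n_w = 4, 5, 3, 1.0, 200          # tiny 2-layer linear net

# hypothetical ground truth: y = B* x, so J(w) = 0.5 * tau^2 * ||B - B*||_F^2
B_star = rng.normal(size=(d2, d1)) @ rng.normal(size=(d1, d0))

def gap(W1, W2, X):
    B = W2 @ W1
    emp = 0.5 * np.mean(np.sum(((B - B_star) @ X) ** 2, axis=0))
    pop = 0.5 * tau ** 2 * np.linalg.norm(B - B_star, 'fro') ** 2
    return abs(emp - pop)

for n in [100, 1000, 10000]:
    X = rng.normal(scale=tau, size=(d0, n))
    # random weights stand in for the sup over w; this only lower-bounds it
    g = max(gap(rng.normal(size=(d1, d0)), rng.normal(size=(d2, d1)), X)
            for _ in range(n_w))
    print(n, g)          # the max gap shrinks roughly like 1/sqrt(n)
```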
Based on VC-dimension techniques, Bartlett & Maass (2003) proved that for a feedforward neural network with polynomial activation functions and one-dimensional output, with probability at least $1-\varepsilon$ the convergence bound satisfies $|\hat J_n(w) - \inf_f J(w)| \le O\big(\sqrt{(\gamma\log^2(n)+\log(1/\varepsilon))/n}\big)$. Here γ is the shattering parameter and can be as large as the VC-dimension of the network model, i.e., of the order $O(ld\log(d)+l^2 d)$ (Bartlett & Maass, 2003). Note that Bartlett & Maass (2003) did not reveal the role of the magnitude of the weights in their results. In contrast, our uniform convergence bound is $\sup_{w\in\Omega}|\hat J_n(w)-J(w)| \le O\big(\sqrt{(s\log(dn/l)+\log(1/\varepsilon))/n}\big)$. So our convergence rate is tighter.
Neyshabur et al. (2015) proved that the Rademacher complexity of a fully-connected neural network model with ReLU activation functions and one-dimensional output is $O(r^l/\sqrt n)$ (see Corollary 2 in (Neyshabur et al., 2015)). Then, by applying a Rademacher-complexity-based argument (Shalev-Shwartz & Ben-David, 2014a), we have $|\sup_f(\hat J_n(w)-J(w))| \le O\big((r^l+\sqrt{\log(1/\varepsilon)})/\sqrt n\big)$ with probability at least $1-\varepsilon$, where the loss function is the training error $g = \mathbf 1_{(v^{(l)}\ne y)}$ in which $v^{(l)}$ is the output of the l-th layer of the network model $f(w;x,y)$. The convergence rate in our theorem is $O\big(r^{2l}\sqrt{(s\log(d/l)+\log(1/\varepsilon))/n}\big)$ and has the same convergence speed $O(1/\sqrt n)$ w.r.t. the sample number n. Note that our convergence rate involves $r^{2l}$ since we use the squared loss instead of the training error in (Neyshabur et al., 2015). The extra parameters s and d appear because we consider the parameter space rather than the function hypothesis f in (Neyshabur et al., 2015), which helps to understand the roles of the network parameters more transparently. Besides, Rademacher complexity cannot be applied to analyzing convergence properties of the empirical risk gradient and stationary points, as our technique can.
Based on Theorem 3, we proceed to analyze the stability of the empirical risk and the convergence rate of the generalization error in expectation. Let $S=\{(x_{(1)},y_{(1)}),\cdots,(x_{(n)},y_{(n)})\}$ denote the sample set in which the samples are i.i.d. drawn from $\mathcal D$. When the optimal solution $w^n$ to problem (1) is computed by a deterministic algorithm, the generalization error is defined as $\epsilon_g = \hat J_n(w^n) - J(w^n)$. But one usually employs randomized algorithms, e.g., stochastic gradient descent (SGD), for computing $w^n$. In this case, the stability and generalization error in expectation defined in Definition 3 are more applicable.
Definition 3. (Stability and generalization in expectation) (Vapnik & Vapnik, 1998; Shalev-Shwartz et al., 2010; Gonen & Shalev-Shwartz, 2017) Assume a randomized algorithm A is employed, $((x'_{(1)},y'_{(1)}),\cdots,(x'_{(n)},y'_{(n)}))\sim\mathcal D$, and $w^n=\arg\min_w\hat J_n(w)$ is the empirical risk minimizer (ERM). For every $j\in[n]$, suppose $w_*^j=\arg\min_w\frac1{n-1}\sum_{i\ne j}f_i(w;x_{(i)},y_{(i)})$. We say that the ERM is on average stable with stability rate $\epsilon_k$ under distribution $\mathcal D$ if
$$\Big|\mathbb E_{S\sim\mathcal D,\,A,\,(x'_{(j)},y'_{(j)})\sim\mathcal D}\,\frac1n\sum_{j=1}^n\big[f_j(w_*^j;x'_{(j)},y'_{(j)})-f_j(w^n;x'_{(j)},y'_{(j)})\big]\Big|\le\epsilon_k.$$
The ERM is said to have generalization error with convergence rate $\epsilon_{k'}$ under distribution $\mathcal D$ if we have $\big|\mathbb E_{S\sim\mathcal D,\,A}\big(J(w^n)-\hat J_n(w^n)\big)\big|\le\epsilon_{k'}$.
Stability measures the sensitivity of the empirical risk to the input, and the generalization error measures the effectiveness of the ERM on new data. Generalization error in expectation is especially important for DNNs, considering their internal randomness, e.g., from SGD optimization. Now we present the results on the stability and generalization performance of deep linear neural networks.
Corollary 1. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then with probability at least $1-\varepsilon$, both the stability rate and the generalization error rate of the ERM of a deep linear neural network are at most $\epsilon_f$:
$$\Big|\mathbb E_{S\sim\mathcal D,\,A,\,(x'_{(j)},y'_{(j)})\sim\mathcal D}\,\frac1n\sum_{j=1}^n\big(f_j^*-f_j\big)\Big|\le\epsilon_f \quad\text{and}\quad \Big|\mathbb E_{S\sim\mathcal D,\,A}\big(J(w^n)-\hat J_n(w^n)\big)\Big|\le\epsilon_f,$$
where $f_j^*$ and $f_j$ respectively denote $f_j(w_*^j;x'_{(j)},y'_{(j)})$ and $f_j(w^n;x'_{(j)},y'_{(j)})$, and $\epsilon_f$ is defined in Eqn. (2).
According to Corollary 1, both the stability rate and the convergence rate of the generalization error are $O(\epsilon_f)$. This result indicates that the deep learning empirical risk is stable and its output is robust to small perturbations of the training data. When n is sufficiently large, a small generalization error of DNNs is guaranteed.
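For intuition about Definition 3 and Corollary 1, here is a hedged sketch that estimates the on-average stability of an ERM by leave-one-out refitting. It uses plain least squares as a stand-in for the deep network and, for brevity, reuses a single fresh sample $(x', y')$ across all j rather than drawing one per j.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
x_new = rng.normal(size=d)
y_new = x_new @ w_true + 0.1 * rng.normal()        # one fresh sample (x', y')

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]    # the ERM
loss = lambda w, x, t: 0.5 * (x @ w - t) ** 2

w_n = fit(X, y)
stab = np.mean([loss(fit(np.delete(X, j, 0), np.delete(y, j)), x_new, y_new)
                - loss(w_n, x_new, y_new) for j in range(n)])
print(abs(stab))     # small for large n: the ERM is on-average stable
```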
5 RESULTS FOR DEEP NONLINEAR NEURAL NETWORKS
In the above section, we analyzed the empirical risk optimization landscape for deep linear neural network models. In this section, we extend our analysis to deep nonlinear neural networks that adopt the sigmoid activation function. Our analysis techniques are also applicable to other third-order differentiable activation functions, e.g., the tanh function, with correspondingly different convergence rates. Here we assume the input data are i.i.d. Gaussian variables.
Assumption 2. The input datum x is a vector of i.i.d. Gaussian variables drawn from $\mathcal N(0,\tau^2)$.
Since the sigmoid function maps any input into the range [0, 1], we do not require the input x to have bounded magnitude. Such an assumption is common. For instance, Tian (2017) and Soudry & Hoffer (2017) both assumed that the entries of the input vector follow a Gaussian distribution. We also assume $w\in\Omega$ as in (Xu & Mannor, 2012), and that each entry of the target output y falls in [0, 1]. Similarly to the analysis of deep linear neural networks, we aim to characterize the empirical risk gradient, the stationary points, and the empirical risk itself for deep nonlinear neural networks.
5.1 UNIFORM CONVERGENCE OF GRADIENT AND STATIONARY POINTS
Here we analyze convergence properties of gradients of the empirical risk for deep nonlinear neural networks.
Theorem 4. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there are two universal constants $c_y$ and $c_{y'}$ such that if $n \ge c_{y'} c_d l^3 r^2/(s\log(d)\tau^2\varepsilon^2\log(1/\varepsilon))$ where $c_d = \max_{0\le i\le l} d_i$, then with probability at least $1-\varepsilon$,
$$\sup_{w\in\Omega}\big\|\nabla\hat J_n(w) - \nabla J(w)\big\|_2 \le \epsilon_l \triangleq \tau\sqrt{\frac{512}{729}\, c_y\, l(l+2)(lc_r+1)\, c_d\, c_r}\,\sqrt{\frac{s\log(dn/l)+\log(4/\varepsilon)}{n}},$$
where $c_r = \max\big(r^2/16,\ (r^2/16)^{l-1}\big)$, and s denotes the number of nonzero entries among all the weights.
Similarly to deep linear neural networks, the layer number l, the widths $d_i$, the number s of nonzero parameter entries, the network size d, and the magnitude of the weights are all critical to the convergence rate. Also, since the convergence rate contains a factor $\max_i d_i$, it is better to avoid choosing an extremely wide layer. Interestingly, when analyzing the representation ability of deep learning, Eldan & Shamir (2016) also suggested avoiding extremely wide layers, though their conclusion was derived from a different perspective. By comparing Theorems 1 and 4, one can observe that there is a factor $(1/16)^{l-1}$ in the convergence rate in Theorem 4. This is because the convergence rate depends on the Lipschitz constant, and when bounding it, the sigmoid activation function contributes a factor of 1/16 for each layer.
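A quick numeric check consistent with the per-layer factor of 1/16 mentioned above: the derivative of the sigmoid attains its maximum 1/4 at the origin, so squared-derivative terms contribute at most $(1/4)^2 = 1/16$ per layer (this is our reading of where the constant comes from, offered as an illustration).

```python
import numpy as np

x = np.linspace(-10, 10, 100001)
sig = 1.0 / (1.0 + np.exp(-x))
dsig = sig * (1.0 - sig)          # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
print(dsig.max())                 # ~0.25: sup of sigmoid'(x), attained at x = 0
print(dsig.max() ** 2)            # ~0.0625 = 1/16
```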
Now we analyze the non-degenerate stationary points of the empirical risk for deep nonlinear neural networks. Here we also assume that the population risk has m non-degenerate stationary points, denoted $\{w^{(1)},w^{(2)},\cdots,w^{(m)}\}$. Theorem 5. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. Then if $n \ge c_s\max\big(c_d l^3 r^2/(s\log(d)\tau^2\varepsilon^2\log(1/\varepsilon)),\ s\log(d/l)/\zeta^2\big)$ where $c_s$ is a constant, for $k\in\{1,\cdots,m\}$ there exists a non-degenerate stationary point $w_n^{(k)}$ of $\hat J_n(w)$ which corresponds to the non-degenerate stationary point $w^{(k)}$ of $J(w)$ with probability at least $1-\varepsilon$. Moreover, $w_n^{(k)}$ and $w^{(k)}$ have the same non-degenerate index and they obey
$$\big\|w_n^{(k)} - w^{(k)}\big\|_2 \le \frac{2\tau}{\zeta}\sqrt{\frac{512}{729}\, c_y\, l(l+2)(lc_r+1)\, c_d\, c_r}\,\sqrt{\frac{s\log(dn/l)+\log(4/\varepsilon)}{n}},\quad (k=1,\cdots,m)$$
with probability at least $1-\varepsilon$, where $c_y$, $c_d$ and $c_r$ are the same parameters as in Theorem 4.
According to Theorem 5, there is a one-to-one correspondence between the non-degenerate stationary points of $\hat J_n(w)$ and J(w). Also, each corresponding pair has the same non-degenerate index, implying exactly matching local minima/maxima and non-degenerate saddle points. When n is sufficiently large, the non-degenerate stationary point $w_n^{(k)}$ of $\hat J_n(w)$ is very close to its corresponding non-degenerate stationary point $w^{(k)}$ of J(w). As for the degenerate stationary points, Theorem 4 guarantees that the gradients of J(w) and $\hat J_n(w)$ at these points are very close to each other.
5.2 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK
Here we first give the uniform convergence analysis of the empirical risk and then analyze its stability and generalization.
Theorem 6. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. If $n \ge 18 l^2 r^2/(s\log(d)\tau^2\varepsilon^2\log(1/\varepsilon))$, then
$$\sup_{w\in\Omega}\big|\hat J_n(w)-J(w)\big| \le \epsilon_n \triangleq \tau\sqrt{\frac98\, c_y c_d\,\big(1+c_r(l-1)\big)}\,\sqrt{\frac{s\log(nd/l)+\log(4/\varepsilon)}{n}} \tag{3}$$
holds with probability at least $1-\varepsilon$, where $c_y$, $c_d$ and $c_r$ are given in Theorem 4.
From Theorem 6, we obtain that under the distribution $\mathcal D$, the empirical risk of a deep nonlinear neural network converges at the rate of $O(1/\sqrt n)$ (up to a log factor). Theorem 6 also gives conclusions similar to Theorem 3, including the inclination toward a regularization penalty on the weights and the suggestion of non-extremely-wide layers. Similarly to linear networks, our risk convergence rate is also tighter than the convergence rate for networks with polynomial activation functions and one-dimensional output in (Bartlett & Maass, 2003), since ours is of the order $O\big(\sqrt{(l-1)(s\log(dn/l)+\log(1/\varepsilon))/n}\big)$, while the latter is $O\big(\sqrt{(\gamma\log^2(n)+\log(1/\varepsilon))/n}\big)$ where γ is of the order $O(ld\log(d)+l^2 d)$ (Bartlett & Maass, 2003).
We then establish the stability property and the generalization error of the empirical risk for nonlinear neural networks. By Theorem 6, we can obtain the following results.
Corollary 2. Assume the input sample x obeys Assumption 2 and the activation functions in a deep neural network are sigmoid functions. Then with probability at least $1-\varepsilon$, we have
$$\Big|\mathbb E_{S\sim\mathcal D,\,A,\,(x'_{(j)},y'_{(j)})\sim\mathcal D}\,\frac1n\sum_{j=1}^n\big(f_j^*-f_j\big)\Big|\le\epsilon_n \quad\text{and}\quad \Big|\mathbb E_{S\sim\mathcal D,\,A}\big(J(w^n)-\hat J_n(w^n)\big)\Big|\le\epsilon_n,$$
where $\epsilon_n$ is defined in Eqn. (3). The notations $f_j^*$ and $f_j$ are the same as in Corollary 1.
By Corollary 2, both the stability convergence rate and the convergence rate of the generalization error are $O(1/\sqrt n)$. This result accords with Theorems 8 and 9 in (Shalev-Shwartz et al., 2010), which imply that $O(1/\sqrt n)$ is the bottleneck of the stability and generalization convergence rates for generic learning algorithms. From this result, if n is sufficiently large, the empirical risk can be expected to be very stable. This also dispels misgivings about the random selection of training samples in practice. Such a result indicates that a deep nonlinear neural network can offer good performance on testing data if it achieves a small training error.
6 PROOF ROADMAP
Here we briefly introduce our proof roadmap. Due to space limitations, all the proofs of Theorems 1∼6 and Corollaries 1 and 2, as well as the technical lemmas, are deferred to the supplementary material. The proofs of Theorems 1 and 4 are similar but differ essentially in some techniques for bounding probabilities, due to their different assumptions. For simplicity of explanation, we define four events:
$$E = \Big\{\sup_{w\in\Omega}\big\|\nabla\hat J_n(w)-\nabla J(w)\big\|_2 > t\Big\},\qquad E_1 = \Big\{\sup_{w\in\Omega}\Big\|\frac1n\sum_{i=1}^n\big(\nabla f(w,x_{(i)})-\nabla f(w_{k_w},x_{(i)})\big)\Big\|_2 > \frac t3\Big\},$$
$$E_2 = \Big\{\sup_{w^i_{k_w}\in N_i,\ i\in[l]}\Big\|\frac1n\sum_{i=1}^n\nabla f(w_{k_w},x_{(i)})-\mathbb E\nabla f(w_{k_w},x)\Big\|_2 > \frac t3\Big\},\qquad E_3 = \Big\{\sup_{w\in\Omega}\big\|\mathbb E\nabla f(w_{k_w},x)-\mathbb E\nabla f(w,x)\big\|_2 > \frac t3\Big\},$$
where $w_{k_w} = [w^1_{k_w}; w^2_{k_w};\cdots; w^l_{k_w}]$ is constructed by selecting $w^i_{k_w}\in\mathbb R^{d_i d_{i-1}}$ from an $\epsilon\sqrt{d_i d_{i-1}/d}$-net $N_i$ such that $\|w-w_{k_w}\|_2\le\epsilon$. Note that in Theorems 1 and 4, t is set to $\epsilon_g$ and $\epsilon_l$, respectively. Then we have $\mathbb P(E)\le\mathbb P(E_1)+\mathbb P(E_2)+\mathbb P(E_3)$, so we only need to bound $\mathbb P(E_1)$, $\mathbb P(E_2)$ and $\mathbb P(E_3)$ separately. For $\mathbb P(E_1)$ and $\mathbb P(E_3)$, we use the gradient Lipschitz constant and the properties of the ε-net to prove $\mathbb P(E_1)\le\varepsilon/2$ and $\mathbb P(E_3)=0$, while bounding $\mathbb P(E_2)$ needs more effort. Based on the assumptions, we prove that $\mathbb P(E_2)$ has a sub-exponential tail associated with the sample number n and the network parameters, and that it satisfies $\mathbb P(E_2)\le\varepsilon/2$ under proper conditions. Finally, combining the bounds of the three terms, we obtain the desired results.
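For readers unfamiliar with ε-nets, the following toy sketch builds a grid-based ε-covering net of the unit ball in low dimension and verifies the covering property used in bounding $\mathbb P(E_1)$ and $\mathbb P(E_3)$; the actual proofs use nets over each layer's weight space, and the logarithm of the net size is what produces the logarithmic terms in the bounds (an illustration only).

```python
import numpy as np
from itertools import product

d, eps = 2, 0.2
step = eps / np.sqrt(d)                  # grid spacing yields an eps-covering
grid = np.arange(-1.0, 1.0 + step, step)
net = np.array([p for p in product(grid, repeat=d)
                if np.linalg.norm(p) <= 1.0 + eps])
print(len(net))      # log|net| ~ d log(1/eps): the source of the log terms

rng = np.random.default_rng(2)
w = rng.normal(size=d)
w /= max(1.0, np.linalg.norm(w))         # an arbitrary point of B^d(1)
dists = np.linalg.norm(net - w, axis=1)
print(dists.min() <= eps)                # True: some net point is eps-close
```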
To prove Theorems 2 and 5, we first prove the uniform convergence of the empirical Hessian to its population Hessian. Then we define the set $D = \{w\in\Omega : \|\nabla J(w)\|_2 < \epsilon \text{ and } \inf_i|\lambda_i(\nabla^2 J(w))|\ge\zeta\}$. In this way, D can be decomposed into countably many components, each containing either exactly one non-degenerate stationary point or none. For each component, the uniform convergence of the gradient and results from differential topology guarantee that if J(w) has no stationary point there, then $\hat J_n(w)$ also has none, and vice versa. Similarly, for each component, the uniform convergence of the Hessian and results from differential topology guarantee that if J(w) has a unique non-degenerate stationary point there, then $\hat J_n(w)$ also has a unique non-degenerate stationary point with the same index. After establishing the exact correspondence between the non-degenerate stationary points of the empirical and population risks, we use the uniform convergence of the gradient and Hessian to bound the distance between corresponding pairs.
We adopt a similar strategy to prove Theorems 3 and 6. Specifically, we divide the event $\sup_{w\in\Omega}|\hat J_n(w)-J(w)| > t$ into $E_1$, $E_2$ and $E_3$, which have the same forms as their counterparts in the proof of Theorem 1 with the gradient replaced by the loss function. To prove $\mathbb P(E_1)\le\varepsilon/2$ and $\mathbb P(E_3)=0$, we can use the Lipschitz constant of the loss function and the ε-net properties. It remains to bound $\mathbb P(E_2)$. We prove that it also has a sub-exponential tail associated with the sample number n and the network parameters, and that it obeys $\mathbb P(E_2)\le\varepsilon/2$ under proper conditions. Then we utilize the uniform convergence of $\hat J_n(w)$ to prove the stability and generalization bounds of $\hat J_n(w)$ (i.e., Corollaries 1 and 2).
7 CONCLUSION
In this work, we provided a theoretical analysis of the landscape of empirical risk optimization for deep linear/nonlinear neural networks trained with (stochastic) gradient descent, including the properties of the gradient and stationary points of the empirical risk, as well as the uniform convergence, stability, and generalization of the empirical risk itself. To the best of our knowledge, most of these results are new to the deep learning community. These results also reveal that the depth l, the number s of nonzero weight entries, the network size d, and the width of a network are critical to the convergence rates. We also prove that the magnitude of the weight parameters is important to the convergence rate; indeed, small weight magnitudes are suggested. All the results are consistent with widely used network architectures in practice.
ACKNOWLEDGMENT
This work is partially supported by National University of Singapore startup grant R-263-000-C08133, Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112, NUS IDS R-263-000-C67-646 and ECRA R-263-000-C87-133.
REFERENCES R. Alessandro. Lecture notes of advanced statistical theory I, CMU. http://www.stat.cmu.edu/ ~arinaldo/36755/F16/Scribed_Lectures/LEC0914.pdf, 2016.
B. Bakshi and G. Stephanopoulos. Wave-net: A multiresolution, hierarchical neural network with localized learning. AIChE Journal, 39(1):57–81, 1993.
P. Bartlett and W. Maass. Vapnik-Chervonenkis dimension of neural nets. The handbook of brain theory and neural networks, pp. 1188–1192, 2003.
E. Baum. On the capabilities of multilayer perceptrons. Journal of complexity, 4(3):193–215, 1988.
A. Choromanska, M. Henaff, M. Mathieu, G. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015a.
A. Choromanska, Y. LeCun, and G. Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In COLT, pp. 1756–1760, 2015b.
R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, pp. 160–167, 2008.
Y. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pp. 2933–2941, 2014.
B. Dubrovin, A. Fomenko, and S. Novikov. Modern geometry—methods and applications: Part II: The geometry and topology of manifolds, volume 104. Springer Science & Business Media, 2012.
R. Eldan and O. Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907–940, 2016.
Y. Fyodorov and I. Williams. Replica symmetry breaking condition exposed by random matrix calculation of landscape complexity. Journal of Statistical Physics, 129(5-6):1081–1116, 2007.
R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In COLT, pp. 797–842, 2015.
A. Gonen and S. Shalev-Shwartz. Fast rates for empirical risk minimization of strict saddle problems. COLT, 2017.
A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645–6649, 2013.
D. Gromoll and W. Meyer. On differentiable functions with isolated critical points. Topology, 8(4):361–369, 1969.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.
G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7): 1527–1554, 2006.
G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
K. Kawaguchi. Deep learning without poor local minima. In NIPS, pp. 1097–1105, 2016.
P. Loh and M. J. Wainwright. Regularized m-estimators with nonconvexity: Statistical and algorithmic theory for local optima. JMLR, 16(Mar):559–616, 2015.
S. Mei, Y. Bai, and A. Montanari. The landscape of empirical risk for non-convex losses. Annals of Statistics, 2017.
S. Negahban, B. Yu, M. Wainwright, and P. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In NIPS, pp. 1348–1356, 2009.
S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In NIPS, 2011.
B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In COLT, pp. 1376–1401, 2015.
Q. Nguyen and M. Hein. The loss surface of deep and wide neural networks. In ICML, 2017.
P. Rigollet. Statistics S997 lecture notes, MIT Mathematics. MIT OpenCourseWare, pp. 23–24, 2015.
M. Rudelson and R. Vershynin. Hanson-wright inequality and sub-gaussian concentration. Electronic Communications in Probability, 18(82):1–9, 2013.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge Univ. Press, Cambridge, pp. 375–382, 2014a.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014b.
S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. JMLR, 11:2635–2670, 2010.
S. Shalev-Shwartz, O. Shamir, and S. Shammah. Failures of deep learning. ICML, 2017.
D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.
D. Soudry and E. Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015.
Y. Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. ICML, 2017.
V. N. Vapnik and V. Vapnik. Statistical learning theory, volume 1. Wiley New York, 1998.
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices, compressed sensing. Cambridge Univ. Press, Cambridge, pp. 210–268, 2012.
H. Xu and S. Mannor. Robustness and generalization. Machine Learning, 86(3):391–423, 2012.
Y. Zhang, J. Lee, and M. Jordan. ℓ1-regularized neural networks are improperly learnable in polynomial time. In ICML, pp. 993–1001, 2016.
Y. Zhang, P. Liang, and M. Wainwright. Convexified convolutional neural networks. ICML, 2017.
SUPPLEMENTARY MATERIAL OF EMPIRICAL RISK LANDSCAPE ANALYSIS FOR UNDERSTANDING DEEP NEURAL NETWORKS
A STRUCTURE OF THIS DOCUMENT
This document provides some additional notation and preliminaries for our analysis in Sec. B. We then prove Theorems 1∼3 and Corollary 1 for deep linear neural networks in Sec. C, and present the proofs of Theorems 4∼6 and Corollary 2 for deep nonlinear neural networks in Sec. D. In both Sec. C and Sec. D, we first present the technical lemmas needed for our final results, then the proofs of these lemmas, and subsequently use the lemmas to prove the desired results. Finally, we give the proofs of the remaining auxiliary lemmas.
B NOTATIONS AND PRELIMINARY TOOLS
Beyond the notation introduced in the manuscript, we need some additional notation in this document. We then introduce several lemmas that will be used later.
B.1 NOTATIONS
Throughout this document, we use $\langle\cdot,\cdot\rangle$ to denote the inner product, and $A\otimes C$ denotes the Kronecker product between A and C (which may be matrices or vectors). For a matrix $A\in\mathbb R^{n_1\times n_2}$, we use $\|A\|_F=\sqrt{\sum_{i,j}A_{ij}^2}$ to denote its Frobenius norm, where $A_{ij}$ is the (i,j)-th entry of A. We use $\|A\|_{op}=\max_i|\lambda_i(A)|$ to denote the operator norm of a matrix $A\in\mathbb R^{n_1\times n_1}$, where $\lambda_i(A)$ denotes the i-th eigenvalue of A. For a 3-way tensor $\mathcal A\in\mathbb R^{n_1\times n_2\times n_3}$, its operator norm is computed as
$$\|\mathcal A\|_{op} = \sup_{\|\lambda\|_2\le1}\big\langle\lambda^{\otimes 3},\mathcal A\big\rangle = \sup_{\|\lambda\|_2\le1}\sum_{i,j,k}\mathcal A_{ijk}\lambda_i\lambda_j\lambda_k,$$
where $\mathcal A_{ijk}$ denotes the (i,j,k)-th entry of $\mathcal A$. Also we denote the vectorization of $W^{(j)}$ (the weight matrix of the j-th layer) as $w^{(j)} = \mathrm{vec}(W^{(j)})\in\mathbb R^{d_j d_{j-1}}$.
We denote by $I_k$ the identity matrix of size $k\times k$. For notational simplicity, we further define $e\triangleq v^{(l)}-y$ as the output error vector. Then the squared loss is defined as $f(w;x,y)=\frac12\|e\|_2^2$, where $w=(w^{(1)};\cdots;w^{(l)})\in\mathbb R^d$ contains all the weight parameters.
B.2 TECHNICAL LEMMAS
We first introduce Lemmas 1 and 2, which are respectively used for bounding the $\ell_2$-norm of a vector and the operator norm of a matrix. Then we introduce Lemmas 3 and 4, which discuss the topology of functions. In Lemma 5, we give the relationship between the stability and generalization of the empirical risk.
Lemma 1. (Vershynin, 2012) For any vector $x\in\mathbb R^d$, its $\ell_2$-norm can be bounded as
$$\|x\|_2 \le \frac{1}{1-\epsilon}\sup_{\lambda\in\lambda_\epsilon}\langle\lambda, x\rangle,$$
where $\lambda_\epsilon=\{\lambda_1,\ldots,\lambda_{k_w}\}$ is an ε-covering net of $B^d(1)$.
Lemma 2. (Vershynin, 2012) For any symmetric matrix $X\in\mathbb R^{d\times d}$, its operator norm can be bounded as
$$\|X\|_{op} \le \frac{1}{1-2\epsilon}\sup_{\lambda\in\lambda_\epsilon}|\langle\lambda, X\lambda\rangle|,$$
where $\lambda_\epsilon=\{\lambda_1,\ldots,\lambda_{k_w}\}$ is an ε-covering net of $B^d(1)$.
Lemma 3. (Mei et al., 2017) Let $D\subseteq\mathbb R^d$ be a compact set with a $C^2$ boundary $\partial D$, and let $f,g:A\to\mathbb R$ be $C^2$ functions defined on an open set A with $D\subseteq A$. Assume that for all $w\in\partial D$ and all $t\in[0,1]$, $t\nabla f(w)+(1-t)\nabla g(w)\ne 0$. Finally, assume that the Hessian $\nabla^2 f(w)$ is non-degenerate and has index equal to r for all $w\in D$. Then the following properties hold:
(1) If g has no critical point in D, then f has no critical point in D.
(2) If g has a unique critical point w in D that is non-degenerate with index r, then f also has a unique critical point w′ in D, with index equal to r.
Lemma 4. (Mei et al., 2017) Suppose that $F(w):\Theta\to\mathbb R$ is a $C^2$ function, $w\in\Theta$. Assume that $\{w^{(1)},\ldots,w^{(m)}\}$ are its non-degenerate critical points and let $D=\{w\in\Theta : \|\nabla F(w)\|_2<\epsilon \text{ and } \inf_i|\lambda_i(\nabla^2 F(w))|\ge\zeta\}$. Then D can be decomposed into (at most) countably many components, each containing either exactly one critical point or no critical point. Concretely, there exist disjoint open sets $\{D_k\}_{k\in\mathbb N}$, with $D_k$ possibly empty for $k\ge m+1$, such that
$$D = \cup_{k=1}^\infty D_k.$$
Furthermore, $w^{(k)}\in D_k$ for $1\le k\le m$, and each $D_k$, $k\ge m+1$, contains no stationary point.
Lemma 5. (Shalev-Shwartz & Ben-David, 2014b; Gonen & Shalev-Shwartz, 2017) Assume that $\mathcal D$ is a sample distribution and a randomized algorithm A is employed for optimization. Suppose that $((x'_{(1)},y'_{(1)}),\cdots,(x'_{(n)},y'_{(n)}))\sim\mathcal D$ and $w^n=\arg\min_w\hat J_n(w)$. For every $j\in\{1,\cdots,n\}$, suppose $w_*^j=\arg\min_w\frac1{n-1}\sum_{i\ne j}f_i(w;x_{(i)},y_{(i)})$. For an arbitrary distribution $\mathcal D$, we have
$$\Big|\mathbb E_{S\sim\mathcal D,\,A,\,(x'_{(j)},y'_{(j)})\sim\mathcal D}\,\frac1n\sum_{j=1}^n\big(f_j^*-f_j\big)\Big| = \Big|\mathbb E_{S\sim\mathcal D,\,A}\big(J(w^n)-\hat J_n(w^n)\big)\Big|,$$
where $f_j^*$ and $f_j$ respectively denote $f_j(w_*^j;x'_{(j)},y'_{(j)})$ and $f_j(w^n;x'_{(j)},y'_{(j)})$.
C PROOFS FOR DEEP LINEAR NEURAL NETWORKS
In this section, we first present the technical lemmas in Sec. C.1 and then we give the proofs of these lemmas in Sec. C.2. Next, we utilize these lemmas to prove the results in Theorems 1∼ 3 and Corollary 1 in Sec. C.3. Finally, we give the proofs of other lemmas in Sec. C.4.
C.1 TECHNICAL LEMMAS
Here we present the technical lemmas for proving our desired results. For brevity, we also define $B_{s:t}$ as follows:
$$B_{s:t}\triangleq W^{(s)}W^{(s-1)}\cdots W^{(t)}\in\mathbb R^{d_s\times d_{t-1}},\ (s\ge t);\qquad B_{s:t}\triangleq I,\ (s<t). \tag{4}$$
Lemma 6. Assume that the activation functions in the deep neural network f(w,x) are linear. Then the gradient of f(w,x) with respect to $w^{(j)}$ can be written as
$$\nabla_{w^{(j)}}f(w,x)=\big((B_{j-1:1}x)\otimes B_{l:j+1}^T\big)e,\quad(j=1,\cdots,l),$$
where $\otimes$ denotes the Kronecker product. The Hessian is then the block matrix
$$\nabla^2 f(w,x)=\begin{bmatrix}\nabla_{w^{(1)}}\big(\nabla_{w^{(1)}}f(w,x)\big)&\cdots&\nabla_{w^{(1)}}\big(\nabla_{w^{(l)}}f(w,x)\big)\\ \vdots&\ddots&\vdots\\ \nabla_{w^{(l)}}\big(\nabla_{w^{(1)}}f(w,x)\big)&\cdots&\nabla_{w^{(l)}}\big(\nabla_{w^{(l)}}f(w,x)\big)\end{bmatrix},$$
where $Q_{st}\triangleq\nabla_{w^{(s)}}\big(\nabla_{w^{(t)}}f(w,x)\big)$ is given by
$$Q_{st}=\begin{cases}\big(B_{t-1:s+1}^T\big)\otimes\big(B_{s-1:1}x\,e^TB_{l:t+1}^T\big)+\big(B_{s-1:1}xx^TB_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^TB_{l:t+1}\big),&s<t,\\[2pt] \big(B_{s-1:1}xx^TB_{s-1:1}^T\big)\otimes\big(B_{l:s+1}^TB_{l:s+1}\big),&s=t,\\[2pt] \big(B_{l:s+1}^Te\,x^TB_{t-1:1}^T\big)\otimes B_{s-1:t+1}+\big(B_{s-1:1}xx^TB_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^TB_{l:t+1}\big),&s>t.\end{cases}$$
Lemma 7. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any t>0, the objective f(w,x) obeys
$$\mathbb P\Big(\frac1n\sum_{i=1}^n\big(f(w,x_{(i)})-\mathbb E f(w,x_{(i)})\big)>t\Big)\le 2\exp\Big(-c_{f'}\, n\min\Big(\frac{t^2}{\omega_f^2\max(d_l\omega_f^2\tau^4,\tau^2)},\ \frac{t}{\omega_f^2\tau^2}\Big)\Big),$$
where $c_{f'}$ is a positive constant and $\omega_f=r^l$.
Lemma 8. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any t>0 and any unit vector $\lambda\in S^{d-1}$, the gradient $\nabla f(w,x)$ obeys
$$\mathbb P\Big(\frac1n\sum_{i=1}^n\big\langle\lambda,\nabla_w f(w,x_{(i)})-\mathbb E\nabla_w f(w,x_{(i)})\big\rangle>t\Big)\le 3\exp\Big(-c_{g'}\, n\min\Big(\frac{t^2}{l\max(\omega_g\tau^2,\omega_g\tau^4,\omega_{g'}\tau^2)},\ \frac{t}{\sqrt{l\omega_g}\,\max(\tau,\tau^2)}\Big)\Big),$$
where $c_{g'}$ is a constant, $\omega_g=c_q r^{2(2l-1)}$ and $\omega_{g'}=c_q r^{2(l-1)}$, in which $c_q=\sqrt{\max_{0\le i\le l}d_i}$.
Lemma 9. Suppose Assumption 1 on the input data x holds and the activation functions in the deep neural network are linear. Then for any t>0 and any unit vector $\lambda\in S^{d-1}$, the Hessian $\nabla^2 f(w,x)$ obeys
$$\mathbb P\Big(\frac1n\sum_{i=1}^n\big\langle\lambda,\big(\nabla_w^2 f(w,x_{(i)})-\mathbb E\nabla_w^2 f(w,x_{(i)})\big)\lambda\big\rangle>t\Big)\le 5\exp\Big(-c_{h'}\, n\min\Big(\frac{t^2}{\tau^2 l^2\max(\omega_g,\omega_g\tau^2,\omega_h)},\ \frac{t}{\sqrt{\omega_g}\,l\max(\tau,\tau^2)}\Big)\Big),$$
where $\omega_g=r^{4(l-1)}$ and $\omega_h=r^{2(l-2)}$.
Lemma 10. Suppose the activation functions in the deep neural network are linear. Then for any $w\in B^d(r)$ and $x\in B^{d_0}(r_x)$, we have
$$\|\nabla_w f(w,x)\|_2\le\sqrt{\alpha_g},\quad\text{where }\alpha_g=c_t\, l\, r_x^4 r^{4l-2}$$
in which $c_t$ is a constant. Further, for any $w\in B^d(r)$ and $x\in B^{d_0}(r_x)$, we also have
$$\|\nabla^2 f(w,x)\|_{op}\le\|\nabla^2 f(w,x)\|_F\le l\sqrt{\alpha_l},\quad\text{where }\alpha_l\triangleq c_{t'}\, r_x^4 r^{4l-2}$$
in which $c_{t'}$ is a constant. Under the same conditions, we can bound the operator norm of $\nabla^3 f(w,x)$: there exists a universal constant $\alpha_p$ such that $\|\nabla^3 f(w,x)\|_{op}\le\alpha_p$.
∥∥∇3f(w,x)∥∥op ≤ αp. Lemma 11. Suppose Assumption 1 on the input data x holds and the activation functions in deep neural network are linear functions. Then there exist two universal constant cg and ch such that the sample Hessian converges uniformly to the population Hessian in operator norm. Specifically, there exit two universal constants ch1 and ch2 such that if n ≥ ch2 max( α2pr 2
τ2l2ω2hε 2s log(d/l) , s log(d/l)/(lτ2)), then
sup w∈Ω ∥∥∥∇2Ĵn(w)−∇2J(w)∥∥∥op≤ch1τ lωh √ d log(nl)+log(20/ε) n
holds with probability at least 1− ε, where ωh = max ( τr2(l−1), r2(l−2), rl−2 ) .
C.2 PROOFS OF TECHNICAL LEMMAS
To prove the above lemmas, we first introduce some useful results.
Lemma 12. (Rudelson & Vershynin, 2013) Assume that $x=(x_1;x_2;\cdots;x_k)\in\mathbb R^k$ is a random vector with independent components $x_i$ which have zero mean and are $\tau_i^2$-sub-Gaussian, with $\max_i\tau_i^2\le\tau^2$. Let A be a $k\times k$ matrix. Then we have
$$\mathbb E\exp\Big(\lambda\Big(\sum_{i,j:\,i\ne j}A_{ij}x_ix_j-\mathbb E\sum_{i,j:\,i\ne j}A_{ij}x_ix_j\Big)\Big)\le\exp\big(2\tau^2\lambda^2\|A\|_F^2\big),\qquad |\lambda|\le\frac{1}{2\tau\|A\|_2}.$$
Lemma 13. Assume that $x=(x_1;x_2;\cdots;x_k)\in\mathbb R^k$ is a random vector with independent components $x_i$ which have zero mean and are $\tau_i^2$-sub-Gaussian, with $\max_i\tau_i^2\le\tau^2$. Let a be a k-dimensional vector. Then we have
$$\mathbb E\exp\Big(\lambda\Big(\sum_{i=1}^k a_i x_i^2-\mathbb E\sum_{i=1}^k a_i x_i^2\Big)\Big)\le\exp\Big(128\lambda^2\tau^4\sum_{i=1}^k a_i^2\Big),\qquad |\lambda|\le\frac{1}{\tau^2\max_i a_i}.$$
Lemma 14. For $B_{s:t}$ defined in Eqn. (4), we have the following properties:
$$\|B_{s:t}\|_{op}\le\|B_{s:t}\|_F\le\omega_r\quad\text{and}\quad\|B_{l:1}\|_{op}\le\|B_{l:1}\|_F\le\omega_f,$$
where $\omega_r=r^{s-t+1}\le\max(r,r^l)$ and $\omega_f=r^l$.
Lemma 13 is useful for bounding probabilities. The two inequalities in Lemma 14 can be obtained by using $\|w^{(j)}\|_2\le r$ $(\forall j=1,\cdots,l)$. We defer the proofs of Lemmas 13 and 14 to Sec. C.4.2.
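A quick numerical sanity check of Lemma 14 (illustrative only): enforce $\|w^{(j)}\|_2\le r$, i.e., $\|W^{(j)}\|_F\le r$, and verify $\|B_{l:1}\|_{op}\le\|B_{l:1}\|_F\le r^l$.

```python
import numpy as np

rng = np.random.default_rng(3)
r, dims = 0.9, [4, 6, 5, 3]                  # layer sizes d_0, ..., d_l
Ws = []
for d_in, d_out in zip(dims[:-1], dims[1:]):
    W = rng.normal(size=(d_out, d_in))
    Ws.append(W * (r / np.linalg.norm(W)))   # enforce ||w^{(j)}||_2 = r

B = np.eye(dims[0])
for W in Ws:                                 # B_{l:1} = W^{(l)} ... W^{(1)}
    B = W @ B
l = len(Ws)
print(np.linalg.norm(B, 2) <= np.linalg.norm(B) <= r ** l)  # True: op <= F <= r^l
```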
C.2.1 PROOF OF LEMMA 6
Proof. When the activation functions are linear, we can easily compute the gradient of f(w,x) with respect to $w^{(j)}$:
$$\nabla_{w^{(j)}}f(w,x)=\big((B_{j-1:1}x)\otimes B_{l:j+1}^T\big)e,\quad(j=1,\cdots,l),$$
where $\otimes$ denotes the Kronecker product. Now we consider the computation of the Hessian matrix. For brevity, let $Q_s=\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)$. Then we can compute $\nabla^2_{w^{(s)}}f(w,x)$ as follows:
$$\begin{aligned}\nabla^2_{w^{(s)}}f(w,x)&=\frac{\partial^2 f(w,x)}{\partial w_{(s)}^T\partial w_{(s)}}=\frac{\partial(Q_s e)}{\partial w_{(s)}^T}=\frac{\partial\,\mathrm{vec}(Q_s e)}{\partial w_{(s)}^T}=\frac{\partial\,\mathrm{vec}\big(Q_s B_{l:s+1}W^{(s)}B_{s-1:1}x\big)}{\partial w_{(s)}^T}=\frac{\partial\big((B_{s-1:1}x)^T\otimes(Q_s B_{l:s+1})\big)\mathrm{vec}\big(W^{(s)}\big)}{\partial w_{(s)}^T}\\
&=(B_{s-1:1}x)^T\otimes\Big(\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)B_{l:s+1}\Big)\overset{(i)}{=}(B_{s-1:1}x)^T\otimes\Big((B_{s-1:1}x)\otimes\big(B_{l:s+1}^TB_{l:s+1}\big)\Big)\\
&\overset{(ii)}{=}\Big((B_{s-1:1}x)^T\otimes(B_{s-1:1}x)\Big)\otimes\big(B_{l:s+1}^TB_{l:s+1}\big)\overset{(iii)}{=}\Big((B_{s-1:1}x)(B_{s-1:1}x)^T\Big)\otimes\big(B_{l:s+1}^TB_{l:s+1}\big),\end{aligned}$$
where (i) holds since $B_{s-1:1}x$ is a vector and for any vector x we have $(x\otimes A)B=x\otimes(AB)$; (ii) holds because for any three matrices $Z_1,Z_2,Z_3$ of proper sizes, $(Z_1\otimes Z_2)\otimes Z_3=Z_1\otimes(Z_2\otimes Z_3)$; and (iii) holds because for any two vectors $z_1,z_2$ of proper sizes, $z_1z_2^T=z_1\otimes z_2^T=z_2^T\otimes z_1$. Then we consider the case s>t:
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}}f(w,x)\big)=\frac{\partial^2 f(w,x)}{\partial w_{(t)}^T\partial w_{(s)}}=\frac{\partial(Q_s e)}{\partial w_{(t)}^T}=\frac{\partial\,\mathrm{vec}(Q_s e)}{\partial w_{(t)}^T}=\frac{\partial\,\mathrm{vec}\big(Q_s B_{l:t+1}W^{(t)}B_{t-1:1}x\big)}{\partial w_{(t)}^T}+\frac{\partial\,\mathrm{vec}\big(\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)e\big)}{\partial w_{(t)}^T}.$$
Here, in the first term we treat $Q_s$ as a constant matrix that does not depend on $W^{(t)}$; similarly, in the second term we treat e as a constant vector. Since
$$\frac{\partial\,\mathrm{vec}\big(Q_s B_{l:t+1}W^{(t)}B_{t-1:1}x\big)}{\partial w_{(t)}^T}=\big(B_{s-1:1}xx^TB_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^TB_{l:t+1}\big),$$
we only need to consider
$$\begin{aligned}\frac{\partial\,\mathrm{vec}\big(\big((B_{s-1:1}x)\otimes B_{l:s+1}^T\big)e\big)}{\partial w_{(t)}^T}&=\frac{\partial\,\mathrm{vec}\big((B_{s-1:1}x)\otimes(B_{l:s+1}^Te)\big)}{\partial w_{(t)}^T}=\frac{\partial\,\mathrm{vec}\big((B_{s-1:1}x)(B_{l:s+1}^Te)^T\big)}{\partial w_{(t)}^T}\\
&=\frac{\partial\,\mathrm{vec}\big(B_{s-1:t+1}W^{(t)}(B_{t-1:1}x\,e^TB_{l:s+1})\big)}{\partial w_{(t)}^T}=\frac{\partial\big(B_{t-1:1}x\,e^TB_{l:s+1}\big)^T\otimes B_{s-1:t+1}\,\mathrm{vec}\big(W^{(t)}\big)}{\partial w_{(t)}^T}\\
&=\big(B_{t-1:1}x\,e^TB_{l:s+1}\big)^T\otimes B_{s-1:t+1}.\end{aligned}$$
Therefore, for s>t, combining the above two terms yields
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}}f(w,x)\big)=\big(B_{l:s+1}^Te\,x^TB_{t-1:1}^T\big)\otimes B_{s-1:t+1}+\big(B_{s-1:1}xx^TB_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^TB_{l:t+1}\big).$$
Then, by a similar method, we can compute the Hessian for the case s<t:
$$\nabla_{w^{(t)}}\big(\nabla_{w^{(s)}}f(w,x)\big)=\big(B_{t-1:s+1}^T\big)\otimes\big(B_{s-1:1}x\,e^TB_{l:t+1}^T\big)+\big(B_{s-1:1}xx^TB_{t-1:1}^T\big)\otimes\big(B_{l:s+1}^TB_{l:t+1}\big).$$
The proof is completed.
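As a sanity check of the gradient formula in Lemma 6, the following sketch compares $\big((B_{j-1:1}x)\otimes B_{l:j+1}^T\big)e$ against a finite-difference gradient of $f=\frac12\|v^{(l)}-y\|_2^2$ (column-major vectorization of $W^{(j)}$); the sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
dims = [3, 4, 5, 2]                                  # d_0, ..., d_l with l = 3
Ws = [rng.normal(size=(o, i)) for i, o in zip(dims[:-1], dims[1:])]
x, y = rng.normal(size=dims[0]), rng.normal(size=dims[-1])

def Bst(s, t):                                       # B_{s:t}; identity if s < t
    B = np.eye(dims[t - 1])
    for j in range(t, s + 1):
        B = Ws[j - 1] @ B
    return B

l, j = len(Ws), 2                                    # test the formula at layer j
e = Bst(l, 1) @ x - y                                # e = v^{(l)} - y
lhs = np.kron((Bst(j - 1, 1) @ x).reshape(-1, 1), Bst(l, j + 1).T) @ e

def f(Wj):                                           # f = 0.5 ||v^{(l)} - y||^2
    v = x
    for k, W in enumerate(Ws, start=1):
        v = (Wj if k == j else W) @ v
    return 0.5 * np.sum((v - y) ** 2)

G = np.zeros_like(Ws[j - 1]); h = 1e-6               # finite-difference gradient
for a in range(G.shape[0]):
    for b in range(G.shape[1]):
        Wp = Ws[j - 1].copy(); Wp[a, b] += h
        G[a, b] = (f(Wp) - f(Ws[j - 1])) / h
print(np.max(np.abs(G.flatten(order='F') - lhs)))    # tiny: the formulas agree
```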
C.2.2 PROOF OF LEMMA 7
Proof. We first prove that $v^{(l)}$, defined in Eqn. (5), is sub-Gaussian:
$$v^{(l)} = W^{(l)}\cdots W^{(1)}x = B_{l:1}x. \tag{5}$$
Then, by the convexity of $\exp(\lambda t)$ in λ and Lemma 14, we can obtain
$$\mathbb E\exp\big(\langle\lambda,v^{(l)}-\mathbb E v^{(l)}\rangle\big)=\mathbb E\exp\big(\langle\lambda,B_{l:1}x-\mathbb E B_{l:1}x\rangle\big)\le\mathbb E\exp\big(\langle B_{l:1}^T\lambda,x\rangle\big)\le\exp\Big(\frac{\|B_{l:1}^T\lambda\|_2^2\tau^2}{2}\Big)\overset{(i)}{\le}\exp\Big(\frac{\omega_f^2\tau^2\|\lambda\|_2^2}{2}\Big), \tag{6}$$
where (i) uses the conclusion $\|B_{l:1}\|_{op}\le\|B_{l:1}\|_F\le\omega_f$ from Lemma 14. This means that $v^{(l)}$ is centered and $\omega_f^2\tau^2$-sub-Gaussian. Accordingly, the k-th entry of $v^{(l)}$ is $z_k\tau^2$-sub-Gaussian, where $z_k$ is a universal positive constant with $\max_k z_k\le\omega_f^2$.
Let $v_i^{(l)}$ denote the output for the i-th sample $x_{(i)}$. By Lemma 13, we have that for s>0,
$$\begin{aligned}\mathbb P\Big(\frac1n\sum_{i=1}^n\big(\|v_i^{(l)}\|_2^2-\mathbb E\|v_i^{(l)}\|_2^2\big)>\frac t2\Big)&=\mathbb P\Big(s\sum_{i=1}^n\big(\|v_i^{(l)}\|_2^2-\mathbb E\|v_i^{(l)}\|_2^2\big)>\frac{nst}2\Big)\\
&\overset{(i)}{\le}\exp\Big(-\frac{snt}2\Big)\,\mathbb E\exp\Big(s\sum_{i=1}^n\big(\|v_i^{(l)}\|_2^2-\mathbb E\|v_i^{(l)}\|_2^2\big)\Big)\\
&\overset{(ii)}{=}\exp\Big(-\frac{snt}2\Big)\prod_{i=1}^n\mathbb E\exp\Big(s\big(\|v_i^{(l)}\|_2^2-\mathbb E\|v_i^{(l)}\|_2^2\big)\Big)\\
&\overset{(iii)}{\le}\exp\Big(-\frac{snt}2\Big)\prod_{i=1}^n\exp\big(128\, d_l s^2\omega_f^4\tau^4\big),\qquad |s|\le\frac1{\omega_f^2\tau^2}\\
&\overset{(iv)}{\le}\exp\Big(-c'n\min\Big(\frac{t^2}{d_l\omega_f^4\tau^4},\ \frac t{\omega_f^2\tau^2}\Big)\Big).\end{aligned}$$
Note that (i) holds by Chebyshev's inequality, (ii) holds since the $x_{(i)}$ are independent, (iii) is established by applying Lemma 13, and (iv) follows by optimizing over s. Since $v^{(l)}$ is sub-Gaussian, we have
$$\begin{aligned}\mathbb P\Big(\frac1n\sum_{i=1}^n\big(y^Tv_i^{(l)}-\mathbb E y^Tv_i^{(l)}\big)>\frac t2\Big)&\le\mathbb P\Big(s\sum_{i=1}^n\big(y^Tv_i^{(l)}-\mathbb E y^Tv_i^{(l)}\big)>\frac{nst}2\Big)\\
&\le\exp\Big(-\frac{nst}2\Big)\,\mathbb E\exp\Big(s\sum_{i=1}^n\big(y^Tv_i^{(l)}-\mathbb E y^Tv_i^{(l)}\big)\Big)\\
&\le\exp\Big(-\frac{nst}2\Big)\prod_{i=1}^n\mathbb E\exp\Big(s\big(y^Tv_i^{(l)}-\mathbb E y^Tv_i^{(l)}\big)\Big)\\
&\overset{(i)}{\le}\exp\Big(-\frac{nst}2\Big)\prod_{i=1}^n\exp\Big(\frac{\omega_f^2\tau^2s^2\|y\|_2^2}2\Big)\overset{(ii)}{\le}\exp\Big(-\frac{nt^2}{8\omega_f^2\tau^2\|y\|_2^2}\Big),\end{aligned}$$
where (i) holds because of Eqn. (6) and (ii) follows by optimizing over s.
Since the loss function is $f(w,x)=\frac12\|v^{(l)}-y\|_2^2$, we have
$$f(w,x)-\mathbb E f(w,x)=\frac12\big(\|v^{(l)}\|_2^2-\mathbb E\|v^{(l)}\|_2^2\big)-\big(y^Tv^{(l)}-\mathbb E y^Tv^{(l)}\big).$$
Therefore, combining the two tail bounds above via a union bound, we have
$$\begin{aligned}\mathbb P\Big(\frac1n\sum_{i=1}^n\big(f(w,x_{(i)})-\mathbb E f(w,x_{(i)})\big)>t\Big)&\le\mathbb P\Big(\frac1n\sum_{i=1}^n\big(\|v_i^{(l)}\|_2^2-\mathbb E\|v_i^{(l)}\|_2^2\big)>\frac t2\Big)+\mathbb P\Big(\frac1n\sum_{i=1}^n\big(y^Tv_i^{(l)}-\mathbb E y^Tv_i^{(l)}\big)>\frac t2\Big)\\
&\le 2\exp\Big(-c_{f'}n\min\Big(\frac{t^2}{\omega_f^2\max(d_l\omega_f^2\tau^4,\tau^2)},\ \frac t{\omega_f^2\tau^2}\Big)\Big),\end{aligned}$$
which establishes Lemma 7.

1. What is the main contribution of the paper regarding the relationship between empirical and true population loss landscapes?
2. What are the three answers provided by the analysis in terms of when empirical quantities are close to true quantities?
3. Why does depth play a role in the convergence of empirical to true values in deep linear networks?
4. How do the upper bounds found in the paper compare to those based on Rademacher complexity or VC dimension?
5. What is the suggestion made by another reviewer regarding comparing the results to simple linear regression?
6. What is the overall assessment of the paper's contribution to the deep learning theory literature?
7. What is the recommendation for improving the paper by providing more intuitive statements about the implications of the results for practice?

Review
Overall, this work seems like a reasonable attempt to answer the question of how the empirical loss landscape relates to the true population loss landscape. The analysis answers:
1) When empirical gradients are close to true gradients
2) When empirical isolated saddle points are close to true isolated saddle points
3) When the empirical risk is close to the true risk.
The answers are all of the form that if the number of training examples exceeds a quantity that grows with the number of layers, width and the exponential of the norm of the weights with respect to depth, then empirical quantities will be close to true quantities. I have not verified the proofs in this paper (given short notice to review) but the scaling laws in the upper bounds found seem reasonably correct.
Another reviewer's worry about why depth plays a role in the convergence of empirical to true values in deep linear networks is a reasonable one, but I suspect that depth will necessarily play a role even in deep linear nets, because the backpropagation of gradients in linear nets can still lead to exponential propagation of errors between empirical and true quantities due to finite training data. Moreover, the loss surface of deep linear networks depends on depth even though the expressive capacity does not. An analysis of dynamics on this loss surface was presented in Saxe et al. (ICLR 2014), which could be cited to address that reviewer's concern. However, the reviewer's suggestion that the results be compared to what is known more exactly for simple linear regression is a nice one.
Overall, I believe this paper is a nice contribution to the deep learning theory literature. However, it would be even better to help the reader with more intuitive statements about the implications of their results for practice, and about the gap between their upper bounds and practice, especially given the intense interest in the generalization error problem. Because their upper bounds look similar to those based on Rademacher complexity or VC dimension (although they claim theirs are a little tighter), they should plug numbers taken from trained neural networks into their upper bounds and see what the numerical evaluation of these bounds turns out to be in situations of practical interest, where deep networks show good generalization performance despite having significantly less training data than parameters. I suspect their upper bounds will be loose, but still, it would be an excellent contribution to the literature to quantitatively compare theory and practice with bounds that are claimed to be slightly tighter than previous ones. Even if they are loose, identifying the degree of looseness could inspire interesting future work.
ICLR

Title: Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
Abstract
An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.
1 INTRODUCTION AND OVERVIEW
Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs. This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset S, on a modified version of S (call it S′) where the labels are randomized and observing that the training accuracy on S′ is very high, though, of course, the test accuracy is no better than chance (Zhang et al., 2017). This leads to an important open question in the Deep Learning community (Zhang et al. (2017); Arpit et al. (2017); Bartlett et al. (2017); Kawaguchi et al. (2017); Neyshabur et al. (2018); Arora et al. (2018); Belkin et al. (2019); Rahaman et al. (2019); Nagarajan & Kolter (2019), etc.): Among all maps that fit a real dataset, how does Gradient Descent (GD) find one that generalizes well? This is the question we address in this paper.
We start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees. However, there is no mystery with trees: A typical tree construction algorithm splits the training set recursively into similar subsets based on input features. If no similarity is found, eventually, each example is put into its own leaf to achieve good training accuracy (but, of course, at the cost of poor generalization). Thus, trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset (e.g. Chatterjee & Mishchenko (2019, Expt. 5)).
Is it possible that something similar happens with GD? We believe so. The type of randomized-label experiments described above show that if there are common patterns to be found, then GD finds them. If not, it fits each example on a case-by-case basis. The question then is, what is it about the dynamics of GD that makes it possible to extract common patterns from the data? And what does it mean for a pattern to be common?
Since the only change to the network parameters in GD comes from the gradients, the mechanism to detect commonality amongst examples must be through the gradients. We propose that this commonality detection can be explained as follows:
1. Gradients are coherent, i.e, similar examples (or parts of examples) have similar gradients (or similar components of gradients) and dissimilar examples have dissimilar gradients.
2. Since the overall gradient is the sum of the per-example gradients, it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up.
3. Since network parameters are updated proportionally to gradients, they change faster in the direction of stronger gradients.
4. Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few (or one example).
For convenience, we refer to this as the Coherent Gradients hypothesis.
It is instructive to work through the proposed mechanism in the context of a simple thought experiment. Consider a training set with two examples a and b. At some point in training, suppose the gradient of a, ga, can be decomposed into two orthogonal components ga1 and ga2 of roughly equal magnitude, i.e., there are two, equally good, independent ways in which the network can better fit a (by using say two disjoint parts of the network). Likewise, for b. Now, further suppose that one of the two ways is common to both a and b, i.e., say ga2 = gb2 = gab, whereas, the other two are example specific, i.e., 〈ga1 , gb1〉 = 0. Now, the overall gradient is
$$g = g_a + g_b = g_{a_1} + 2\,g_{ab} + g_{b_1}.$$
Observe that the gradient is stronger in the direction that simultaneously helps both examples, and thus the corresponding parameter changes are bigger than those that benefit only one example.¹
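The thought experiment can be reproduced in a couple of lines (a toy illustration with hypothetical orthogonal directions):

```python
import numpy as np

g_ab = np.array([1.0, 0.0, 0.0])     # shared (coherent) direction
g_a1 = np.array([0.0, 1.0, 0.0])     # direction specific to example a
g_b1 = np.array([0.0, 0.0, 1.0])     # direction specific to example b
g = (g_a1 + g_ab) + (g_b1 + g_ab)    # overall gradient g = g_a + g_b
print(g)                             # [2. 1. 1.]: the shared direction is 2x
# a step of size alpha moves 2*alpha along the common pattern but only
# alpha along each example-specific direction
```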
It is important to emphasize that the notion of similarity used above (i.e., which examples are considered similar) is not a constant but changes in the course of training as network parameters change. It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent. We say “mostly” because even with random initialization, examples that are syntactically close are treated similarly (e.g., two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other).
The relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability (Bousquet & Elisseeff, 2002): strong gradient directions are more stable since the presence or absence of a single example does not impact them as much, as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set. With this observation, we can reason inductively about the stability of GD: since the initial values of the parameters do not depend on the training data, the initial function mapping examples to their gradients is stable. Now, if all parameter updates are due to strong gradient directions, then stability is preserved. However, if some parameter updates are due to weak gradient directions, then stability is diminished. Since stability (suitably formalized) is equivalent to generalization (Shalev-Shwartz et al., 2010), this allows us to see how generalization may degrade as training progresses. Based on this insight, we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting.
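The modification alluded to above is not spelled out in this excerpt, so the following is only one plausible instantiation of "suppressing weak gradient directions", not necessarily the authors' method: aggregate per-example gradients coordinate-wise by the median rather than the mean, so a coordinate moves only when many examples agree. Other robust aggregators (e.g., trimmed means) would serve the same purpose.

```python
import numpy as np

def coherent_step(per_example_grads, lr=0.1):
    # coordinate-wise median: near zero unless most examples agree in sign;
    # a sketch of weak-direction suppression, not the paper's exact algorithm
    return -lr * np.median(per_example_grads, axis=0)

grads = np.array([[1.0,  0.9, -2.0],   # rows = per-example gradients; coords
                  [1.1,  1.0,  0.0],   # 1 and 2 are (mostly) coherent across
                  [0.9, -1.2,  0.0]])  # examples, coord 3 is one-example-driven
print(coherent_step(grads))            # ~[-0.10, -0.09, -0.00]: coord 3 damped
```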
In addition to providing insight into why GD generalizes in practice, we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature:
(a) Learning is slower with random labels than with real labels (Zhang et al., 2017; Arpit et al., 2017)
(b) Robustness to large amounts of label noise (Rolnick et al., 2017)
(c) Early stopping leads to better generalization (Caruana et al., 2000)
(d) Increasing capacity improves generalization (Caruana et al., 2000; Neyshabur et al., 2018)
(e) The existence of adversarial initialization schemes (Liu et al., 2019)
(f) GD detects common patterns even when trained with random labels (Chatterjee & Mishchenko, 2019)

¹While the mechanism is easiest to see with full or large minibatches, we believe it holds even for small minibatches (though there one has to consider the bias in updates over time).
A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training. Our approach, therefore, is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory. As part of these experiments, we replicate the observations (a)–(c) in the literature noted above, and analyze the corresponding explanations provided by Coherent Gradients (§2), and outline for future work how (d)–(f) may be accounted for (§5). In this paper, we limit our study to simple baselines: vanilla Stochastic Gradient Descent (SGD) on MNIST using fully connected networks. We believe that this is a good starting point, since even in this simple setting, with all frills eliminated (e.g., inductive bias from architecture or explicit regularization, or a more sophisticated optimization procedure), we are challenged to find a satisfactory explanation of why SGD generalizes well. Furthermore, our prior is that the difference between weak and strong directions is small at any one step of training, and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier. It also has the benefit of having a smaller carbon footprint and being easier to reproduce. Finally, based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly.
2 EFFECT OF REDUCING SIMILARITY BETWEEN EXAMPLES
Our first test of the Coherent Gradients hypothesis is to see what happens when we reduce similarity between examples. Although, at any point during training, we do not know which examples are similar and which are different, we can (with high probability) reduce the similarity among training examples simply by injecting label noise. In other words, under any notion of similarity, adding label noise to a dataset that has clean labels is likely to make similar examples less similar. Note that this perturbation does not reduce coherence since gradients still depend on the examples. (To break coherence, we would have to make the gradients independent of the training examples, which would require perturbing SGD itself and not just the dataset.)
2.1 SETUP
For our baseline, we use the standard MNIST dataset of 60,000 training examples and 10,000 test examples. Each example is a 28x28 pixel grayscale handwritten digit along with a label (‘0’–‘9’). We train a fully connected network on this dataset. The network has one hidden layer with 2048 ReLUs and an output layer with a 10-way softmax. We initialize it with Xavier and train using vanilla SGD (i.e., no momentum) using cross entropy loss with a constant learning rate of 0.1 and a minibatch size of 100 for 105 steps (i.e., about 170 epochs). We do not use any explicit regularizers.
We perturb the baseline by modifying only the dataset and keeping all other aspects of the architecture and learning algorithm fixed. The dataset is modified by adding various amounts of noise (25%, 50%, 75%, and 100%) to the labels of the training set (but not the test set). This noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels. Thus, when we add 25% label noise, we still expect about 75% + 0.1 * 25%, i.e., 77.5% of the examples to have unchanged (i.e. “correct”) labels which we call the proper accuracy of the modified dataset. In what follows, we call examples with unchanged labels, pristine, and the remaining, corrupt. Also, from this perspective, it is convenient to refer to the original MNIST dataset as having 0% label noise.
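A minimal sketch of the noise-injection procedure (here labels in the chosen subset are re-drawn uniformly at random, which matches the "proper accuracy" arithmetic above; the function name is our own):

```python
import numpy as np

def add_label_noise(labels, frac, num_classes=10, seed=0):
    """Reassign random labels to a `frac` fraction of examples, as in the
    25%/50%/75%/100% noise variants described above."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(frac * len(labels)), replace=False)
    labels[idx] = rng.integers(0, num_classes, size=len(idx))
    return labels

y = np.arange(10).repeat(6000)       # stand-in for MNIST training labels
y_noisy = add_label_noise(y, 0.25)
print((y_noisy == y).mean())         # ~0.775, the "proper accuracy"
```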
We use a fully connected architecture instead of a convolutional one to mitigate concerns that some of the difference in generalization between the original MNIST and the noisy variants could stem from architectural inductive bias. We restrict ourselves to only 1 hidden layer to have the gradients be as well-behaved as possible. Finally, the network width, learning rate, and the number of training steps are chosen to ensure that exactly the same procedure is usually able to fit all 5 variants to 100% training accuracy.
2.2 QUALITATIVE PREDICTIONS
Before looking at the experimental results, it is useful to consider what Coherent Gradients can qualitatively say about this setup. In going from 0% label noise to 100% label noise, as per experiment design, we expect examples in the training set to become more dissimilar (no matter what the current notion of similarity is). Therefore, we expect the per-example gradients to be less aligned with each other. This in turn causes the overall gradient to become more diffuse, i.e., stronger directions become relatively weaker, and consequently, we expect it to take longer to reach a given level of accuracy as label noise increases, i.e., to have a lower realized learning rate.
This can be made more precise by considering the following heuristic argument. Let θt be the vector of trainable parameters of the network at training step t. Let L denote the loss function of the network (over all training examples). Let gt be the gradient of L at θt and let α denote the learning rate. By Taylor expansion, to first order, the change ∆Lt in the loss function due to a small gradient descent step ht = −α · gt is given by
$$\Delta L_t := L(\theta_t+h_t)-L(\theta_t)\approx\langle g_t,h_t\rangle=-\alpha\cdot\langle g_t,g_t\rangle=-\alpha\cdot\|g_t\|^2 \tag{1}$$
where $\|\cdot\|$ denotes the $\ell_2$-norm. Now, let $g_{te}$ denote the gradient of training example e at step t. Since the overall gradient is the sum of the per-example gradients, we have,
$$\|g_t\|^2=\langle g_t,g_t\rangle=\Big\langle\sum_e g_{te},\sum_e g_{te}\Big\rangle=\sum_{e,e'}\langle g_{te},g_{te'}\rangle=\sum_e\|g_{te}\|^2+\sum_{\substack{e,e'\\ e\ne e'}}\langle g_{te},g_{te'}\rangle \tag{2}$$
Now, heuristically, let us assume that all the $\|g_{te}\|$ are roughly the same and equal to $\|g_t^\circ\|$, which is not entirely unreasonable (at least at the start of training, if the network has no a priori reason to treat different examples very differently). If all the per-example gradients are approximately orthogonal (i.e., $\langle g_{te},g_{te'}\rangle\approx 0$ for $e\ne e'$), then $\|g_t\|^2\approx m\cdot\|g_t^\circ\|^2$ where m is the number of examples. On the other hand, if they are approximately the same (i.e., $\langle g_{te},g_{te'}\rangle\approx\|g_t^\circ\|^2$), then $\|g_t\|^2\approx m^2\cdot\|g_t^\circ\|^2$. Thus, we expect that the greater the agreement in per-example gradients, the faster the loss should decrease.
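The m vs. m² dichotomy is easy to verify numerically with synthetic gradients (a toy check, not network gradients):

```python
import numpy as np

rng = np.random.default_rng(5)
m, d = 1000, 5000
orth = rng.normal(size=(m, d)) / np.sqrt(d)   # m near-orthogonal unit vectors
same = np.tile(orth[0], (m, 1))               # m copies of a single gradient

print(np.linalg.norm(orth.sum(axis=0)) ** 2)  # ~ m   * ||g||^2  (about 1e3)
print(np.linalg.norm(same.sum(axis=0)) ** 2)  # ~ m^2 * ||g||^2  (about 1e6)
```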
Finally, for datasets that have significant fractions of both pristine and corrupt examples (i.e., the 25%, 50%, and 75% noise), we can make a more nuanced prediction. Since, in those datasets, the pristine examples as a group are still more similar to each other than the corrupt ones, we expect the pristine gradients to continue to align well and sum up to a strong gradient. Therefore, we expect them to be learned faster than the corrupt examples, and at a rate closer to the realized learning rate in the 0% label noise case. Likewise, we expect the realized learning rate on the corrupt examples to be closer to the 100% label noise case. Finally, as the proportion of pristine examples falls with increasing noise, we expect the realized learning rate for pristine examples to degrade.
Note that this provides an explanation for the observation in the literature that networks can learn even when examples with noisy labels greatly outnumber the clean examples, as long as the number of clean examples is sufficiently large (Rolnick et al., 2017), since with too few clean examples the pristine gradients are not strong enough to dominate.
2.3 AGREEMENT WITH EXPERIMENT
Figure 1(a) and (b) show the training and test curves for the baseline and the 4 variants. We note that for all 5 variants, at the end of training, we achieve 100% training accuracy but different amounts of generalization. As expected, SGD is able to fit random labels, yet when trained on real data, it generalizes well. Figure 1(c) shows the reduction in training loss over the course of training, and Figure 1(d) shows the fraction of pristine and corrupt labels learned as training progresses.
The results are in agreement with the qualitative predictions made above:
1. In general, as noise increases, the time taken to reach a given level of accuracy (i.e., realized learning rate) increases.
2. Pristine examples are learned faster than corrupt examples. They are learned at a rate closer to the 0% label noise rate whereas the corrupt examples are learned at a rate closer to the 100% label noise rate.
3. With fewer pristine examples, their learning rate reduces. This is most clearly seen in the first few steps of training by comparing say 0% noise with 25% noise.
Using Equation 1, note that the magnitude of the slope of the training loss curve is a good measure of the square of the l2-norm of the overall gradient. Therefore, from the loss curves of Figure 1(c), it is clear that in early training, the more the noise, the weaker the l2-norm of the gradient. If we assume that the per-example l2-norm is the same in all variants at start of training, then from Equation 2, it is clear that with greater noise, the gradients are more dissimilar.
Finally, we note that this experiment is an instance where early stopping (e.g., Caruana et al. (2000)) is effective. Coherent Gradients and the discussion in §2.2 provide some insight into this: strong gradients both generalize well (they are stable, since they are supported by many examples) and bring the training loss down quickly for those examples. Thus, early stopping maximizes the use of strong gradients and limits the impact of weak gradients. (The experiment in §3 discusses a different way to limit the impact of weak gradients and is an interesting point of comparison with early stopping.)
2.4 ANALYZING STRONG AND WEAK GRADIENTS
Within each noisy dataset, we expect the pristine examples to be more similar to each other and the corrupt ones to be less similar. In turn, based on the training curves (particularly, Figure 1(d)), during the initial part of training, this should mean that the gradients from the pristine examples are stronger than the gradients from the corrupt examples. We can study this effect via a different decomposition of the square of the $\ell_2$-norm of the gradient (or, equivalently up to a constant, the change in the loss function):
$$\langle g_t, g_t \rangle = \langle g_t, g_t^p + g_t^c \rangle = \langle g_t, g_t^p \rangle + \langle g_t, g_t^c \rangle$$
where $g_t^p$ and $g_t^c$ are the sums of the gradients of the pristine and corrupt examples, respectively. (We cannot decompose the overall norm into a sum of norms of pristine and corrupt due to the cross terms $\langle g_t^p, g_t^c \rangle$. With this decomposition, we attribute the cross terms equally to both.) Now, set $f_t^p = \langle g_t, g_t^p \rangle / \langle g_t, g_t \rangle$ and $f_t^c = \langle g_t, g_t^c \rangle / \langle g_t, g_t \rangle$. Thus, $f_t^p$ and $f_t^c$ represent the fractions of the loss reduction due to pristine and corrupt examples at each time step, respectively (and we have $f_t^p + f_t^c = 1$), and based on the foregoing, we expect the pristine fraction to be a larger fraction of the total when training starts and to diminish as training progresses and the pristine examples are fitted.
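These fractions are straightforward to compute once per-example gradients are in hand. Below is a minimal sketch with assumed array shapes (not the instrumentation actually used for the figures):

```python
import numpy as np

def loss_reduction_fractions(G, is_pristine):
    """G: (num_examples, num_params) per-example gradients at one step.
    is_pristine: boolean mask over examples (True = unchanged label).
    Returns (f_p, f_c), the fractions of <g_t, g_t> attributable to the
    pristine and corrupt groups; cross terms split equally, so f_p + f_c = 1."""
    g = G.sum(axis=0)                  # overall gradient g_t
    g_p = G[is_pristine].sum(axis=0)   # sum over pristine examples
    g_c = G[~is_pristine].sum(axis=0)  # sum over corrupt examples
    total = g @ g                      # <g_t, g_t>
    return (g @ g_p) / total, (g @ g_c) / total
```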
The first row of Figure 2 shows a plot of estimates of $f_t^p$ and $f_t^c$ for 25%, 50% and 75% noise. These quantities were estimated by recording a sample of 400 per-example gradients for 600 weights (300 from each layer) in the network. We see that for 25% and 50% label noise, $f_t^p$ initially starts off higher than $f_t^c$ and after a few steps they cross over. This happens because at that point all the pristine examples have been fitted, and for most of the rest of training the corrupt examples need to be fitted, so they largely contribute to the $\ell_2$-norm of the gradient (or, equivalently by Equation 1, to loss reduction). Only at the end, when the corrupt examples have also been fit, do the two curves reach parity. In the case of 75% noise, we see that the crossover does not happen, but there is a slight downward slope in the contribution from the pristine examples. We believe this is because of the sheer number of corrupt examples: even though the individual corrupt-example gradients are weak, their sum dominates.
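One simple (if slow) way to record such a sample of per-example gradients in PyTorch is to backpropagate one example at a time and read off the selected coordinates. This is only a sketch with assumed names (model, loss_fn), not necessarily the procedure used to produce Figure 2:

```python
import torch

def sample_per_example_grads(model, loss_fn, xs, ys, weight_idx):
    """Record per-example gradients for a few selected weights.
    xs, ys: tensors of inputs and labels; weight_idx: list of
    (parameter, flat_index) pairs identifying the sampled weights.
    Returns a (num_examples, num_sampled_weights) tensor."""
    rows = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        rows.append(torch.stack([p.grad.flatten()[i] for p, i in weight_idx]))
    return torch.stack(rows)
```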
To get a sense of the statistical significance of our hypothesis that the pristine and corrupt examples differ as groups, in the remaining rows of Figure 2 we construct a null world where there is no difference between pristine and corrupt. We do this by randomly permuting the “corrupt” and “pristine” designations among the examples (instead of using the actual designations) and replotting. Although the null pristine and corrupt curves are mirror images (as they must be, even in the null world, since each example receives one of the two designations), we note that for 25% and 50% they do not cross over as they do with the real data. This increases our confidence that the null may be rejected. The 75% case is weaker, but only the real data shows the slight downward slope in the pristine curve, which none of the nulls typically show. However, all the nulls do show corrupt above pristine, which increases our confidence that this is due to the significantly differing sizes of the two sets. (Note that this happens in reverse in the 25% case: pristine is always above corrupt, but they never cross over in the null worlds.)
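The null worlds amount to permuting the designations while keeping the group sizes fixed; a sketch reusing the hypothetical loss_reduction_fractions helper from above:

```python
import numpy as np

def null_world_fractions(G, is_pristine, num_nulls=10, seed=0):
    """Recompute (f_p, f_c) under random permutations of the pristine/corrupt
    designations. Group sizes are preserved, so any real difference between
    the two groups should vanish in these null worlds."""
    rng = np.random.default_rng(seed)
    return [loss_reduction_fractions(G, rng.permutation(is_pristine))
            for _ in range(num_nulls)]
```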
To get a stronger signal for the difference between pristine and corrupt in the 75% case, we can look at a different statistic that adjusts for the different sizes of the pristine and corrupt sets. Let $|p|$ and $|c|$ be the number of pristine and corrupt examples, respectively. Define

$$i_t^p := \frac{1}{|p|} \sum_{t'=0}^{t} \langle g_{t'}, g_{t'}^p \rangle \qquad \text{and} \qquad i_t^c := \frac{1}{|c|} \sum_{t'=0}^{t} \langle g_{t'}, g_{t'}^c \rangle$$
which represent, to first order and up to a scale factor ($\alpha$), the mean cumulative contribution of a pristine or corrupt example up until that point in training (since the total change in loss from the start of training to time $t$ is approximately the sum of the first-order changes in the loss at each time step).
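Given the recorded per-step inner products, these size-adjusted statistics reduce to normalized cumulative sums; a small sketch with assumed inputs:

```python
import numpy as np

def mean_cumulative_contributions(gp_dots, gc_dots, num_pristine, num_corrupt):
    """gp_dots[t] = <g_t, g_t^p> and gc_dots[t] = <g_t, g_t^c>, one per step.
    Returns (i_p, i_c): cumulative sums normalized by group size."""
    i_p = np.cumsum(np.asarray(gp_dots)) / num_pristine
    i_c = np.cumsum(np.asarray(gc_dots)) / num_corrupt
    return i_p, i_c
```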
The first row of Figure 3 shows $i_t^p$ and $i_t^c$ for the first 10 steps of training, where the difference between pristine and corrupt is the most pronounced. As before, to give a sense of statistical significance, the remaining rows show the same plots in null worlds where we randomly permute the pristine and corrupt designations of the examples. The results appear somewhat significant, but not overwhelmingly so. It would be interesting to redo this on the entire population of examples and trainable parameters instead of a small sample.
3 EFFECT OF SUPPRESSING WEAK GRADIENT DIRECTIONS
In the second test of the Coherent Gradients hypothesis, we change GD itself in a very specific (and, to our knowledge, novel) manner suggested by the theory. Our inspiration comes from random forests. As noted in the introduction, by building sufficiently deep trees, a random forest algorithm can achieve perfect training accuracy with random labels, yet generalize well when trained on real data. However, if we limit the tree construction algorithm to require a certain minimum number of examples in each leaf, then it no longer overfits. In the case of GD, we can do something similar by suppressing the weak gradient directions.
3.1 SETUP
Our baseline setup is the same as before (§2.1), but we add a new dimension by modifying SGD to update each parameter with a “winsorized” gradient, where we clip the most extreme values (outliers) among all the per-example gradients. Formally, let $g_{w,e}$ be the gradient for the trainable parameter $w$ on example $e$. The usual gradient computation for $w$ is
$$g_w = \sum_e g_{w,e}$$
Now let $c \in [0, 50]$ be a hyperparameter that controls the level of winsorization. Define $l_w$ to be the $c$-th percentile of $g_{w,e}$ taken over the examples. Similarly, let $u_w$ be the $(100 - c)$-th percentile. Now, compute the $c$-winsorized gradient for $w$ (denoted by $g_w^c$) as follows:
$$g_w^c := \sum_e \mathrm{clip}(g_{w,e},\, l_w,\, u_w)$$
The change to gradient descent is simply to use $g_w^c$ instead of $g_w$ when updating $w$ at each step.
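A minimal NumPy sketch of this per-parameter winsorization (illustrative only; as noted below, the percentiles are in practice estimated over the current minibatch):

```python
import numpy as np

def winsorized_gradient(G, c):
    """G: (num_examples, num_params) per-example gradients for one step.
    c: winsorization level in [0, 50], interpreted as a percentile.
    Returns the c-winsorized gradient (one entry per parameter)."""
    if c == 0:
        return G.sum(axis=0)                 # unmodified (S)GD
    l_w = np.percentile(G, c, axis=0)        # c-th percentile, per parameter
    u_w = np.percentile(G, 100 - c, axis=0)  # (100 - c)-th percentile
    return np.clip(G, l_w, u_w).sum(axis=0)  # clip outliers, then sum
```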
Note that although this is conceptually a simple change, it is computationally very expensive due to the need for per-example gradients. To reduce the computational cost, we only use the examples in the minibatch to compute $l_w$ and $u_w$. Furthermore, instead of using 1 hidden layer of 2048 ReLUs, we use a smaller network with 3 hidden layers of 256 ReLUs each, and train for 60,000 steps (i.e., 100 epochs) with a fixed learning rate of 0.1. We train on the baseline dataset and the 4 noisy variants with $c \in \{0, 1, 2, 4, 8\}$. Since we have 100 examples in each minibatch, the value of $c$ immediately tells us how many outliers are clipped in each minibatch. For example, $c = 2$ means the 2 largest and 2 lowest values of the per-example gradient are clipped (independently for each trainable parameter in the network), and $c = 0$ corresponds to unmodified SGD.
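For illustration, here is how the hypothetical winsorized_gradient helper above could plug into a minibatch update (assumed flat-parameter representation and names; a sketch, not the author's training loop):

```python
def winsorized_sgd_step(params, per_example_grads, c, lr=0.1):
    """One SGD step using the minibatch-estimated winsorized gradient.
    params: flat parameter vector; per_example_grads: (batch, num_params).
    With a batch of 100 and c = 2, the 2 most extreme values per coordinate
    are clipped before summing."""
    return params - lr * winsorized_gradient(per_example_grads, c)
```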
3.2 QUALITATIVE PREDICTIONS
If the Coherent Gradient hypothesis is right, then the strong gradients are responsible for making changes to the network that generalize well since they improve many examples simultaneously. On the other hand, the weak gradients lead to overfitting since they only improve a few examples. By winsorizing each coordinate, we suppress the most extreme values and thus ensure that a parameter is only updated in a manner that benefits multiple examples. Therefore:
• Since c controls which examples are considered extreme, the larger c is, the less we expect the network to overfit.
• But this also makes it harder for the network to fit the training data, and so we expect the training accuracy to fall as well.
• Winsorization will not completely eliminate the weak directions. For example, for small values of c we should still expect overfitting to happen over time, though at a reduced rate, since only the most egregious outliers are suppressed.
3.3 AGREEMENT WITH EXPERIMENT
The resulting training and test curves are shown in Figure 4. The columns correspond to different amounts of label noise and the rows to different amounts of winsorization. In addition to the training and test accuracies ($ta$ and $va$, respectively), we show the level of overfit, which is defined as $ta - [\epsilon \cdot \frac{1}{10} + (1 - \epsilon) \cdot va]$, where $\epsilon$ is the fraction of label noise, to account for the fact that the test labels are not randomized. We see that the experimental results are in agreement with the predictions above. In particular,
• For c > 1, training accuracies do not exceed the proper accuracy of the dataset, though they may fall short, especially for large values of c.
• The rate at which the overfit curve grows goes down with increasing c.
Additionally, we notice that with a large amount of winsorization, the training and test accuracies reach a maximum and then go down. Part of the reason is that as a result of winsorization, each step is no longer in a descent direction, i.e., this is no longer gradient descent.
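For concreteness, the overfit measure used above can be written as a small helper (a sketch; eps is the label-noise fraction and 1/10 is chance accuracy for the 10 MNIST classes):

```python
def overfit_level(train_acc, test_acc, eps, num_classes=10):
    """Overfit: training accuracy minus the accuracy achievable without
    memorizing corrupt labels (chance on the eps fraction of randomized
    labels, test accuracy on the rest)."""
    return train_acc - (eps * (1.0 / num_classes) + (1.0 - eps) * test_acc)
```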
4 DISCUSSION AND RELATED WORK
Although there has been a lot of work in recent years in trying to understand generalization in Deep Learning, no entirely satisfactory explanation has emerged so far.
There is a rich literature on aspects of the stochastic optimization problem such as the loss landscape and minima (e.g., Choromanska et al. (2015); Zhu et al. (2018)), the curvature around stationary points (e.g., Hochreiter & Schmidhuber (1997); Keskar et al. (2016); Dinh et al. (2017); Wu et al. (2018)), and the implications of stochasticity due to sampling in SGD (e.g., Simsekli et al. (2019)). However, we believe it should be possible to understand generalization without a detailed understanding of the optimization landscape. For example, since stopping early typically leads to a small generalization gap, the nature of the solutions of GD (e.g., stationary points, the limit cycles of SGD at equilibrium) cannot be solely responsible for generalization. In fact, from this observation, it would appear that an inductive argument for generalization would be more natural. Likewise, there is reason to believe that stochasticity is not fundamental to generalization (though it may help). For example, modifying the experiment in §2.1 to use full-batch gradient descent leads to similar qualitative generalization results. This is consistent with other small-scale studies (e.g., Figure 1 of Wu et al. (2018)), though we are not aware of any large-scale studies on full batch.
Our view of optimization is a simple, almost combinatorial, one: gradient descent is a greedy search with some hill-climbing thrown in (due to sampling in SGD and finite step size). Therefore, we worry less about the quality of solutions reached, but more about staying “feasible” at all times during the search. In our context, feasibility means being able to generalize; and this naturally leads us to look at the transition dynamics to see if that preserves generalizability.
Another approach to understanding generalization is to argue that gradient-based optimization induces a form of implicit regularization, leading to a bias towards models of low complexity. This is an extension of the classical approach where bounding a complexity measure leads to bounds on the generalization gap. As is well known, classical measures of complexity (also called capacity) do not work well. For example, sometimes adding more parameters to a net can help generalization (see, e.g., Lawrence et al. (1996); Neyshabur et al. (2018)) and, as we have seen, VC-Dimension
and Rademacher Complexity-based bounds must be vacuous since networks can memorize random labels and yet generalize on real data. This has led to a lot of recent work in identifying better measures of complexity such as spectrally-normalized margin (Bartlett et al., 2017), path-based group norm (Neyshabur et al., 2018), a compression-based approach (Arora et al., 2018), etc. However, to our knowledge, none of these measures is entirely satisfactory for accounting for generalization in practice. Please see Nagarajan & Kolter (2019) for an excellent discussion of the challenges.
We rely on a different classical notion to argue generalization: algorithmic stability (see Bousquet & Elisseeff (2002) for a historical overview). We provided only an informal argument in Section 1, but there has been prior work by Hardt et al. (2016) looking at GD and SGD through the lens of stability; however, their formal results do not explain generalization in practical settings (e.g., multiple epochs of training and non-convex objectives). In fact, such an attempt appears unlikely to work since our experimental results imply that any stability bounds for SGD that do not account for the actual training data must be vacuous! (This was also noted by Zhang et al. (2017).) That said, we believe stability is the right way to think about generalization in GD for a few reasons. First, by Shalev-Shwartz et al. (2010), stability, suitably formalized, is equivalent to generalization; therefore, in principle, any explanation of generalizability for a learning problem must—to borrow a term from category theory—factor through stability. Second, a stability-based analysis may be more amenable to taking the actual training data into account (perhaps by using a “stability accountant” similar to a privacy accountant), which appears necessary to get non-vacuous bounds for practical networks and datasets. Finally, as we have seen with the modification in §3, a stability-based approach is not just descriptive but prescriptive2 and can point the way to better learning algorithms.
Finally, we look at two relevant lines of work pointed out by a reviewer. First, Rahaman et al. (2019) compute the Fourier spectrum of ReLU networks and argue, based on heuristics and experiments, that these networks learn low-frequency functions first. In contrast, we focus not on the function learnt, but on the mechanism in GD that detects commonality. This leads to a perspective that is at once simpler and more general (e.g., it applies equally to networks with other activation functions, with attention, LSTMs, and discrete (combinatorial) inputs). Furthermore, it opens up a path to analyzing generalization via stability. It is not clear if Rahaman et al. (2019) claim a causal mechanism, but their analysis does not suggest an obvious intervention experiment such as ours in §3 to test causality. There are other experimental results that show biases towards linear functions (Nakkiran et al., 2019) and functions with low descriptive complexity (Valle-Perez et al., 2019), but these papers do not posit a causal mechanism. It is interesting to consider whether Coherent Gradients can provide a unified explanation for these observed biases.
Second, Fort et al. (2019) propose a descriptive statistic, stiffness, based on pairwise per-example gradients and show experimentally that it can be used to characterize generalization. Sankararaman et al. (2019) propose a very similar statistic called gradient confusion but use it to study the speed of training. Unlike our work, these papers do not propose causal mechanisms for generalization, but these statistics (which are different from those in §2.4) could be useful for the further study of Coherent Gradients.
5 DIRECTIONS FOR FUTURE WORK
Does the Coherent Gradients hypothesis hold in other settings such as BERT, ResNet, etc.? For that, we would need to develop more computationally efficient tests. Can we use the state of the network to explicitly characterize which examples are considered similar, and study how this evolves in the course of training? We expect non-parametric methods for similarity, such as those developed in Chatterjee & Mishchenko (2019), and their characterization of “easy” examples (i.e., examples learnt early, as per Arpit et al. (2017)) as those with many others like them, to be useful in this context.
Can Coherent Gradients explain adversarial initializations (Liu et al., 2019)? The adversarial initial state makes semantically similar examples purposefully look different. Therefore, during training, they continue to be treated differently (i.e., their gradients share less in common than they would if starting from a random initialization). Thus, fitting is more case-by-case and while it achieves good final training accuracy, it does not generalize.
2See https://www.offconvex.org/2017/12/08/generalization1/ for a nice discussion of the difference.
Can Coherent Gradients along with the Lottery Ticket Hypothesis (Frankle & Carbin, 2018) explain the observation in Neyshabur et al. (2018) that wider networks generalize better? By Lottery Ticket, wider networks provide more chances to find initial gradient directions that improve many examples, and by Coherent Gradients, these popular hypotheses are learned preferentially (faster).
Can we use the ideas behind Winsorized SGD from §3 to develop a computationally efficient learning algorithm with generalization (and even privacy) guarantees? How do winsorized gradients compare in practice to the algorithm proposed in Abadi et al. (2016) for privacy? Last, but not least, can we use the insights from this work to design learning algorithms that operate natively on discrete networks?
ACKNOWLEDGMENTS
I thank Alan Mishchenko, Shankar Krishnan, Piotr Zielinski, Chandramouli Kashyap, Sergey Ioffe, Michele Covell, and Jay Yagnik for helpful discussions. | 1. What is the main contribution of the paper regarding gradient coherence?
2. How does the reviewer assess the significance and originality of the paper's content?
3. What are the strengths of the paper, particularly in terms of experiment design and analysis?
4. Are there any concerns or limitations regarding the scalability of the proposed regularization technique?
5. How does the reviewer perceive the overall quality and impact of the paper? | Review | Review
This paper posits that similar input examples will have similar gradients, leading to a gradient "coherence" phenomenon. A simple argument then suggests that the loss should decrease much more rapidly when gradients cohere than when they do not. This hypothesis and analysis are supported by clever experiments that confirm some of the predictions of the theory. Furthermore, since, as the authors emphasize, their hypothesis is prescriptive, they are able to suggest a novel regularization technique and show that it is effective in a simple setting.
I find the coherent gradient hypothesis to be simple and reasonable. Furthermore, the paper is written very clearly, and as far as I know the main idea is original (although since it is a rather simple phenomenon, it's possible something similar could have appeared elsewhere in the literature). Perhaps more importantly, the associated experiments are very cleverly designed and are very supportive of the hypothesis. For instance, Figure 1 provides compelling evidence for the coherent gradient hypothesis and in particular motivates the way phenomenon of early stopping arises a natural consequence. Overall, the paper is of very high quality, and I recommend its acceptance.
One criticism perhaps is whether these results are sufficiently significant. On the one hand, most of the experiments were done on small network and dataset combinations -- and the proposed regularization scheme as is will not scale to practical problems of interest. On the other hand, I really feel like I learned something interesting about gradient descent from reading this paper and absorbing the experimental results -- which is often not something I can say given the large array of reported experimental results in this field. It's clear that the authors themselves are aware that it's of interest to extend their results to more realistic settings, and regardless I think that this paper stands alone as is and should be accepted to ICLR. |
1. What is the main contribution of the paper regarding neural networks and stochastic gradient descent?
2. What are the strengths of the proposed hypothesis, and how does it explain the generalization properties of neural networks?
3. How do the experimental results support the coherent gradient hypothesis?
4. Are there any potential limitations or assumptions in the paper's approach or conclusions?
5. How would the results change if the experiments were conducted using deterministic gradient descent instead of stochastic gradient descent?
6. Is there any impact of large batch sizes on the experimental outcomes? | Review | Review
Summary
The surprising generalization properties of neural networks trained with stochastic gradient descent are still poorly understood. The present work suggests that they can be explained at least partly by the fact that patterns shared across many data points will lead to gradients pointing in similar directions, thus reinforcing each other. Artefacts specific to small numbers of data points, however, will not have this property and thus have a substantially smaller impact on learning. Numerical experiments on MNIST with label noise indeed show that even though the neural network is able to perfectly fit even the flipped labels, the "pristine" labels are fitted much earlier during training. The authors also experiment with explicitly clipping "outlier gradients" and show that the resulting algorithm drastically reduces overfitting, thus further supporting the coherent gradient hypothesis.
Decision
The present work proposes a plausible, simple mechanism that might be contributing to the generalization of Neural Networks trained with gradient descent. Parts of the discussion stay informal as the authors themselves admit, but I appreciate that rather than providing mathematical decoration the authors focus on well-designed experiments that support their claims. Overall, the paper is of high quality and provides an interesting perspective on an important topic, which is why I think it should be accepted.
Questions for the authors
The coherent gradient hypothesis seems equally valid in the absence of stochasticity. However, the latter is often seen as an explanation of the generalization performance of SGD. My understanding is that you are also using minibatched gradient descent. Would you expect your experiments to still be valid when using deterministic gradient descent (full batch)? Did you study the effects of large batch sizes on the experiments? |
ICLR | Title
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization
Abstract
An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.
1 INTRODUCTION AND OVERVIEW
Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs. This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset S, on a modified version of S (call it S′) where the labels are randomized and observing that the training accuracy on S′ is very high, though, of course, the test accuracy is no better than chance (Zhang et al., 2017). This leads to an important open question in the Deep Learning community (Zhang et al. (2017); Arpit et al. (2017); Bartlett et al. (2017); Kawaguchi et al. (2017); Neyshabur et al. (2018); Arora et al. (2018); Belkin et al. (2019); Rahaman et al. (2019); Nagarajan & Kolter (2019), etc.): Among all maps that fit a real dataset, how does Gradient Descent (GD) find one that generalizes well? This is the question we address in this paper.
We start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees. However, there is no mystery with trees: A typical tree construction algorithm splits the training set recursively into similar subsets based on input features. If no similarity is found, eventually, each example is put into its own leaf to achieve good training accuracy (but, of course, at the cost of poor generalization). Thus, trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset (e.g. Chatterjee & Mishchenko (2019, Expt. 5)).
Is it possible that something similar happens with GD? We believe so. The type of randomized-label experiments described above show that if there are common patterns to be found, then GD finds them. If not, it fits each example on a case-by-case basis. The question then is, what is it about the dynamics of GD that makes it possible to extract common patterns from the data? And what does it mean for a pattern to be common?
Since the only change to the network parameters in GD comes from the gradients, the mechanism to detect commonality amongst examples must be through the gradients. We propose that this commonality detection can be explained as follows:
1. Gradients are coherent, i.e, similar examples (or parts of examples) have similar gradients (or similar components of gradients) and dissimilar examples have dissimilar gradients.
2. Since the overall gradient is the sum of the per-example gradients, it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up.
3. Since network parameters are updated proportionally to gradients, they change faster in the direction of stronger gradients.
4. Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few (or one example).
For convenience, we refer to this as the Coherent Gradients hypothesis.
It is instructive to work through the proposed mechanism in the context of a simple thought experiment. Consider a training set with two examples a and b. At some point in training, suppose the gradient of a, ga, can be decomposed into two orthogonal components ga1 and ga2 of roughly equal magnitude, i.e., there are two, equally good, independent ways in which the network can better fit a (by using say two disjoint parts of the network). Likewise, for b. Now, further suppose that one of the two ways is common to both a and b, i.e., say ga2 = gb2 = gab, whereas, the other two are example specific, i.e., 〈ga1 , gb1〉 = 0. Now, the overall gradient is
g = g_a + g_b = g_{a1} + 2 g_{ab} + g_{b1}.
Observe that the gradient is stronger in the direction that simultaneously helps both examples, and thus the corresponding parameter changes are bigger than those that benefit only one example.1
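The arithmetic of this thought experiment can be made concrete with a minimal numpy sketch (ours, not from the paper; the specific vectors are illustrative choices):

```python
import numpy as np

# Illustrative per-example gradients in R^3: e1 and e2 are example-specific
# directions, e3 is the direction shared by both examples.
g_a1 = np.array([1.0, 0.0, 0.0])  # specific to example a
g_b1 = np.array([0.0, 1.0, 0.0])  # specific to example b
g_ab = np.array([0.0, 0.0, 1.0])  # common to a and b

g_a = g_a1 + g_ab
g_b = g_b1 + g_ab
g = g_a + g_b

print(g)  # [1. 1. 2.]: the shared direction is twice as strong as either
          # example-specific direction, so a gradient step moves the
          # parameters fastest along what the two examples have in common.
```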
It is important to emphasize that the notion of similarity used above (i.e., which examples are considered similar) is not a constant but changes in the course of training as network parameters change. It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent. We say “mostly” because even with random initialization, examples that are syntactically close are treated similarly (e.g., two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other).
The relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability (Bousquet & Elisseeff, 2002): strong gradient directions are more stable since the presence or absence of a single example does not impact them as much, as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set. With this observation, we can reason inductively about the stability of GD: since the initial values of the parameters do not depend on the training data, the initial function mapping examples to their gradients is stable. Now, if all parameter updates are due to strong gradient directions, then stability is preserved. However, if some parameter updates are due to weak gradient directions, then stability is diminished. Since stability (suitably formalized) is equivalent to generalization (Shalev-Shwartz et al., 2010), this allows us to see how generalization may degrade as training progresses. Based on this insight, we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting.
In addition to providing insight into why GD generalizes in practice, we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature:
(a) Learning is slower with random labels than with real labels (Zhang et al., 2017; Arpit et al., 2017)
(b) Robustness to large amounts of label noise (Rolnick et al., 2017)
(c) Early stopping leads to better generalization (Caruana et al., 2000)
(d) Increasing capacity improves generalization (Caruana et al., 2000; Neyshabur et al., 2018)
(e) The existence of adversarial initialization schemes (Liu et al., 2019)
(f) GD detects common patterns even when trained with random labels (Chatterjee & Mishchenko, 2019)
1While the mechanism is easiest to see with full or large minibatches, we believe it holds even for small minibatches (though there one has to consider the bias in updates over time).
A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training. Our approach, therefore, is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory. As part of these experiments, we replicate the observations (a)–(c) in the literature noted above, and analyze the corresponding explanations provided by Coherent Gradients (§2), and outline for future work how (d)–(f) may be accounted for (§5). In this paper, we limit our study to simple baselines: vanilla Stochastic Gradient Descent (SGD) on MNIST using fully connected networks. We believe that this is a good starting point, since even in this simple setting, with all frills eliminated (e.g., inductive bias from architecture or explicit regularization, or a more sophisticated optimization procedure), we are challenged to find a satisfactory explanation of why SGD generalizes well. Furthermore, our prior is that the difference between weak and strong directions is small at any one step of training, and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier. It also has the benefit of having a smaller carbon footprint and being easier to reproduce. Finally, based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly.
2 EFFECT OF REDUCING SIMILARITY BETWEEN EXAMPLES
Our first test of the Coherent Gradients hypothesis is to see what happens when we reduce similarity between examples. Although, at any point during training, we do not know which examples are similar and which are different, we can (with high probability) reduce the similarity among training examples simply by injecting label noise. In other words, under any notion of similarity, adding label noise to a dataset that has clean labels is likely to make similar examples less similar. Note that this perturbation does not reduce coherence, since gradients still depend on the examples. (To break coherence, we would have to make the gradients independent of the training examples, which would require perturbing SGD itself and not just the dataset.)
2.1 SETUP
For our baseline, we use the standard MNIST dataset of 60,000 training examples and 10,000 test examples. Each example is a 28x28 pixel grayscale handwritten digit along with a label (‘0’–‘9’). We train a fully connected network on this dataset. The network has one hidden layer with 2048 ReLUs and an output layer with a 10-way softmax. We initialize it with Xavier and train using vanilla SGD (i.e., no momentum) using cross entropy loss with a constant learning rate of 0.1 and a minibatch size of 100 for 105 steps (i.e., about 170 epochs). We do not use any explicit regularizers.
We perturb the baseline by modifying only the dataset and keeping all other aspects of the architecture and learning algorithm fixed. The dataset is modified by adding various amounts of noise (25%, 50%, 75%, and 100%) to the labels of the training set (but not the test set). This noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels. Thus, when we add 25% label noise, we still expect about 75% + 0.1 * 25%, i.e., 77.5% of the examples to have unchanged (i.e. “correct”) labels which we call the proper accuracy of the modified dataset. In what follows, we call examples with unchanged labels, pristine, and the remaining, corrupt. Also, from this perspective, it is convenient to refer to the original MNIST dataset as having 0% label noise.
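A sketch of this corruption procedure (our own code, not the paper's; corrupted labels are re-drawn uniformly here, one plausible reading of "randomly permuting their labels", and it reproduces the proper-accuracy arithmetic above):

```python
import numpy as np

def add_label_noise(labels, noise_frac, num_classes=10, seed=0):
    """Re-draw the labels of a noise_frac fraction of examples uniformly at
    random. An example is 'pristine' iff its label is unchanged afterwards,
    so roughly noise_frac * (1 - 1/num_classes) of examples end up corrupt."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(noise_frac * len(labels)),
                     replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=len(idx))
    return noisy, noisy == labels  # (noisy labels, pristine mask)
```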
We use a fully connected architecture instead of a convolutional one to mitigate concerns that some of the difference in generalization between the original MNIST and the noisy variants could stem from architectural inductive bias. We restrict ourselves to only 1 hidden layer to have the gradients be as well-behaved as possible. Finally, the network width, learning rate, and the number of training steps are chosen to ensure that exactly the same procedure is usually able to fit all 5 variants to 100% training accuracy.
2.2 QUALITATIVE PREDICTIONS
Before looking at the experimental results, it is useful to consider what Coherent Gradients can qualitatively say about this setup. In going from 0% label noise to 100% label noise, as per experiment design, we expect examples in the training set to become more dissimilar (no matter what the current notion of similarity is). Therefore, we expect the per-example gradients to be less aligned with each other. This in turn causes the overall gradient to become more diffuse, i.e., stronger directions become relatively weaker, and consequently, we expect it to take longer to reach a given level of accuracy as label noise increases, i.e., to have a lower realized learning rate.
This can be made more precise by considering the following heuristic argument. Let θt be the vector of trainable parameters of the network at training step t. Let L denote the loss function of the network (over all training examples). Let gt be the gradient of L at θt and let α denote the learning rate. By Taylor expansion, to first order, the change ∆Lt in the loss function due to a small gradient descent step ht = −α · gt is given by
∆L_t := L(θ_t + h_t) − L(θ_t) ≈ ⟨g_t, h_t⟩ = −α · ⟨g_t, g_t⟩ = −α · ‖g_t‖²   (1)
where ‖ · ‖ denotes the l2-norm. Now, let gte denote the gradient of training example e at step t. Since the overall gradient is the sum of the per-example gradients, we have,
‖g_t‖² = ⟨g_t, g_t⟩ = ⟨∑_e g_{te}, ∑_e g_{te}⟩ = ∑_{e,e′} ⟨g_{te}, g_{te′}⟩ = ∑_e ‖g_{te}‖² + ∑_{e≠e′} ⟨g_{te}, g_{te′}⟩   (2)
Now, heuristically, let us assume that all the ‖g_{te}‖ are roughly the same and equal to ‖g°_t‖, which is not entirely unreasonable (at least at the start of training, if the network has no a priori reason to treat different examples very differently). If all the per-example gradients are approximately orthogonal (i.e., ⟨g_{te}, g_{te′}⟩ ≈ 0 for e ≠ e′), then ‖g_t‖² ≈ m · ‖g°_t‖², where m is the number of examples. On the other hand, if they are approximately the same (i.e., ⟨g_{te}, g_{te′}⟩ ≈ ‖g°_t‖²), then ‖g_t‖² ≈ m² · ‖g°_t‖². Thus, we expect that the greater the agreement in per-example gradients, the faster the loss should decrease.
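Both regimes of Equation 2 are easy to verify numerically with synthetic "gradients" (random vectors standing in for g_{te}; our illustration, not an experiment from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 100, 10000

# Dissimilar examples: i.i.d. random unit-scale vectors are nearly orthogonal.
G = rng.normal(size=(m, d)) / np.sqrt(d)          # each row has norm ~ 1
print(np.linalg.norm(G.sum(axis=0)) ** 2)          # ~ m

# Similar examples: all per-example gradients identical.
G = np.tile(rng.normal(size=d) / np.sqrt(d), (m, 1))
print(np.linalg.norm(G.sum(axis=0)) ** 2)          # ~ m**2
```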
Finally, for datasets that have significant fractions of both pristine and corrupt examples (i.e., the 25%, 50%, and 75% noise) we can make a more nuanced prediction. Since, in those datasets, the pristine examples as a group are still more similar than the corrupt ones, we expect the pristine gradients to continue to align well and sum up to a strong gradient. Therefore, we expect them to be learned faster than the corrupt examples, and at a rate closer to the realized learning rate in the 0% label noise case. Likewise, we expect the realized learning rate on the corrupt examples to be closer to the 100% label noise case. Finally, as the proportion of pristine examples falls with increasing noise, we expect the realized learning rate for pristine examples to degrade.
Note that this provides an explanation for the observation in the literature that networks can learn even when the number of examples with noisy labels greatly outnumbers the clean examples, as long as the number of clean examples is sufficiently large (Rolnick et al., 2017), since with too few clean examples the pristine gradients are not strong enough to dominate.
2.3 AGREEMENT WITH EXPERIMENT
Figure 1(a) and (b) show the training and test curves for the baseline and the 4 variants. We note that for all 5 variants, at the end of training, we achieve 100% training accuracy but different amounts of generalization. As expected, SGD is able to fit random labels, yet when trained on real data, generalizes well. Figure 1(c) shows the reduction in training loss over the course of training, and Figure 1(d) shows the fraction of pristine and corrupt labels learned as training processes.
The results are in agreement with the qualitative predictions made above:
1. In general, as noise increases, the time taken to reach a given level of accuracy (i.e., realized learning rate) increases.
2. Pristine examples are learned faster than corrupt examples. They are learned at a rate closer to the 0% label noise rate whereas the corrupt examples are learned at a rate closer to the 100% label noise rate.
3. With fewer pristine examples, their learning rate reduces. This is most clearly seen in the first few steps of training by comparing say 0% noise with 25% noise.
Using Equation 1, note that the magnitude of the slope of the training loss curve is a good measure of the square of the l2-norm of the overall gradient. Therefore, from the loss curves of Figure 1(c), it is clear that in early training, the more the noise, the weaker the l2-norm of the gradient. If we assume that the per-example l2-norm is the same in all variants at start of training, then from Equation 2, it is clear that with greater noise, the gradients are more dissimilar.
Finally, we note that this experiment is an instance where early stopping (e.g., Caruana et al. (2000)) is effective. Coherent gradients and the discussion in §2.2 provide some insight into this: Strong gradients both generalize well (they are stable since they are supported by many examples) and bring the training loss down quickly for those examples. Thus early stopping maximizes the use of strong gradients and limits the impact of weak gradients. (The experiment in §3 discusses a different way to limit the impact of weak gradients and is an interesting point of comparison with early stopping.)
2.4 ANALYZING STRONG AND WEAK GRADIENTS
Within each noisy dataset, we expect the pristine examples to be more similar to each other and the corrupt ones to be less similar. In turn, based on the training curves (particularly, Figure 1(d)), during the initial part of training this should mean that the gradients from the pristine examples are stronger than the gradients from the corrupt examples. We can study this effect via a different decomposition of the square of the ℓ2-norm of the gradient (or, equivalently up to a constant, the change in the loss function):
⟨g_t, g_t⟩ = ⟨g_t, g^p_t + g^c_t⟩ = ⟨g_t, g^p_t⟩ + ⟨g_t, g^c_t⟩
where g^p_t and g^c_t are the sums of the gradients of the pristine and corrupt examples respectively. (We cannot decompose the overall norm into a sum of norms of pristine and corrupt due to the cross terms ⟨g^p_t, g^c_t⟩. With this decomposition, we attribute the cross terms equally to both.) Now, set f^p_t = ⟨g_t, g^p_t⟩/⟨g_t, g_t⟩ and f^c_t = ⟨g_t, g^c_t⟩/⟨g_t, g_t⟩. Thus, f^p_t and f^c_t represent the fractions of the loss reduction due to pristine and corrupt examples at each time step respectively (and we have f^p_t + f^c_t = 1), and based on the foregoing, we expect the pristine fraction to be a larger fraction of the total when training starts and to diminish as training progresses and the pristine examples are fitted.
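Given a sampled matrix of per-example gradients and the pristine mask, these fractions can be estimated as follows (a sketch with our own variable names, not the paper's code):

```python
import numpy as np

def loss_reduction_fractions(per_example_grads, pristine):
    """per_example_grads: (num_examples, num_weights) array of sampled
    per-example gradients; pristine: boolean mask. Returns (f_p, f_c),
    the fractions <g, g^p>/<g, g> and <g, g^c>/<g, g> of the first-order
    loss reduction attributable to each group (cross terms split evenly)."""
    g = per_example_grads.sum(axis=0)
    g_p = per_example_grads[pristine].sum(axis=0)
    g_c = per_example_grads[~pristine].sum(axis=0)
    total = g @ g
    return (g @ g_p) / total, (g @ g_c) / total
```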
The first row of Figure 2 shows a plot of estimates of f^p_t and f^c_t for 25%, 50% and 75% noise. These quantities were estimated by recording a sample of 400 per-example gradients for 600 weights (300 from each layer) in the network. We see that for 25% and 50% label noise, f^p_t initially starts off higher than f^c_t and after a few steps they cross over. This happens because at that point all the pristine examples have been fitted and for most of the rest of training the corrupt examples need to be fitted, so they largely contribute to the ℓ2-norm of the gradient (or equivalently, by Equation 1, to loss reduction). Only at the end, when the corrupt examples have also been fit, do the two curves reach parity. In the case of 75% noise, we see that the crossover doesn't happen, but there is a slight slope downwards for the contribution from pristine examples. We believe this is because of the sheer number of corrupt examples, so even though the individual corrupt example gradients are weak, their sum dominates.
To get a sense of statistical significance in our hypothesis that there is a difference between the pristine and corrupt examples as a group, in the remaining rows of Figure 2 we construct a null world where there is no difference between pristine and corrupt. We do that by randomly permuting the “corrupt” and “pristine” designations among the examples (instead of using the actual designations) and replotting. Although the null pristine and corrupt curves are mirror images (as they must be, even in the null world, since each example is given one of the two designations), we note that for 25% and 50% they do not cross over as they do with the real data. This increases our confidence that the null may be rejected. The 75% case is weaker, but only the real data shows the slight downward slope in pristine, which none of the nulls typically show. However, all the nulls do show that corrupt is more than pristine, which increases our confidence that this is due to the significantly differing sizes of the two sets. (Note that this happens in reverse in the 25% case: pristine is always above corrupt, but they never cross over in the null worlds.)
To get a stronger signal for the difference between pristine and corrupt in the 75% case, we can look at a different statistic that adjusts for the different sizes of the pristine and corrupt sets. Let |p| and |c|
be the number of pristine and corrupt examples respectively. Define
i^p_t := (1/|p|) ∑_{t′=0}^{t} ⟨g_{t′}, g^p_{t′}⟩   and   i^c_t := (1/|c|) ∑_{t′=0}^{t} ⟨g_{t′}, g^c_{t′}⟩
which represent, to first order and up to a scale factor (α), the mean cumulative contribution of a pristine or corrupt example up until that point in training (since the total change in loss from the start of training to time t is approximately the sum of the first-order changes in the loss at each time step).
The first row of Figure 3 shows i^p_t and i^c_t for the first 10 steps of training, where the difference between pristine and corrupt is the most pronounced. As before, to give a sense of statistical significance, the remaining rows show the same plots in null worlds where we randomly permute the pristine or corrupt designations of the examples. The results appear somewhat significant but not overwhelmingly so. It would be interesting to redo this on the entire population of examples and trainable parameters instead of a small sample.
3 EFFECT OF SUPPRESSING WEAK GRADIENT DIRECTIONS
In the second test of the Coherent Gradients hypothesis, we change GD itself in a very specific (and to our knowledge, novel) manner suggested by the theory. Our inspiration comes from random forests. As noted in the introduction, by building sufficiently deep trees a random forest algorithm can get perfect training accuracy with random labels, yet generalize well when trained on real data. However, if we limit the tree construction algorithm to have a certain minimum number of examples in each leaf, then it no longer overfits. In the case of GD, we can do something similar by suppressing the weak gradient directions.
3.1 SETUP
Our baseline setup is the same as before (§2.1), but we add a new dimension by modifying SGD to update each parameter with a “winsorized” gradient where we clip the most extreme values (outliers) among all the per-example gradients. Formally, let g_{we} be the gradient for the trainable parameter w for example e. The usual gradient computation for w is
g_w = ∑_e g_{we}
Now let c ∈ [0, 50] be a hyperparameter that controls the level of winsorization. Define l_w to be the c-th percentile of g_{we} taken over the examples. Similarly, let u_w be the (100 − c)-th percentile. Now, compute the c-winsorized gradient for w (denoted by g^c_w) as follows:
g^c_w := ∑_e clip(g_{we}, l_w, u_w)
The change to gradient descent is to simply use g^c_w instead of g_w when updating w at each step.
Note that although this is conceptually a simple change, it is computationally very expensive due to the need for per-example gradients. To reduce the computational cost we only use the examples in the minibatch to compute lw and uw. Furthermore, instead of using 1 hidden layer of 2048 ReLUs, we use a smaller network with 3 hidden layers of 256 ReLUs each, and train for 60,000 steps (i.e., 100 epochs) with a fixed learning rate of 0.1. We train on the baseline dataset and the 4 noisy variants with c ∈ {0, 1, 2, 4, 8}. Since we have 100 examples in each minibatch, the value of c immediately tells us how many outliers are clipped in each minibatch. For example, c = 2 means the 2 largest and 2 lowest values of the per-example gradient are clipped (independently for each trainable parameter in the network), and c = 0 corresponds to unmodified SGD.
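A sketch of the c-winsorized update for one minibatch (numpy; np.percentile is used as an approximation to clipping exactly the c largest and c smallest per-example values, which nearly coincides for a batch of 100):

```python
import numpy as np

def winsorized_gradient(per_example_grads, c):
    """per_example_grads: (batch_size, num_params) array of g_we values.
    Clips each coordinate to its [c, 100 - c] percentile range over the
    minibatch and sums; c = 0 recovers the ordinary minibatch gradient."""
    lo = np.percentile(per_example_grads, c, axis=0)
    hi = np.percentile(per_example_grads, 100 - c, axis=0)
    return np.clip(per_example_grads, lo, hi).sum(axis=0)

# One SGD step with winsorization (params and grads are illustrative):
# params -= learning_rate * winsorized_gradient(per_example_grads, c=2)
```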
3.2 QUALITATIVE PREDICTIONS
If the Coherent Gradient hypothesis is right, then the strong gradients are responsible for making changes to the network that generalize well since they improve many examples simultaneously. On the other hand, the weak gradients lead to overfitting since they only improve a few examples. By winsorizing each coordinate, we suppress the most extreme values and thus ensure that a parameter is only updated in a manner that benefits multiple examples. Therefore:
• Since c controls which examples are considered extreme, the larger c is, the less we expect the network to overfit.
• But this also makes it harder for the network to fit the training data, and so we expect the training accuracy to fall as well.
• Winsorization will not completely eliminate the weak directions. For example, for small values of c we should still expect overfitting to happen over time, though at a reduced rate, since only the most egregious outliers are suppressed.
3.3 AGREEMENT WITH EXPERIMENT
The resulting training and test curves are shown in Figure 4. The columns correspond to different amounts of label noise and the rows to different amounts of winsorization. In addition to the training and test accuracies (ta and va, respectively), we show the level of overfit, which is defined as ta − [ε · (1/10) + (1 − ε) · va] (where ε is the fraction of label noise) to account for the fact that the test labels are not randomized. We see that the experimental results are in agreement with the predictions above. In particular,
• For c > 1, training accuracies do not exceed the proper accuracy of the dataset, though they may fall short, especially for large values of c.
• The rate at which the overfit curve grows goes down with increasing c.
Additionally, we notice that with a large amount of winsorization, the training and test accuracies reach a maximum and then go down. Part of the reason is that as a result of winsorization, each step is no longer in a descent direction, i.e., this is no longer gradient descent.
4 DISCUSSION AND RELATED WORK
Although there has been a lot of work in recent years in trying to understand generalization in Deep Learning, no entirely satisfactory explanation has emerged so far.
There is a rich literature on aspects of the stochastic optimization problem such as the loss landscape and minima (e.g., Choromanska et al. (2015); Zhu et al. (2018)), the curvature around stationary points (e.g., Hochreiter & Schmidhuber (1997); Keskar et al. (2016); Dinh et al. (2017); Wu et al. (2018)), and the implications of stochasticity due to sampling in SGD (e.g., Simsekli et al. (2019)). However, we believe it should be possible to understand generalization without a detailed understanding of the optimization landscape. For example, since stopping early typically leads to small generalization gap, the nature of the solutions of GD (e.g., stationary points, the limit cycles of SGD at equilibrium) cannot be solely responsible for generalization. In fact, from this observation, it would appear that an inductive argument for generalization would be more natural. Likewise, there is reason to believe that stochasticity is not fundamental to generalization (though it may help). For example, modifying the experiment in §2.1 to use full batch leads to similar qualitative generalization results. This is consistent with other small scale studies (e.g., Figure 1 of Wu et al. (2018)) though we are not aware of any large scale studies on full batch.
Our view of optimization is a simple, almost combinatorial, one: gradient descent is a greedy search with some hill-climbing thrown in (due to sampling in SGD and finite step size). Therefore, we worry less about the quality of solutions reached, but more about staying “feasible” at all times during the search. In our context, feasibility means being able to generalize; and this naturally leads us to look at the transition dynamics to see if that preserves generalizability.
Another approach to understanding generalization, is to argue that gradient-based optimization induces a form of implicit regularization leading to a bias towards models of low complexity. This is an extension of the classical approach where bounding a complexity measure leads to bounds on the generalization gap. As is well known, classical measures of complexity (also called capacity) do not work well. For example, sometimes adding more parameters to a net can help generalization (see for e.g. Lawrence et al. (1996); Neyshabur et al. (2018)) and, as we have seen, VC-Dimension
and Rademacher Complexity-based bounds must be vacuous since networks can memorize random labels and yet generalize on real data. This has led to a lot of recent work in identifying better measures of complexity such as spectrally-normalized margin (Bartlett et al., 2017), path-based group norm (Neyshabur et al., 2018), a compression-based approach (Arora et al., 2018), etc. However, to our knowledge, none of these measures is entirely satisfactory for accounting for generalization in practice. Please see Nagarajan & Kolter (2019) for an excellent discussion of the challenges.
We rely on a different classical notion to argue generalization: algorithmic stability (see Bousquet & Elisseeff (2002) for a historical overview). We have provided only an informal argument in Section 1, but there has been prior work by Hardt et al. (2016) in looking at GD and SGD through the lens of stability, though their formal results do not explain generalization in practical settings (e.g., multiple epochs of training and non-convex objectives). In fact, such an attempt appears unlikely to work since our experimental results imply that any stability bounds for SGD that do not account for the actual training data must be vacuous! (This was also noted by Zhang et al. (2017).) That said, we believe stability is the right way to think about generalization in GD for a few reasons. First, by Shalev-Shwartz et al. (2010), stability, suitably formalized, is equivalent to generalization; therefore, in principle, any explanation of generalizability for a learning problem must (to borrow a term from category theory) factor through stability. Second, a stability based analysis may be more amenable to taking the actual training data into account (perhaps by using a “stability accountant” similar to a privacy accountant), which appears necessary to get non-vacuous bounds for practical networks and datasets. Finally, as we have seen with the modification in §3, a stability based approach is not just descriptive but prescriptive2 and can point the way to better learning algorithms.
Finally, we look at two relevant lines of work pointed out by a reviewer. First, Rahaman et al. (2019) compute the Fourier spectrum of ReLU networks and argue based on heuristics and experiments that these networks learn low frequency functions first. In contrast, we focus not on the function learnt, but on the mechanism in GD to detect commonality. This leads to a perspective that is at once simpler and more general (e.g., it applies equally to networks with other activation functions, with attention, LSTMs, and discrete (combinatorial) inputs). Furthermore, it opens up a path to analyzing generalization via stability. It is not clear if Rahaman et al. (2019) claim a causal mechanism, but their analysis does not suggest an obvious intervention experiment such as ours of §3 to test causality. There are other experimental results that show biases towards linear functions (Nakkiran et al., 2019) and functions with low descriptive complexity (Valle-Perez et al., 2019), but these papers do not posit a causal mechanism. It is interesting to consider if Coherent Gradients can provide a unified explanation for these observed biases.
Second, Fort et al. (2019) propose a descriptive statistic stiffness based on pairwise per-example gradients and show experimentally that it can be used to characterize generalization. Sankararaman et al. (2019) propose a very similar statistic called gradient confusion but use it to study the speed of training. Unlike our work, these do not propose causal mechanisms for generalization, but these statistics (which are different from those in §2.4) could be useful for the further study of Coherent Gradients.
5 DIRECTIONS FOR FUTURE WORK
Does the Coherent Gradients hypothesis hold in other settings such as BERT, ResNet, etc.? For that we would need to develop more computationally efficient tests. Can we use the state of the network to explicitly characterize which examples are considered similar and study this evolution in the course of training? We expect non-parametric methods for similarity such as those developed in Chatterjee & Mishchenko (2019) and their characterization of “easy” examples (i.e., examples learnt early as per Arpit et al. (2017)) as those with many others like them, to be useful in this context.
Can Coherent Gradients explain adversarial initializations (Liu et al., 2019)? The adversarial initial state makes semantically similar examples purposefully look different. Therefore, during training, they continue to be treated differently (i.e., their gradients share less in common than they would if starting from a random initialization). Thus, fitting is more case-by-case and while it achieves good final training accuracy, it does not generalize.
2See https://www.offconvex.org/2017/12/08/generalization1/ for a nice discussion of the difference.
Can Coherent Gradients along with the Lottery Ticket Hypothesis (Frankle & Carbin, 2018) explain the observation in Neyshabur et al. (2018) that wider networks generalize better? By Lottery Ticket, wider networks provide more chances to find initial gradient directions that improve many examples, and by Coherent Gradients, these popular hypotheses are learned preferentially (faster).
Can we use the ideas behind Winsorized SGD from §3 to develop a computationally efficient learning algorithm with generalization (and even privacy) guarantees? How does winsorized gradients compare in practice to the algorithm proposed in Abadi et al. (2016) for privacy? Last, but not least, can we use the insights from this work to design learning algorithms that operate natively on discrete networks?
ACKNOWLEDGMENTS
I thank Alan Mishchenko, Shankar Krishnan, Piotr Zielinski, Chandramouli Kashyap, Sergey Ioffe, Michele Covell, and Jay Yagnik for helpful discussions. | 1. What is the main contribution of the paper regarding deep neural networks?
2. How does the "Coherent Gradients hypothesis" relate to prior work, such as spectral bias interpretation and gradient stiffness?
3. What are the strengths and weaknesses of the proposed "Gradient coherence" metric compared to similar metrics in previous works?
4. How do the experiments conducted in the paper support or not support the "Coherent Gradients hypothesis"?
5. What is the significance of the finding that SGD is not necessary for training large models, but rather for computational speed?
6. Are there any limitations or potential biases in the experimental design or data analysis? | Review | Review
The paper studies the link between the alignment of gradients computed on different examples and the generalization of deep neural networks. The paper tackles an important research question, is very clearly written, and proposes an insightful metric. In particular, through the lens of the metric it is possible to better understand the learning dynamics on random labels. However, the submission seems to have limited novelty, based on which I am leaning towards rejecting the paper.
Detailed comments
1. The prior and concurrent work is not discussed sufficiently:
a) The novelty of the "Coherent Gradients hypothesis" is not clear to me. First, the empirical fact that some examples are easier to learn than others in training of deep networks was the key focus of [5].
Hence, "Coherent Graident Hypothesis" should be mostly considered an explanation for why simple examples are/simple function are learned first. "Coherent Gradient Hypothesis" proposes that the key mechanism behind this phenomena is that simple examples/functions have co-aligned gradients and hence a larger "effective" learning rate. However, there are already quite convincing and closely related hypotheses. For example, the spectral bias interpretation of deep networks [2] and (2) suggests the same view actually. Just expressed in a different formalism, but can be also casted as having a higher effective learning rate for the strongest modes. Similarly, [3] proposes that SGD learns functions of increased complexity. A detailed comparison between these hypotheses is needed.
b) "Gradient coherence" metric is very closely related to Stiffness studied in [1] (01.2019 on arXiv). [1] studies the cosine (or sign) between gradients coming from different examples, and reach quite similar conclusions. It is also worth noting that [6, 7] propose and study a very similar metric as well. While arXiv submissions is not consider prior work, these three preprints should be discussed in detail in the submission.
c) It should also be remarked that the "Coherent Gradient hypothesis" is to some extent folk knowledge. In particular, it is quite well known, and has also been brought to the attention of the deep learning community, that in linear regression the strongest modes of the dataset are learned first when training using GD (see for instance [4]); causally speaking, this stems directly from gradient coherence, and these modes correspond to the largest eigenvalues of the (constant) Hessian. To make it more precise: consider that GD solving linear regression can be seen as having higher "effective" learning rates along the strongest modes in the dataset.
2. Experiments on random labels and restricting gradient norms are interesting. However, [5] should be cited. They experimented with the impact of regularization on memorization, which, due to the addition of noise, probably also suppresses weak gradients.
3. Experiments on MNIST do not feel adequate. While I do not doubt the validity of the experimental results, the paper should include results on another dataset, ideally from a domain other than vision.
4. Plots in Figure 4 are too small to read. I would recommend moving half of them to the Supplement?
5. "Understanding why solutions of the optimization problem on the training sample carry over to the population at large" - Not sure what do you mean here. Could you please clarify?
6. "Furthermore, while SGD is critical for computational speed, from our experiments and others (Keskar et al., 2016; Wu et al., 2017; Zhang et al., 2017) it appears not to be necessary.". Please note there is very little work on training with GD large models. Also, citing in this context Keskar is misleading. Wasn't the whole point of Keskar to show why large batch size training overfits? Finally, there are many papers on studying the role of learning rate and batch size in generalization (not computational speed). I think this sentence should be rewritten to clarify what is the experimental data that GD is "sufficient", and SGD is just needed for "computational speed".
References
[1] Stanislav Fort et al, Stiffness: A New Perspective on Generalization in Neural Networks, https://arxiv.org/abs/1901.09491
[2] Rahaman et al, On the Spectral Bias of Neural Networks, https://arxiv.org/abs/1806.08734
[3] Nakkiran et al, SGD on Neural Networks Learns Functions of Increasing Complexity, https://arxiv.org/abs/1905.11604
[4] Goh, Why Momentum Really Works, https://distill.pub/2017/momentum/
[5] Arpit et al, A Closer Look at Memorization in Deep Networks, https://arxiv.org/abs/1706.05394
[6] He and Su, The Local Elasticity of Neural Networks, https://arxiv.org/abs/1910.06943
[7] Sankararaman, The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent, https://arxiv.org/abs/1904.06963 |
ICLR | Title
A Stochastic Trust Region Method for Non-convex Minimization
Abstract
We target the problem of finding a local minimum in non-convex finite-sum minimization. Towards this goal, we first prove that the trust region method with inexact gradient and Hessian estimation can achieve a convergence rate of order O(1/k^{2/3}) as long as those differential estimations are sufficiently accurate. Combining this result with a novel Hessian estimator, we propose a sample-efficient stochastic trust region (STR) algorithm which finds an (ε, √ε)-approximate local minimum within Õ(√n/ε^{1.5}) stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor of O(n^{1/6}). Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify theoretical conclusions and the efficiency of the proposed algorithms.
1 INTRODUCTION
We consider the following finite-sum non-convex minimization problem
min_{x∈R^d} F(x) = (1/n) ∑_{i=1}^{n} f_i(x),   (1)
where each (non-convex) component function f_i : R^d → R is assumed to have an L1-Lipschitz continuous gradient and an L2-Lipschitz continuous Hessian. Since first-order stationary points could be saddle points with inferior generalization performance (Dauphin et al., 2014), in this work we are particularly interested in computing (ε, √ε)-approximate second-order stationary points (ε-SOSP):
‖∇F(x_ε)‖ ≤ ε   and   ∇²F(x_ε) ⪰ −√(εL2) I.   (2)
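When exact derivatives are available (small problems only, since it forms the full Hessian), condition (2) can be checked directly; a minimal sketch of ours:

```python
import numpy as np

def is_eps_sosp(grad, hess, eps, L2):
    """Check the (eps, sqrt(eps))-SOSP condition (2):
    ||grad|| <= eps and lambda_min(hess) >= -sqrt(eps * L2)."""
    lam_min = np.linalg.eigvalsh(hess)[0]  # smallest eigenvalue (symmetric)
    return np.linalg.norm(grad) <= eps and lam_min >= -np.sqrt(eps * L2)
```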
To find a local minimum of problem (1), the cubic regularization approach (Nesterov & Polyak, 2006) and the trust region algorithm (Conn et al., 2000; Curtis et al., 2017) are two classical methods. Specifically, cubic regularization forms a cubic surrogate function for the objective F(x) by adding a third-order regularization term to the second-order Taylor expansion, and minimizes it iteratively. Such a method is proved to achieve an O(1/k^{2/3}) global convergence rate and thus needs O(n/ε^{1.5}) stochastic first- and second-order oracle queries, namely the evaluation number of stochastic gradients and Hessians, to achieve a point that satisfies (2). On the other hand, trust region algorithms estimate the objective with its second-order Taylor expansion but minimize it only within a local region. Recently, Curtis et al. (2017) proposed a trust region variant that achieves the same convergence rate as the cubic regularization approach. But both methods require computing full gradients and Hessians of F(x) and thus suffer from high computational cost in large-scale problems.
To avoid costly exact differential evaluations, many works explore the finite-sum structure of problem (1) and develop stochastic cubic regularization approaches. Both Kohler & Lucchi (2017b) and Xu et al. (2017) propose to directly subsample the gradient and Hessian in the cubic surrogate function, and achieve Õ(1/ε^{3.5}) and Õ(1/ε^{2.5}) stochastic first- and second-order oracle complexities respectively. By plugging a stochastic variance reduced estimator (Johnson & Zhang, 2013) and the Hessian tracking technique (Gower et al., 2018) into the gradient and Hessian estimation, the approach in (Zhou et al., 2018a) improves both the stochastic first- and second-order oracle complexities to Õ(n^{0.8}/ε^{1.5}). Recently, Zhang et al. (2018) and Zhou et al. (2018b) develop more efficient stochastic cubic regularization variants, which further reduce the stochastic second-order oracle complexity to Õ(n^{2/3}/ε^{1.5}) at the cost of increasing the stochastic first-order oracle complexity to Õ(n^{2/3}/ε^{2.5}).
Contributions: In this paper we propose and exploit a formulation in which we take explicit control of the step size in the trust region method. This idea is leveraged to develop two efficient stochastic trust region (STR) approaches. We tailor our methods to achieve state-of-the-art oracle complexities under the following two measurements: (i) the stochastic second-order oracle complexity is prioritized; (ii) the stochastic first- and second-order oracle complexities are treated equally. Specifically, in Setting (i), our method STR1 employs a newly proposed estimator to approximate the Hessian and adopts the estimator in (Fang et al., 2018) for gradient approximation. Our novel Hessian estimator maintains an accurate second-order differential approximation with lower amortized oracle complexity. In this way, STR1 achieves Õ(min{1/ε², √n/ε^{1.5}}) stochastic second-order oracle complexity. This is lower than existing results for solving problem (1). In Setting (ii), our method STR2 substitutes the gradient estimator in STR1 with one that integrates stochastic gradient and Hessian together to maintain an accurate gradient approximation. As a result, STR2 achieves convergence in Õ(n^{3/4}/ε^{1.5}) overall stochastic first- and second-order oracle queries. Finally, based on STR, we further develop Hessian-free STR algorithms, namely STRfree and STRfree+, which outperform existing Hessian-free algorithms theoretically.
1.1 RELATED WORK
Computing a local minimum of a non-convex optimization problem has gained a considerable amount of attention in recent years. Both cubic regularization (CR) approaches (Nesterov & Polyak, 2006) and trust region (TR) algorithms (Conn et al., 2000; Curtis et al., 2017) can escape saddle points and find a local minimum by iterating the variable along the direction related to the eigenvector of the Hessian with the most negative eigenvalue. As CR heavily depends on the regularization parameter for the cubic term, Cartis et al. (2011) propose an adaptive cubic regularization (ARC) approach that boosts efficiency by adaptively tuning the regularization parameter according to the current objective decrease. Noting the high cost of full gradient and Hessian computation in ARC, sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a) was developed, which samples partial data points to estimate the full gradient and Hessian. Recently, by exploring the finite-sum structure of the target problem, many works incorporate the variance reduction technique (Johnson & Zhang, 2013) into CR and propose stochastic variance-reduced methods. For example, Zhou et al. (2018c) propose stochastic variance-reduced cubic (SVRC), in which they integrate the stochastic variance-reduced gradient estimator (Johnson & Zhang, 2013) and the Hessian tracking technique (Gower et al., 2018) with CR. Such a method is proved to be at least O(n^{1/5}) faster than CR and TR. Then Zhou et al. (2018b) use an adaptive gradient batch size and a constant Hessian batch size, and develop Lite-SVRC to further reduce the stochastic second-order oracle complexity Õ(n^{4/5}/ε^{1.5}) of SVRC to Õ(n^{2/3}/ε^{1.5}) at the cost of higher gradient computation cost. Similarly, besides tuning the gradient batch size, Zhang et al. (2018) further adaptively sample a certain number of data points to estimate the Hessian and prove the proposed method to have the same stochastic second-order oracle complexity as Lite-SVRC.
2 PRELIMINARY
Notation. We use ‖v‖ to denote the Euclidean norm of vector v and ‖A‖ to denote the spectral norm of matrix A. Let S be a set of component indices. We define the minibatch average of the component functions by f(x; S) := (1/|S|) ∑_{i∈S} f_i(x). Then we specify the assumptions that are necessary for the analysis of our methods.
MetaAlgorithm 1 Inexact Trust Region Method
Input: initial point x0, step size r, number of iterations K, construction of differential estimators gk and Hk
1: for k = 0 to K − 1 do
2: Compute hk and λk by solving (8);
3: xk+1 := xk + hk;
4: if λk ≤ 3√(ε/L2) then
5: Output xε := xk+1;
6: end if
7: end for
Assumption 2.1. F is bounded from below and its global optimum is achieved at x∗. We denote ∆ = F(x0) − F(x∗).
Assumption 2.2. Each f_i : R^d → R has an L1-Lipschitz continuous gradient: for any x, y ∈ R^d,
‖∇f_i(x) − ∇f_i(y)‖ ≤ L1‖x − y‖.   (3)
Assumption 2.3. Each f_i : R^d → R has an L2-Lipschitz continuous Hessian: for any x, y ∈ R^d,
‖∇²f_i(x) − ∇²f_i(y)‖ ≤ L2‖x − y‖.   (4)
2.1 TRUST REGION METHOD
Here we briefly introduce the trust region method (Conn et al., 2000). In each step, it first solves the Quadratically Constrained Quadratic Program (QCQP) defined as
hk := argmin_{h∈R^d, ‖h‖≤r} ⟨∇F(xk), h⟩ + (1/2)⟨∇²F(xk) h, h⟩,   (5)
where r is the trust-region radius. Then it updates the new variable as
xk+1 := xk + hk. (6)
Since ∇²F(xk) may be indefinite, the trust-region subproblem (5) is non-convex. But its global optimizer can be characterized by the following lemma (Corollary 7.2.2 in (Conn et al., 2000)).
Lemma 2.1. Any global minimizer of problem (5) satisfies the equation
(∇²F(xk) + λI) hk = −∇F(xk),   (7)
where the dual variable λ ≥ 0 should satisfy ∇²F(xk) + λI ⪰ 0 and λ(‖hk‖ − r) = 0.
In particular, the standard QCQP solver returns both the minimizer hk as well as the corresponding dual variable λ of subproblem (5). In the following section, we first prove that the deterministic trust-region update (5) and (6) converges at the rate of O(1/k^{2/3}), much sharper than the existing provable convergence rate O(1/√k) (Conn et al., 2000), and then develop a more efficient stochastic trust-region approach.
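For small dimensions, Lemma 2.1 yields a direct solver for the subproblem: eigendecompose the Hessian and bisect on the dual variable λ until the step hits the boundary. The sketch below is ours (large-scale implementations would use the Lanczos method instead, and the degenerate "hard case", where g is orthogonal to the bottom eigenspace, is not handled):

```python
import numpy as np

def trust_region_subproblem(g, H, r, iters=100):
    """Solve min_{||h|| <= r} <g, h> + 0.5 <H h, h> via Lemma 2.1:
    (H + lam I) h = -g with H + lam I psd and lam (||h|| - r) = 0.
    Returns (h, lam)."""
    w, V = np.linalg.eigh(H)       # ascending eigenvalues, H = V diag(w) V^T
    gt = V.T @ g

    def step_norm(lam):
        return np.linalg.norm(gt / (w + lam))

    if w[0] > 0 and step_norm(0.0) <= r:       # interior (Newton) solution
        return -V @ (gt / w), 0.0

    lo = max(0.0, -w[0])                       # lam must keep H + lam I psd
    hi = lo + np.linalg.norm(g) / r + 1e-12    # step_norm(hi) <= r by design
    for _ in range(iters):                     # step_norm decreases in lam
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if step_norm(lam) > r else (lo, lam)
    lam = hi
    return -V @ (gt / (w + lam)), lam
```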
3 METHODOLOGY
Here we first introduce a general inexact trust region method which is summarized in MetaAlgorithm 1. It accepts inexact gradient estimation gk and Hessian estimation Hk as input to the QCQP subproblem
hk := argmin_{h∈R^d, ‖h‖≤r} ⟨gk, h⟩ + (1/2)⟨Hk h, h⟩.   (8)
Similar to (5), Lemma 2.1 characterizes the global optimum of problem (8), which can be efficiently solved by the Lanczos method (Gould et al., 1999). Assume the dual variable of the minimizer hk is λk.
We prove that such an inexact trust region method achieves the optimal O(1/k^{2/3}) convergence rate when the estimators gk and Hk at each iteration are sufficiently close to their full (exact) counterparts ∇F(xk) and ∇²F(xk) respectively:
‖gk − ∇F(xk)‖ ≤ ε/6,   ‖Hk − ∇²F(xk)‖ ≤ √(εL2)/3.   (9)
Algorithm 2 STR1
Input: initial point x0, step size r, number of iterations K
1: for k = 1 to K do
2: Construct gradient estimator gk by Estimator 4;
3: Construct Hessian estimator Hk by Estimator 3;
4: Compute hk and λk by solving (8);
5: xk+1 := xk + hk;
6: if λk ≤ 3√(ε/L2) then
7: Output xε := xk+1;
8: end if
9: end for
Such a result allows us to derive stochastic trust-region variants with novel differential estimators that are tailored to ensure the optimal convergence rate. We state our formal results in Theorem 3.1, whose proof is deferred to Appendix B.1 due to the space limit.
Theorem 3.1 (Main Result). Consider problem (1) under Assumptions 2.1–2.3. If the differential estimators gk and Hk satisfy Eqn. (9) for all k, MetaAlgorithm 1 finds an O(ε)-SOSP in less than K = O(√L2 ∆/ε^{1.5}) iterations by setting the trust-region radius as r = √(ε/L2).
Remark 3.1. We emphasize that MetaAlgorithm 1 degenerates to the exact trust region method by taking gk = ∇F(xk) and Hk = ∇²F(xk). Such a result is of its own interest because this is the first proof to show that the vanilla trust region method has the optimal O(1/k^{2/3}) convergence rate. A similar rate is achieved by Curtis et al. (2017), but with a much more complicated trust region variant.
Theorem 3.1 shows the explicit step size control of the trust region method: Since the dual variable satisfies λk > 3√(ε/L2) > 0 for all but the last iteration, we always find the solution to the trust-region subproblem (8) on the boundary, i.e. ‖hk‖ = r, according to the complementary condition (15) in Appendix B.1. Such an exact step size control property is missing in the cubic-regularization method, where the step size is implicitly decided by the cubic regularization parameter.
More importantly, we emphasize that such explicit step size control is crucial to the sample efficiency of our variance reduced differential estimators. The essence of variance reduction is to exploit the correlations between the differentials in consecutive iterations. Intuitively, when two neighboring iterates are close, so are their differentials due to the Lipschitz continuity, and hence a smaller number of samples suffices to maintain the accuracy of the estimators. On the other hand, a smaller step size reduces the per-iteration objective decrease, which harms the convergence rate of the algorithm (see proof of Theorem 3.1). Therefore, the explicit step size control in the trust region method allows us to trade off the per-iteration sample complexity against the convergence rate, from which we can derive stochastic trust region approaches with state-of-the-art sample efficiency. In contrast, existing trust region methods change the step size at every iteration according to progress made, which requires loss evaluations that can be as expensive as gradient computations (e.g. the non-convex linear model in Section 7) and is thus prohibitive for large-scale problems.
4 STOCHASTIC TRUST REGION METHOD: TYPE I
Having the inexact trust region method as prototype, we now present our first sample-efficient stochastic trust region method, namely STR1, in Algorithm 2, which emphasizes cheaper stochastic second-order oracle complexity. As Theorem 3.1 already guarantees the optimal convergence rate of MetaAlgorithm 1 when the gradient estimator gk and the Hessian estimator Hk meet requirement (9), here we focus on constructing such novel differential estimators. Specifically, we first present our Hessian estimator in Estimator 3 and our first gradient estimator in Estimator 4, both of which exploit the trust region radius r = √(ε/L2) to reduce their variances.
4.1 HESSIAN ESTIMATOR
Our epoch-wise Hessian estimator Hk is given in Estimator 3, where p2 controls the epoch length and s2 (and optionally s′2) controls the minibatch size. At the beginning of each epoch, Estimator 3 has two options, designed for different target accuracies: Option I is preferable for the high accuracy case (ε < O(1/n)), where we compute the full Hessian to avoid approximation error, and Option II is designed for the moderate accuracy case (ε > O(1/n)), where we only need an approximate Hessian estimator. Then p2 iterations follow, with Hk defined in a recurrent manner.
Estimator 3 Hessian Estimator
Input: epoch length p2, sample size s2, s′2 (optional)
1: if mod(k, p2) = 0 then
2: Option I: high accuracy case (small ε)
3: Hk := ∇²F(xk);
4: Option II: low accuracy case (moderate ε)
5: Draw s′2 samples indexed by H′ and let Hk := ∇²f(xk; H′);
6: else
7: Draw s2 samples indexed by H and let Hk := ∇²f(xk; H) − ∇²f(xk−1; H) + Hk−1;
8: end if
Estimator 4 Gradient Estimator: Case (1)
1: if mod(k, p1) = 0 then
2: gk := ∇F(xk);
3: else
4: Draw s1 samples indexed by G and let gk := ∇f(xk; G) − ∇f(xk−1; G) + gk−1;
5: end if
These recurrent estimators exist for the first-order case (Nguyen et al., 2017; Fang et al., 2018), but their bound only holds under the vector ℓ2 norm. Here we generalize them to Hessian estimation with a matrix spectral norm bound.
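A sketch of one step of Estimator 3 with Option I (our code; hess_batch(x, idx) is a hypothetical oracle returning the average Hessian of the components indexed by idx at x):

```python
import numpy as np

def update_hessian_estimate(k, x_k, x_prev, H_prev, hess_batch, n, p2, s2, rng):
    """Estimator 3 with Option I: full Hessian at the start of each epoch of
    length p2, recurrent minibatch correction otherwise."""
    if k % p2 == 0:
        return hess_batch(x_k, np.arange(n))   # epoch start: exact Hessian
    idx = rng.choice(n, size=s2, replace=True)
    # The correction is small because Hessians are L2-Lipschitz and
    # consecutive iterates satisfy ||x_k - x_prev|| <= r = sqrt(eps / L2).
    return hess_batch(x_k, idx) - hess_batch(x_prev, idx) + H_prev
```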
The following lemma analyzes the amortized stochastic second-order oracle (Hessian) complexity for Estimator 3 to meet the requirement in Theorem 3.1. As we need an error bound under the spectral norm, we will appeal to the matrix Azuma inequality (Tropp, 2012). The proof is deferred to Appendix B.2.
Lemma 4.1. Assume Algorithm 2 takes the trust region radius r = √(ε/L2) as in Theorem 3.1. For any k ≥ 0, Estimator 3 produces an estimator Hk of the second-order differential ∇²F(xk) such that ‖Hk − ∇²F(xk)‖ ≤ √(εL2)/3 with probability at least 1 − δ/K0 if we set (1) p2 = √n and s2 = 32√n log(dK0/δ) in Option I, or (2) p2 = L1/(2√(εL2)), s′2 = (16L1²/(εL2)) log(dK0/δ), and s2 = (32L1/√(εL2)) log(dK0/δ) in Option II. Here K0 is a constant to be determined later. Consequently, the amortized per-iteration stochastic second-order oracle complexity to construct Hk is no more than 2s2 = min{64√n log(dK0/δ), (64L1/√(εL2)) log(dK0/δ)}.
4.2 GRADIENT ESTIMATOR: CASE (1)
When the stochastic second-order oracle complexity is prioritized, we directly employ the SPIDER gradient estimator to construct gk (Fang et al., 2018). Similar to the construction for Hk, the estimator gk is also constructed in an epoch-wise manner as presented in Estimator 4, where p1 controls the epoch length and s1 controls the minibatch size.
We now analyze the stochastic first-order oracle complexity to meet the requirement in Theorem 3.1.
Lemma 4.2. Assume Algorithm 2 takes the trust region radius r = √(ε/L2). Estimator 4 produces an estimator gk of the first-order differential ∇F(xk) such that ‖gk − ∇F(xk)‖ ≤ ε/6 with probability at least 1 − δ/K0 for any k ≥ 0, if we set p1 = max{1, √(nεL2/(cL1² log(K0/δ)))} and s1 = min{n, √(cnL1² log(K0/δ)/(εL2))}, where the constant c = 1152 and K0 is a constant to be determined later. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct gk is min{n, √(4cnL1² log(K0/δ)/(εL2))}.
The proof of Lemma 4.2 is similar to the one of Lemma 4.1 and is deferred to Appendix B.3. These two lemmas only guarantee that the differential estimators satisfy the requirement (9) in a single iteration and can be extended to hold for all k by using the union bound with K0 = 2K, where K denotes the number of iterations. Combining such lifted result with Theorem 3.1, we can establish the computational complexity bound as follows.
Algorithm 5 STR2
Input: initial point x0, step size r, number of iterations K
1: for k = 1 to K do
2:   Construct gradient estimator gk by Estimator 6;
3:   Construct Hessian estimator Hk by Estimator 3;
4:   Compute hk and λk by solving (8);
5:   xk+1 := xk + hk;
6:   if λk ≤ 3√(ε/L2) then
7:     Output xε = xk+1;
8:   end if
9: end for

Estimator 6 Gradient Estimator: Case (2)
1: if mod(k, p1) = 0 then
2:   Let x̃ := xk, gk := ∇F(x̃);
3: else
4:   Draw s1 samples indexed by G;
5:   gk := ∇f(xk; G) − ∇f(xk−1; G) + gk−1 + [∇²F(x̃) − ∇²f(x̃; G)](xk − xk−1);
6: end if
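As a side illustration of Estimator 6 (analyzed in Section 5.1 below), here is a hedged sketch of its Hessian-corrected recurrence. The helpers `grad_i` and `hess_i` and the epoch-wise full Hessian `H_full_tilde` at the reference point x̃ are assumptions for illustration, not part of the paper's code.

```python
import numpy as np

def corrected_gradient(x, x_prev, g_prev, x_tilde, H_full_tilde,
                       grad_i, hess_i, batch):
    """Estimator 6: SPIDER recurrence plus the correction c_k built from the
    Hessian mismatch at the reference point x_tilde, on the same minibatch."""
    g_diff = np.mean([grad_i(i, x) - grad_i(i, x_prev) for i in batch], axis=0)
    H_tilde_batch = np.mean([hess_i(i, x_tilde) for i in batch], axis=0)
    c_k = (H_full_tilde - H_tilde_batch) @ (x - x_prev)
    return g_diff + g_prev + c_k
```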
Corollary 4.1 (Computational Complexity of STR1). Assume Algorithm 2 uses Estimator 4 to construct the first-order differential estimator gk and Estimator 3 to construct the second-order differential estimator Hk. To find a 12ε-SOSP with probability at least 1 − δ, the overall stochastic first-order oracle complexity is O(min{n√(L2)∆/ε^{1.5}, (√n·L1/ε²) log(√(L2)∆/(δε^{1.5}))}) and the overall stochastic second-order oracle complexity is O(min{√n·√(L2)∆/ε^{1.5}, L1∆/ε²} log(d√(L2)∆/(δε^{1.5}))).

From Corollary 4.1 we see that Õ(min{√n/ε^{1.5}, 1/ε²}) stochastic second-order oracle queries suffice for STR1 to find an ε-SOSP, which is significantly better than both the subsampled cubic regularization method, Õ(1/ε^{2.5}) (Kohler & Lucchi, 2017a), and the variance-reduction based ones, Õ(n^{2/3}/ε^{1.5}) (Zhou et al., 2018b; Zhang et al., 2018). Recently, Zhou & Gu (2019) developed a stochastic recursive variance-reduced cubic regularization (SRVRC) method which finds an (ε, √ε)-approximate local minimum with Õ(min{n/ε^{1.5}, 1/ε³}) SFO and Õ(min{√n/ε^{1.5}, 1/ε²}) SSO. But the result of SRVRC needs to assume the stochastic gradient to be bounded, i.e., ‖∇fi(x) − ∇F(x)‖ ≤ σ. With this extra assumption, STR1 enjoys Õ(min{n/ε^{1.5}, √n/ε², 1/ε³}) SFO and Õ(min{√n/ε^{1.5}, 1/ε²}) SSO. Thus, if 1/ε ≤ n ≤ 1/ε², STR1 outperforms SRVRC; otherwise they have the same complexities.
5 STOCHASTIC TRUST REGION METHOD: TYPE II
In the above section, we focused on the setting where the stochastic second-order oracle complexity is prioritized over the stochastic first-order oracle complexity. In this setting, STR1 achieves state-of-the-art efficiency. In this section, we consider a different complexity measure where first-order and second-order oracle complexities are treated equally, and our goal is to minimize the maximum of them. We note that currently the best result is Õ(n^{4/5}/ε^{1.5}), achieved by the SVRC method (Zhou et al., 2018c). Since the Hessian estimator Hk of STR1 already delivers the superior Õ(√n/ε^{1.5}) stochastic Hessian complexity, in STR2 (see Algorithm 5) we retain Estimator 3 for second-order differential estimation and use Estimator 6 to further reduce the stochastic gradient complexity.
5.1 GRADIENT ESTIMATOR: CASE (2)
When stochastic gradient and Hessian complexities are equally important, we use the Hessian to improve the gradient estimation. Denote x(a) = a·xt + (1 − a)·x̃. From Assumption 2.3, we have
‖∇fi(xt) − ∇fi(x̃) − ∇²fi(x̃)(xt − x̃)‖ = ‖∫₀¹ [∇²fi(x(a)) − ∇²fi(x̃)](xt − x̃) da‖ ≤ (L2/2)‖xt − x̃‖².
Such property can be used to improve Lemma 4.2 of Estimator 4. Specifically, define the correction
ck = [∇2F (x̃)−∇2f(x̃;G)](xk − xk−1),
where x̃ is some reference point updated in an epoch-wise manner. Estimator 6 adds ck to the estimator in Estimator 4. Note that in Estimator 6, the first- and second-order oracle complexities are the same. We now analyze the first-order (and second-order) oracle complexity to meet requirement (9).
Lemma 5.1. Assume Algorithm 5 takes the trust region radius r = √(ε/L2) as in Theorem 3.1. For any k ≥ 0, Estimator 6 produces an estimator gk of the first-order differential ∇F(xk) such that ‖gk − ∇F(xk)‖ ≤ ε/6 with probability at least 1 − δ/K0, if we set p1 = n^{0.25} and s1 = n^{0.75}·c·log(K0/δ), where c = 1152 and K0 is a constant to be determined. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct gk is 2s1 = 2n^{0.75}·c·log(K0/δ).
The proof of Lemma 5.1 is similar to the one of Lemma 4.1 and is deferred to Appendix B.4. As in the previous section, Lemma 5.1 only guarantees that the gradient estimator satisfies requirement (9) in a single iteration. The result can be extended to hold for all k by using the union bound with K0 = 2K, which together with Theorem 3.1 gives the following corollary.
Corollary 5.1 (Computational Complexity of STR2). Algorithm 5 finds a 12ε-SOSP with probability at least 1 − δ, within O((n^{0.75}√(L2)∆/ε^{1.5}) log(√(L2)∆/(δε^{1.5}))) overall stochastic first-order oracle queries and O((n^{0.75}√(L2)∆/ε^{1.5}) log(d√(L2)∆/(δε^{1.5}))) overall stochastic second-order oracle queries.
Corollary 5.1 shows that to find an ε-SOSP, both the SFO and SSO complexities of STR2 are Õ(n^{3/4}/ε^{1.5}), which surpasses the best existing result Õ(n^{4/5}/ε^{1.5}) in (Zhou et al., 2018c).
6 PRACTICAL STOCHASTIC TRUST REGION VARIANTS
6.1 HANDLING INEXACT QCQP SOLUTIONS
One drawback of MetaAlgorithm 1 is that it requires the exact solution to the QCQP subproblem (8) and uses the dual variable as the stopping criterion. We address this problem by developing a practical variant, MetaAlgorithm 7, which admits inexact QCQP solutions without access to the dual variable. This algorithm repeatedly invokes a procedure called INEXACTTRWEAK, which, as we shall see, outputs an O(ε)-SOSP with a constant probability of 2/3 in O(1/ε^{1.5}) iterations. By repeatedly invoking INEXACTTRWEAK for Θ(log(1/δ)) times, MetaAlgorithm 7 boosts the probability to (1 − δ) for any desired δ. This repeating technique has been studied by, e.g., (Allen-Zhu & Li, 2018; Allen-Zhu, 2018b). To test whether the t-th run outputs an O(ε)-SOSP, we need to compute ‖∇F(xt)‖ and the smallest eigenvalue of ∇²F(xt). The latter can be approximated by solving the QCQP
vt := argmin_{‖v‖≤1} ψt(v), where ψt(v) = ⟨Htv, v⟩, (10)
where Ht is the full Hessian ∇²F(xt) or its estimate. One can show that MetaAlgorithm 7 finds an O(ε)-SOSP w.p. at least (1 − δ) in Õ(1/ε^{1.5}) iterations. We defer the detailed analysis to Appendix C.
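For illustration, a minimal sketch of this second-order test using only Hessian-vector products: the callable `hvp` (v ↦ Htv) is an assumption, and SciPy's Lanczos-based `eigsh` approximates the smallest eigenvalue in (10).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def smallest_eig(hvp, d):
    """Approximate the bottom eigenpair of H_t from Hessian-vector products."""
    H_op = LinearOperator((d, d), matvec=hvp)
    vals, vecs = eigsh(H_op, k=1, which='SA')   # smallest algebraic eigenvalue
    return vals[0], vecs[:, 0]

def passes_sosp_test(grad_norm, lam_min, eps, L2, c1=600, c2=500):
    """The stopping test in MetaAlgorithm 7 with its constants."""
    return grad_norm <= c1 * eps and lam_min >= -np.sqrt(c2 * eps * L2)
```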
6.2 HESSIAN-FREE IMPLEMENTATION
Based on MetaAlgorithm 7, we propose a Hessian-free method named STRfree, summarized in Algorithm 8. STRfree leverages the full/stochastic Hessian and Estimator 4 to construct Hk and gk, respectively. Besides, it uses the Lanczos method (Gould et al., 1999; Carmon & Duchi, 2018) as the QCQP solver, which can be implemented in a Hessian-free manner (i.e., using only Hessian-vector products without explicit Hessian matrix evaluations). Thus, Hk is only accessed through Hessian-vector products and is never explicitly constructed. Since Hessian-vector products can be computed in linear time (in terms of the dimension d) for many machine learning problems (Allen-Zhu, 2018b; Agarwal et al., 2017), Hessian-free methods are usually more practical than Hessian-based ones for high-dimensional problems. The following theorem, whose proof can be found in Appendix D, establishes the runtime complexity (i.e., the total complexity of stochastic gradient and Hessian-vector product evaluations (Zhou & Gu, 2019)) of STRfree.
MetaAlgorithm 7 Inexact Trust Region Method II
Input: initial point x0, step size r, number of inner iterations K, constants δ, ζ ∈ (0, 1), c1, c2, number of outer iterations T = Θ(log(1/δ)), sample size s (optional)
1: for t = 1 to T do
2:   xt ← INEXACTTRWEAK(x0, r, K, ζ);
3:   Option I: high accuracy case (small ε)
4:     Ht := ∇²F(xt);
5:   Option II: low accuracy case (moderate ε)
6:     Draw s samples indexed by H and let Ht := ∇²f(xt; H);
7:   Compute ṽt by solving (10) up to accuracy √(εL2) with probability 1 − δ/4;
8:   if ‖∇F(xt)‖ ≤ c1·ε and ψt(ṽt) ≥ −√(c2·εL2) then
9:     return xε := xt;
10:  end if
11: end for

12: procedure INEXACTTRWEAK(x0, r, K, ζ)
13:   for k = 0 to K − 1 do
14:     Compute gk and Hk such that (9) holds with probability 1 − ζ/(4K);
15:     Compute h̃k by solving (8) up to accuracy ε^{1.5}/√L2 with probability 1 − ζ/(4K);
16:     xk+1 := xk + h̃k;
17:   end for
18:   Randomly select k̄ from {0, . . . , K − 1};
19:   return xk̄+1;
20: end procedure

Algorithm 8 STRfree
1: In the same setting as MetaAlgorithm 7,
2:   construct gradient estimator gk by Estimator 4;
3:   construct Hessian estimator Hk by
4:     Option I: Hk := ∇²F(xk);
5:     Option II: Draw s samples indexed by H and let Hk := ∇²f(xk; H);
6:   use Lanczos method to solve QCQP subproblems.
Theorem 6.1. Consider Algorithm 8 for solving problem (1). Let ζ = 1/3, r = √(ε/L2), K = 4√(L2)∆/ε^{1.5}, T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. The number of iterations of the Lanczos method is set to Õ(1/(L2ε)^{0.25}). To find an O(ε)-SOSP w.p. at least 1 − δ, the runtime complexity is Õ(d·min{n/ε^{1.75}, 1/ε^{2.75} + √n/ε²}·log(1/δ)).
To solve the QCQP more efficiently, we develop a faster solver based on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaX^w (Allen-Zhu, 2018a); see the details in Appendix E. By replacing the Lanczos method with this solver in STRfree, we further improve the runtime complexity to Õ(d·min{n/ε^{1.5} + n^{0.75}/ε^{1.75}, 1/ε^{2.5} + √n/ε²}). We call this new algorithm STRfree+, whose details can be found in Appendix E. Table 2 shows that, in terms of runtime complexity, both STRfree and STRfree+ outperform existing methods. See more comparison and discussion in Appendix E.4.
7 EXPERIMENTS
Here we compare the proposed STR with several state-of-the-art (stochastic) cubic regularized algorithms and trust region approaches, including trust region (TR) algorithm (Conn et al., 2000), adaptive cubic regularization (ARC) (Cartis et al., 2011), sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a), stochastic variance-reduced cubic (SVRC) (Zhou et al., 2018c), Lite-SVRC (Zhou et al., 2018b), and SRVRC (Zhou & Gu, 2019). For STR, we estimate the gradient as the way in case (1). This is because such a method enjoys lower Hessian computational complexity over the way in case (2) and for most problems, computing their Hessian matrices is much more time-consuming than computing their gradients. For the subproblems in these compared methods, we use Lanczos
method (Gould et al., 1999; Kohler & Lucchi, 2017a) to solve the subproblems approximately in a Hessian-related Krylov subspace. We run simulations on seven datasets from LibSVM (a9a, ijcnn, codrna, phishing, w8a, epsilon and mnist). We run our algorithm for 40 epochs and use the output as the optimal value f∗ for sub-optimality estimation. Note that the output already has a very small gradient, as verified by Figures 2 and 4 in the appendix. For all the considered algorithms, we set their initializations to zero and tune their hyper-parameters optimally. For more experimental settings, e.g., details of the testing datasets and algorithm parameter settings, please refer to Appendix F.
Two non-convex evaluation problems. Following (Kohler & Lucchi, 2017a; Zhou et al., 2018c), we evaluate all considered algorithms on two learning tasks: logistic regression with a non-convex regularizer and nonlinear least squares. Given n data points (xi, yi), where xi ∈ R^d is the sample vector and yi ∈ {−1, 1} is the label, logistic regression with a non-convex regularizer aims at distinguishing the two classes by solving
min_w (1/n)·Σ_{i=1}^{n} log(1 + exp(−yi·w^T xi)) + λR(w; α),
where the non-convex regularizer R(w; α) is defined as R(w; α) = Σ_{i=1}^{d} αwi²/(1 + αwi²). The nonlinear least squares problem fits the nonlinear data by solving
min_w (1/(2n))·Σ_{i=1}^{n} [yi − φ(w^T xi)]² + λR(w; α).
For these two problems, we set the parameters λ = 10^{−3} and α = 10 for all testing datasets.
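For concreteness, a short NumPy sketch of the two objectives; λ and α follow the settings above, while the link φ in the nonlinear least squares is taken to be the sigmoid, which is our assumption since the text leaves φ unspecified.

```python
import numpy as np

def nonconvex_reg(w, alpha=10.0):
    return np.sum(alpha * w**2 / (1.0 + alpha * w**2))

def logistic_obj(w, X, y, lam=1e-3, alpha=10.0):
    """Logistic loss with the non-convex regularizer R(w; alpha)."""
    z = y * (X @ w)
    return np.mean(np.log1p(np.exp(-z))) + lam * nonconvex_reg(w, alpha)

def nls_obj(w, X, y, lam=1e-3, alpha=10.0):
    """Nonlinear least squares; the link phi is assumed to be the sigmoid."""
    phi = 1.0 / (1.0 + np.exp(-(X @ w)))
    return 0.5 * np.mean((y - phi)**2) + lam * nonconvex_reg(w, alpha)
```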
Comparison of Hessian based algorithms. Figure 1 summarizes testing results on the non-convex logistic regression problem. For each dataset, we report the function value gap v.s. the overall algorithm running time which can reflect the overall computational complexity of an algorithm, and also show the function value gap v.s. Hessian sample complexity which reveals the complexity of Hessian computation. From Figure 1, one can observe that our proposed STR algorithm runs faster than the compared algorithms in terms of the algorithm running time, showing the overall superiority of STR. Furthermore, STR also reveals much sharper convergence curves in terms of the Hessian
sample complexity, which is consistent with our theory. This is because to achieve an ε-accurate local minimum, the Hessian sample complexity of the proposed STR is Õ(n^{0.5}/ε^{1.5}), which is superior to the complexity of the compared methods (see the comparison in Sec. 4.2). Indeed, this also explains why our algorithm is faster in terms of running time, since for most optimization problems the Hessian is much more computationally expensive than the gradient, and thus a lower Hessian sample complexity means faster overall convergence. Note that, as all compared methods need to compute the Hessian and gradient, their memory complexities are all O(d² + d). Figure 2 displays results of the compared algorithms on the nonlinear least squares problem. STR shows very similar behavior as in Figure 1. Specifically, STR achieves the fastest convergence rate in terms of both running time and Hessian sample complexity. On the codrna dataset we further plot the gradient norm versus running time and Hessian sample complexity. One can observe that the gradient in STR vanishes significantly faster than in the other algorithms, which means that STR can find a stationary point with high efficiency. See Figure 4 in Appendix F.2 for more gradient norm comparisons. All these results confirm the superiority of the proposed STR.
Comparison of Hessian-free algorithms. Here we compare our proposed Hessian-free STR, namely STRfree, with other state-of-the-art Hessian-free algorithms on the two high-dimensional datasets, epsilon and mnist (see details in Appendix F). Here we do not compare STRfree+, as it is based on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaX^w (Allen-Zhu, 2018a), which require tuning many hyper-parameters. From the results in Figure 3, one can observe that, compared with other algorithms, our STRfree
achieves the best convergence speed, which demonstrates its high efficiency in realistic applications. Besides, one can also find that STRfree is much faster than the Hessian-based STR, since computing the full Hessian is much more computationally expensive than computing Hessian-vector products.
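As an illustration of why Hessian-vector products are cheap here, a sketch for the regularized logistic objective above: Hv = (1/n)·Xᵀ diag(s(1−s)) X v plus a diagonal term from the separable regularizer, so no d×d matrix is ever formed.

```python
import numpy as np

def hvp_logistic(w, X, y, v, lam=1e-3, alpha=10.0):
    """Linear-time Hessian-vector product for the regularized logistic loss."""
    z = y * (X @ w)                          # margins y_i <w, x_i>
    s = 1.0 / (1.0 + np.exp(z))              # sigma(-z_i)
    loss_part = X.T @ ((s * (1.0 - s)) * (X @ v)) / X.shape[0]
    # second derivative of alpha*w^2/(1+alpha*w^2), computed coordinate-wise
    reg_diag = 2.0 * alpha * (1.0 - 3.0 * alpha * w**2) / (1.0 + alpha * w**2)**3
    return loss_part + lam * reg_diag * v
```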
8 CONCLUSION
We proposed two stochastic trust region variants. Under two settings (depending on whether stochastic first- and second-order oracle complexities are treated equally), the proposed methods achieve state-of-the-art oracle complexities. We also propose Hessian-free variants with the lowest runtime complexity. Experimental results corroborate our theoretical implications and the efficiency of the proposed algorithms.
A APPENDIX
In this appendix, Sec. B first provides the proofs for the results in the manuscript. Then, we analyze MetaAlgorithm 7 and STRfree in Sec. C and Sec. D, respectively. Next, in Sec. E, we develop a fast QCQP solver to further improve the computational complexity of STRfree. Finally, more experimental details and results are presented in Sec. F.
B DEFERRED PROOFS
B.1 PROOF OF THEOREM 3.1
Proof. For simplicity of notation, we denote
∇k def = ∇F (xk)− gk and ∇2k def = ∇2F (xk)−Hk.
From Assumption 2.3 we have
F(xk+1) ≤ F(xk) + ⟨∇F(xk), hk⟩ + (1/2)⟨∇²F(xk)hk, hk⟩ + (L2/6)‖hk‖³
= F(xk) + ⟨∇k + gk, hk⟩ + (1/2)⟨[∇²k + Hk]hk, hk⟩ + (L2/6)‖hk‖³.
Using the Cauchy–Schwarz inequality, we obtain
F(xk+1) ≤ F(xk) + ⟨gk, hk⟩ + (1/2)⟨Hkhk, hk⟩ + (L2/6)‖hk‖³ + ‖∇k‖‖hk‖ + (1/2)‖∇²k‖‖hk‖². (11)
The requirement (9) together with the trust region radius ‖h‖ ≤ r = √(ε/L2) allows us to bound
‖∇k‖‖hk‖ + (1/2)‖∇²k‖‖hk‖² ≤ (1/3)·ε^{1.5}/√L2. (12)
The optimality of (8) indicates that there exists a dual variable λk ≥ 0 such that (Corollary 7.2.2 in (Conn et al., 2000))
First Order: gk + Hkhk + (λkL2/2)·hk = 0, (13)
Second Order: Hk + (λkL2/2)·I ⪰ 0, (14)
Complementary: λk·(‖hk‖ − r) = 0. (15)
Multiplying (13) by hk, we have
⟨gk + Hkhk + (λkL2/2)hk, hk⟩ = 0. (16)
Additionally, using (14) we have
⟨(Hk + (λkL2/2)·I)hk, hk⟩ ≥ 0,
which together with (16) gives
⟨gk, hk⟩ ≤ 0. (17)
Moreover, the complementary property (15) indicates ‖hk‖ = √(ε/L2), as we have λk > 3√(ε/L2) > 0 before MetaAlgorithm 1 terminates. Plug (12), (16), and (17) into (11) and use ‖hk‖ = √(ε/L2):
F(xk+1) ≤ F(xk) − (L2λk/4)·(ε/L2) + (1/2)·ε^{1.5}/√L2. (18)
Therefore, if we have λk > 3ε^{0.5}/√L2, then
F(xk+1) ≤ F(xk) − (1/(4√L2))·ε^{1.5}. (19)
Using Assumption 2.1, we find λk ≤ 3ε^{0.5}/√L2 in no more than 4√L2·(F(x0) − F(x∗))/ε^{1.5} iterations. We now show that once λk ≤ 3ε^{0.5}/√L2, then xk+1 is already an O(ε)-SOSP. From (13), we have
‖gk + Hkhk‖ = (L2λk/2)·‖hk‖ ≤ 2ε. (20)
The assumptions ‖∇k‖ ≤ ε/6 and ‖∇²k‖ ≤ √(εL2)/3 together with the trust region radius ‖h‖ ≤ √(ε/L2) imply
‖∇F(xk) + ∇²F(xk)hk‖ ≤ ‖gk + Hkhk‖ + ‖∇k‖ + ‖∇²k·hk‖ ≤ 2.5ε. (21)
On the other hand, use Assumption 2.3 to bound
‖∇F(xk+1) − ∇F(xk) − ∇²F(xk)hk‖ ≤ (L2/2)‖hk‖² ≤ ε/2.
Combining these two results gives ‖∇F(xk+1)‖ ≤ 3ε. Besides, using Assumption 2.3, ‖∇²k‖ ≤ √(εL2)/3, and (14), we derive the Hessian lower bound
∇²F(xk+1) ⪰ ∇²F(xk) − L2·‖hk‖·I ⪰ Hk − (√(εL2)/3)·I − L2‖hk‖·I ⪰ −√(12εL2)·I.
Hence xk+1 is a 12ε-SOSP. Additionally, we have ‖hk‖ = r according to the complementary condition (15) for all but the last iteration.
B.2 PROOF OF LEMMA 4.1
Proof. Without loss of generality, we analyze the case 0 ≤ k < p2 for ease of notation. We first focus on Option II; the proof for Option I follows a similar argument.
Option II: Define for k = 0 and i ∈ [s′2]
B0i := ∇²fi(x0) − ∇²F(x0),
and define for k ≥ 1 and i ∈ [s2]
Bki := ∇²fi(xk) − ∇²fi(xk−1) − (∇²F(xk) − ∇²F(xk−1)).
{Bki} is a martingale difference sequence: for all k and i,
E[Bki | xk] = 0.
Besides, we use Assumption 2.2 for k = 0 to bound
‖B0i‖ ≤ ‖∇²fi(x0)‖ + ‖∇²F(x0)‖ ≤ 2L1, (22)
and use Assumption 2.3 for k ≥ 1 to bound
‖Bki‖ ≤ ‖∇²fi(xk) − ∇²fi(xk−1)‖ + ‖∇²F(xk) − ∇²F(xk−1)‖ ≤ 2√(εL2).
From the construction of Hk, we have
Hk − ∇²F(xk) = Σ_{i=1}^{s′2} B0i/s′2 + Σ_{j=1}^{k} Σ_{i=1}^{s2} Bji/s2.
Thus, using the matrix Azuma inequality (Theorem 7.1 in (Tropp, 2012)) and k ≤ p2, we have
Pr{‖Hk − ∇²F(xk)‖ ≥ t} ≤ d·exp{−(t²/8) / (Σ_{i=1}^{s′2} 4L1²/s′2² + Σ_{j=1}^{k} Σ_{i=1}^{s2} 4εL2/s2²)}
≤ d·exp{−(t²/8) / (4L1²/s′2 + 4p2εL2/s2)}.
Consequently, we have
Pr{‖Hk − ∇²F(xk)‖ ≤ √(εL2)} ≥ 1 − δ/K0
by taking t = √(εL2), s′2 = 16L1²/(εL2)·log(dK0/δ), s2 = 32L1/√(εL2)·log(dK0/δ), and p2 = L1/(2√(εL2)).
Option I: The proof is similar to that of Option II except that we replace B0i with the zero matrix. In this case, the matrix Azuma inequality implies
Pr{‖Hk − ∇²F(xk)‖ ≥ t} ≤ d·exp{−(t²/8) / (Σ_{j=1}^{k} Σ_{i=1}^{s2} 4εL2/s2²)} ≤ d·exp{−(t²/8) / (4p2εL2/s2)}.
Thus by taking t = √(εL2), s2 = 32√n log(dK0/δ), and p2 = √n, we have the result.
Amortized Complexity: In Option I, the choice of parameters ensures that n ≤ p2 × s2, and in Option II that s′2 ≤ p2 × s2. Consequently, the amortized stochastic second-order oracle complexity is bounded from above by 2s2.
B.3 PROOF OF LEMMA 4.2
Without loss of generality, we analyze the case 0 ≤ k < p1 for ease of notation. Define for k ≥ 1 and i ∈ [s1]
aki := ∇fi(xk) − ∇fi(xk−1) − (∇F(xk) − ∇F(xk−1)).
{aki} is a martingale difference sequence: for all k and i,
E[aki | xk] = 0.
Besides, aki has bounded norm:
‖aki‖ ≤ ‖∇fi(xk) − ∇fi(xk−1)‖ + ‖∇F(xk) − ∇F(xk−1)‖ ≤ L1‖xk − xk−1‖ + L1‖xk − xk−1‖ ≤ 2L1√(ε/L2). (23)
From the construction of gk, we have
gk − ∇F(xk) = Σ_{j=1}^{k} Σ_{i=1}^{s1} aji/s1.
Recall Azuma's inequality. Using k ≤ p1, we have
Pr{‖gk − ∇F(xk)‖ ≥ t} ≤ exp{−(t²/8) / (Σ_{j=1}^{k} Σ_{i=1}^{s1} 4εL1²/(L2s1²))} ≤ exp{−(t²/8) / (4εL1²p1/(s1L2))}.
Take t = ε/6 and denote c = 1152. To ensure that
Pr{‖gk − ∇F(xk)‖ ≥ ε/6} ≤ δ/K0,
we need (cL1²/(εL2))·log(K0/δ) ≤ s1/p1. The best amortized stochastic first-order oracle complexity can be obtained by solving the following two-dimensional program:
min_{p1≥1, s1≥1} (n + s1(p1 − 1))/p1
s.t. (cL1²/(εL2))·log(K0/δ) ≤ s1/p1,
which has the solution s1 = min{n, √(n·cL1² log(K0/δ)/(εL2))} and p1 = max{1, √(n·εL2/(cL1² log(K0/δ)))}. Note that when we take s1 = n, we directly compute gk = ∇F(xk) without sampling. The amortized stochastic first-order oracle complexity is obtained by plugging in the choice of s1 and p1, which completes the proof.
B.4 PROOF OF LEMMA 5.1
Without loss of generality, we analyze the case 0 ≤ k < p1 for ease of notation. Define for k ≥ 1 and i ∈ [s1]
bki := ∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1) − [∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)].
{bki} is a martingale difference sequence: for all k and i, E[bki | xk] = 0. Besides, bki has bounded norm:
‖bki‖ ≤ ‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖ + ‖∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)‖.
We can bound ‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖ as follows:
‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖
= ‖∫₀¹ [∇²fi(xk−1 + t(xk − xk−1)) − ∇²fi(x̃)](xk − xk−1) dt‖
≤ ∫₀¹ L2·‖t·xk + (1 − t)·xk−1 − x̃‖ dt · ‖xk − xk−1‖
≤ ∫₀¹ (t‖xk − x̃‖ + (1 − t)‖xk−1 − x̃‖) dt · L2r
≤ L2kr²,
where the first inequality follows from Assumption 2.3 and the last inequality holds because ‖xk − x̃‖ ≤ kr and ‖xk−1 − x̃‖ ≤ kr, where r is the trust region radius. Similarly, we have ‖∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)‖ ≤ L2kr². Thus, we bound
‖bki‖ ≤ 2L2kr² ≤ 2p1ε.
From the construction of gk, we have
gk − ∇F(xk) = Σ_{j=1}^{k} Σ_{i=1}^{s1} bji/s1.
We use k ≤ p1 and Azuma's inequality to bound
Pr{‖gk − ∇F(xk)‖ ≥ t} ≤ exp{−(t²/8) / (Σ_{j=1}^{k} Σ_{i=1}^{s1} 4p1²ε²/s1²)} ≤ exp{−(t²/8) / (4ε²p1³/s1)}.
Thus, by taking t = ε/6 and c = 1152, we need s1/p1³ ≥ c·log(K0/δ). Further, we want s1·p1 ≈ O(n), and hence we take p1 = n^{0.25} and s1 = n^{0.75}·c·log(K0/δ). The amortized stochastic first-order oracle complexity is bounded by 2s1.
C ANALYSIS OF METAALGORITHM 7
We first show that INEXACTTRWEAK finds an O(ε)-SOSP in O(1/ε^{1.5}) iterations with probability at least 2/3, as stated in the following lemma.
Lemma C.1. Consider problem (1) under Assumptions 2.1-2.3. Suppose that the differential estimators gk and Hk satisfy Eqn. (9) with probability at least (1 − ζ/(4K)). Besides, suppose that h̃k is an approximate solution to (8) such that w.p. (1 − ζ/(4K)),
⟨gk, h̃k⟩ + (1/2)⟨Hkh̃k, h̃k⟩ ≤ ⟨gk, hk⟩ + (1/2)⟨Hkhk, hk⟩ + ε^{1.5}/√L2, (24)
where hk is a global solution to (8). By setting ζ = 1/3, r = √(ε/L2), and K = 4√(L2)∆/ε^{1.5}, INEXACTTRWEAK outputs a 500ε-SOSP w.p. at least 2/3.
Proof. Combining (11) and (24), we have w.p. (1 − ζ/(4K)),
F(xk+1) ≤ F(xk) + ⟨gk, hk⟩ + (1/2)⟨Hkhk, hk⟩ + ε^{1.5}/√L2 + (L2/6)‖h̃k‖³ + ‖∇k‖‖h̃k‖ + (1/2)‖∇²k‖‖h̃k‖², (25)
where hk is a global solution to the QCQP (8) and h̃k is an approximate solution satisfying (24). We let λk denote the dual variable corresponding to the global solution hk as defined in Lemma 2.1. We note that hk and λk are used only in our analysis. The INEXACTTRWEAK procedure only requires the approximate solution h̃k without knowledge of hk or λk.
By the assumption that (9) holds with probability (1 − ζ/(4K)) and the fact that ‖h̃k‖ ≤ r = √(ε/L2), we have w.p. (1 − ζ/(4K)),
(L2/6)‖h̃k‖³ + ‖∇k‖‖h̃k‖ + (1/2)‖∇²k‖‖h̃k‖² ≤ ε^{1.5}/(2√L2). (26)
Plugging (16), (17), and (26) into (25) and applying the union bound, we have w.p. at least (1 − ζ/(2K)),
F(xk+1) ≤ F(xk) − (L2λk‖hk‖²)/4 + 3ε^{1.5}/(2√L2) = F(xk) − (L2λkr²)/4 + 3ε^{1.5}/(2√L2), (27)
where the second equality follows from (15):
0 = λk(‖hk‖ − r) = λk(‖hk‖ − r)(‖hk‖ + r) = λk(‖hk‖² − r²). (28)
Summing inequality (27) from k = 0 to K − 1 and applying the union bound, we have w.p. at least (1 − ζ/2),
(1/K)·Σ_{k=0}^{K−1} λk ≤ 4(F(x0) − F(xK))/(L2r²K) + 6ε^{1.5}/(L2^{1.5}r²) ≤ 4∆/(εK) + 6√(ε/L2), (29)
where the second inequality follows from Assumption 2.1 and our choice of the trust region radius.
By sampling k̄ uniformly from {0, . . . , K − 1}, we obtain
E[λk̄] = (1/K)·Σ_{k=0}^{K−1} λk, (30)
where the expectation is taken over the randomness of k̄. Combining (29) and (30) and taking K = 4∆√L2/ε^{1.5}, we have w.p. at least (1 − ζ/2)
E[λk̄] ≤ 7√(ε/L2). (31)
Since λk is always non-negative, by Markov's inequality and the union bound, with probability at least 1 − ζ we have
λk̄ ≤ 14√ε/(ζ√L2). (32)
By taking ζ = 1/3, we have w.p. at least 2/3, λk̄ ≤ 42√(ε/L2). The rest of the proof is similar to Theorem 3.1, and we have the result.
The following theorem shows that MetaAlgorithm 7 finds an O(ε)-SOSP w.p. (1 − δ) after running INEXACTTRWEAK for Θ(log(1/δ)) times.
Theorem C.1 (Iteration Complexity of MetaAlgorithm 7). In the same setting as Lemma C.1, let T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2))·log(4d/δ). Then MetaAlgorithm 7 finds a 600ε-SOSP with probability at least (1 − δ).
Proof. By Lemma C.1 and our choice of T, with probability (1 − δ/2), at least one of the xt is a 500ε-SOSP. On the other hand, since ψt(ṽt) ≤ ψt(vt) + √(εL2) with probability 1 − δ/4, if ψt(ṽt) ≥ −√(c2εL2), then, w.p. 1 − δ/4,
ψt(vt) ≥ ψt(ṽt) − √(εL2) ≥ −√(c2εL2) − √(εL2) ≥ −√(550εL2), (33)
where the last inequality follows from our choice of c2.
Option I: Since Ht = ∇²F(xt) is the full Hessian, ψt(vt) is the smallest eigenvalue of ∇²F(xt). Applying the union bound, we conclude that MetaAlgorithm 7 outputs a 600ε-SOSP w.p. (1 − δ).
Option II: Let Bi := ∇²fi(xt) − ∇²F(xt) for i ∈ H; then
Ht − ∇²F(xt) = (1/s)·Σ_{i=1}^{s} Bi. (34)
By Assumption 2.2, we have
‖Bi‖ ≤ ‖∇²fi(xt)‖ + ‖∇²F(xt)‖ ≤ 2L1. (35)
Applying the matrix Azuma inequality (Theorem 7.1 in Tropp (2012)) leads to
Pr{‖Ht − ∇²F(xt)‖ ≥ √(εL2)} ≤ d·exp(−εL2s/(32L1²)). (36)
By taking s = (32L1²/(εL2))·log(4d/δ) and applying the union bound, we have with probability 1 − δ,
∇²F(xt) ⪰ Ht − √(εL2)·I ⪰ (ψt(vt) − √(εL2))·I ⪰ −√(600εL2)·I, (37)
where the last inequality follows from (33). This completes the proof.
D PROOF OF THEOREM 6.1
Proof. We first analyze the computational cost of the Lanczos method. By Corollary 2 in (Carmon & Duchi, 2018), for any desired accuracy ε̃, the Lanczos method achieves this accuracy in O((r/√ε̃)·log(r√d/(√ε̃·p))) Lanczos iterations w.p. at least (1 − p). Without loss of generality, we assume that the number of Lanczos iterations is strictly smaller than the dimension d; otherwise the QCQP subproblem can be solved exactly. We note that each Lanczos iteration involves the computation of one matrix-vector product. Therefore, to satisfy condition (24) in Lemma C.1, one needs to evaluate Õ(1/(L2ε)^{0.25}) Hessian-vector products of the form Hkv. Similarly, to solve (10) up to accuracy √(εL2) w.h.p., one needs to evaluate Õ(1/(L2ε)^{0.25}) Hessian-vector products of the form Htv. In MetaAlgorithm 7, to verify whether the candidate solution vt is indeed an O(ε)-SOSP, one needs at most O(n) stochastic gradient evaluations and O(min{n, log(4d/δ)·L1²/(L2ε)}·(1/(L2ε)^{0.25})) stochastic Hessian-vector product evaluations, where the latter follows from the proof of Theorem C.1. We proceed to analyze the computational complexity of the INEXACTTRWEAK procedure. Recall that the iteration complexity of MetaAlgorithm 7 is O(log(1/δ)/ε^{1.5}). Following Lemma 4.2 and Corollary 4.1, the stochastic first-order oracle complexity is Õ(min{n/ε^{1.5}, √n/ε²}·log(1/δ)). Following the proof of Lemma 4.1 and Corollary 4.1, when p2 = 1, the overall stochastic Hessian sample complexity is Õ(min{n/ε^{1.5}, 1/ε^{2.5}}·log(1/δ)). Since it takes Õ(1/ε^{0.25}) Lanczos iterations to meet condition (24) as stated above, the overall stochastic Hessian-vector product oracle complexity is Õ(min{n/ε^{1.75}, 1/ε^{2.75}}·log(1/δ)). Combining the stochastic first-order and Hessian-vector product complexities, the overall runtime is Õ(d·min{n/ε^{1.75}, 1/ε^{2.75} + √n/ε²}·log(1/δ)).
E A FASTER HESSIAN-VECTOR BASED QCQP SOLVER
We recall from the previous section that, to approximately solve a quadratic subproblem in STRfree, the Lanczos method requires Õ(min{n/ε^{0.25}, 1/ε^{1.25}}) stochastic Hessian-vector product evaluations. In this section, we propose a faster QCQP solver with an Õ(min{n + n^{0.75}/ε^{0.25}, 1/ε}) complexity. Replacing the Lanczos method with this QCQP solver in STRfree results in a faster Hessian-free method, which we refer to as STRfree+.
E.1 CONVEX REFORMULATION OF QCQP
To begin with, we present a known result that is key to achieving a faster algorithm than the Lanczos method. We summarize this result in Lemma E.1, which shows that the trust region subproblem is equivalent to a convex QCQP.
Lemma E.1 (Convex Reformulation of QCQP (Flippo & Jansen, 1996; Wang & Xia, 2017)). Denote λmin as the smallest eigenvalue of Hk. Let umin be a corresponding eigenvector. W.l.o.g., we assume that ⟨gk, umin⟩ ≤ 0. Let µ = min{λmin, 0}. Then the QCQP (8) is equivalent to the convex problem
min_{h∈R^d, ‖h‖≤r} qk(h) = ⟨gk, h⟩ + (1/2)⟨(Hk − µI)h, h⟩ + (1/2)µr² (38)
in the sense that (8) and (38) have the same minimum function value. Moreover, when λmin < 0, for any optimal solution of (38), denoted by hkc,
hkc + [(√(⟨hkc, umin⟩² − ‖umin‖²(‖hkc‖² − r²)) − ⟨hkc, umin⟩)/‖umin‖²]·umin (39)
is a global minimizer of the original QCQP (8).
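A small sketch of the lifting step (39), assuming the convex minimizer h_c and an (approximate) bottom eigenvector u_min are available as NumPy arrays:

```python
import numpy as np

def recover_tr_solution(h_c, u_min, r):
    """Lift a minimizer of (38) to a global minimizer of (8) via (39):
    choose t >= 0 with ||h_c + t * u_min|| = r along the bottom eigenvector."""
    hu, uu = h_c @ u_min, u_min @ u_min
    t = (np.sqrt(hu**2 - uu * (h_c @ h_c - r**2)) - hu) / uu
    return h_c + t * u_min
```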
To perform the above reformulation, one needs to compute the exact eigenpair (λmin, umin). Nevertheless, as we shall see, it is sufficient to compute an approximate eigenpair (λ̃, ũ) such that
λmin ≤ λ̃ = ũᵀHkũ ≤ λmin + ε̃, (40)
where ε̃ is a target accuracy to be determined later. We note that ε̃ ≤ 2L2 w.l.o.g. since ‖Hk‖ ≤ L2. With this approximate eigenpair, it remains to solve the following convex problem
min_{h∈R^d, ‖h‖≤r} q̃k(h) = ⟨gk, h⟩ + (1/2)⟨(Hk − µ̃I)h, h⟩ + (1/2)µ̃r², (41)
where µ̃ = min{0, λ̃ − ε̃}. One can check that problem (41) approximates (38) well.
Corollary E.1. Let qk∗ and q̃k∗ be the minimum function values of (38) and (41), respectively. Assume λmin ≤ λ̃ = ũᵀHkũ ≤ λmin + ε̃. Then
|qk∗ − q̃k∗| ≤ ε̃r². (42)
We note that the above convex reformulation approach divides an indefinite QCQP into two subproblems: (i) computation of an approximate eigenpair (λ̃, ũ); (ii) solving the convex problem (41). As we shall see, by exploiting the finite-sum structure of the Hessian Hk, these two subproblems can be efficiently solved. We treat these two subproblems in the following two subsections, respectively.
E.2 FINDING THE SMALLEST EIGENVECTOR
To find a unit vector that satisfies requirement (40), we resort to the AppxPCA method (Allen-Zhu & Li, 2016), which first finds an approximate eigenvalue λ = λmin − ε̃ via binary search and then applies the Power method to the positive definite matrix (Hk − λI)⁻¹ for a logarithmic number of iterations. Computing (Hk − λI)⁻¹v for any vector v is equivalent to solving the ε̃-strongly convex problem (Allen-Zhu & Li, 2018)
min_u φk(u) := (1/2)uᵀ(Hk − λI)u − ⟨v, u⟩. (43)
We note that Hk = (1/|S|)·Σ_{i∈S} ∇²fi(xk). Specifically, in STRfree, either |S| = n (i.e., Hk is the full Hessian) or |S| = Õ(L1²/(L2ε)) by Lemma 4.1. Therefore, φk(·) can be expressed as a sum of non-convex functions
φk(u) = (1/|S|)·Σ_{i∈S} φki(u) = (1/|S|)·Σ_{i∈S} ((1/2)uᵀ(∇²fi(xk) − λI)u − ⟨v, u⟩). (44)
By observing that each φki is non-convex and has a (4L2)-Lipschitz gradient, we can use KatyushaX^s (Allen-Zhu, 2018a) to solve problem (43) in Õ(|S| + |S|^{3/4}√(L2/ε̃)) stochastic Hessian-vector product (i.e., ∇²fi(xk)u) evaluations. The following result, taken from (Agarwal et al., 2017, Section G.3), gives the overall computational complexity of AppxPCA.
Algorithm 9 Fast QCQP Solver
Input: Hk, gk, r, ε̃, ε̃1
1: Use AppxPCA to find (λ̃, ũ) satisfying (40), in which the matrix inverse is solved by KatyushaX^s;
2: Use KatyushaX^w to solve (41) up to accuracy ε̃1, i.e., find a vector h̃ such that q̃k(h̃) − q̃k∗ ≤ ε̃1 with high probability;
3: Return h̃ + (√(⟨h̃, ũ⟩² − ‖ũ‖²(‖h̃‖² − r²)) − ⟨h̃, ũ⟩)·ũ/‖ũ‖².

Algorithm 10 STRfree+
1: In the same setting as MetaAlgorithm 7,
2:   construct gradient estimator gk by Estimator 4;
3:   construct Hessian estimator Hk by
4:     Option I: Hk := ∇²F(xk);
5:     Option II: Draw s samples indexed by H and let Hk := ∇²f(xk; H);
6:   use Algorithm 9 to solve QCQP subproblems.

Lemma E.2. Let Hk = (1/|S|)·Σ_{i∈S} ∇²fi(xk) ∈ R^{d×d}, where ‖∇²fi(xk)‖ ≤ L2. With probability at least 1 − p, AppxPCA produces a unit vector u satisfying uᵀHku ≤ λmin + ε̃. The total stochastic Hessian-vector product oracle complexity is Õ(|S| + |S|^{3/4}√(L2/ε̃)).
E.3 SOLVING THE CONVEX QCQP
In what follows, we show that the convex problem (41) can be solved efficiently. We first observe that problem (41) has a finite-sum structure and can be rewritten as an unconstrained problem of the form
min h∈Rd
1 |S| ∑ i∈S q̃ki (h)+Ψ(h) = 1 |S| ∑ i∈S ( 〈gk,h〉+ 1 2 〈(∇2fi(xk)−µ̃I)h,h〉+ 1 2 µ̃r2 ) +Ψ(h), (45)
where Ψ(h) = 0 if ‖h‖ ≤ r, otherwise Ψ(h) = +∞. We note that each q̃ki (h) in (45) has (4L2)-Lipschitz continuous gradient since ‖∇2fi(xk)‖ ≤ L2 and ̃ ≤ 2L2. Therefore, we can use KatyushaXW (Allen-Zhu, 2018a) to solve (45). By (Allen-Zhu, 2018a, Theorem 4.6), KatyushaXW finds a point h such that E[q̃k(h)− q̃k∗ ] ≤ ̃1 using Õ(|S|+ |S|3/4 √ L2 · r/ √ ̃1) stochastic Hessianvector products, where ̃1 is the target accuracy to be determined later.
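Not KatyushaX^w itself, but for intuition, a minimal projected-gradient sketch for (41): the objective is (4L2)-smooth, so a 1/(4L2) step with projection onto the ball ‖h‖ ≤ r converges for this convex problem.

```python
import numpy as np

def solve_convex_qcqp(H_shifted, g, r, L2, iters=1000):
    """Projected gradient on (41); H_shifted = Hk - mu_tilde * I is PSD."""
    h = np.zeros_like(g)
    step = 1.0 / (4.0 * L2)                 # 1/L step for an L-smooth objective
    for _ in range(iters):
        h = h - step * (g + H_shifted @ h)  # gradient of the quadratic in (41)
        nrm = np.linalg.norm(h)
        if nrm > r:
            h *= r / nrm                    # Euclidean projection onto ||h|| <= r
    return h
```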
E.4 PUTTING IT ALL TOGETHER
The complete procedure of our fast QCQP solver is summarized in Algorithm 9. Combining all the above results and setting r = √(ε/L2), ε̃ = √(εL2)/2, and ε̃1 = ε̃r², one can find an approximate solution to QCQP (8) satisfying requirement (24) in Õ(|S| + |S|^{3/4}·L2^{0.25}/ε^{0.25}) stochastic Hessian-vector product evaluations. By replacing the Lanczos method with this solver in STRfree, we derive a new Hessian-free method called STRfree+, which is summarized in Algorithm 10. The following theorem establishes the overall runtime complexity of STRfree+ for finding an ε-SOSP.
Theorem E.1. Consider Algorithm 10 for solving problem (1). Let ζ = 1/3, r = √(ε/L2), K = 4√(L2)∆/ε^{1.5}, T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2))·log(4d/δ). The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. Besides, let ε̃ = √(εL2)/2 and ε̃1 = ε̃r² in Algorithm 9. To find an O(ε)-SOSP w.p. at least 1 − δ, the runtime complexity is Õ(d·min{n/ε^{1.5} + n^{0.75}/ε^{1.75}, 1/ε^{2.5} + √n/ε²}·log(1/δ)).
Proof. The proof directly follows from that in Sec. D.
We compare the runtime complexity of STRfree and STRfree+ with existing Hessian-free methods in Table 2. One can see that STRfree strictly outperforms Hessian-free Cubic. Besides, STRfree outperforms Fast-Cubic if n ≥ Ω(1/ε^{4/3}), which is a mild condition for large-scale problems in the moderate accuracy case. STRfree+ strictly outperforms both Hessian-free Cubic and Fast-Cubic. We note that the runtime analyses in (Tripuraneni et al., 2018; Zhou & Gu, 2019) rely on an additional assumption which states that for all x, with probability 1,
‖∇fi(x) − ∇F(x)‖ ≤ σ. (46)
Under this additional assumption, one can use the same argument as in Sections B.2 and B.3 to prove that STRfree achieves a runtime complexity of Õ(d·min{n/ε^{1.75}, n^{0.5}/ε² + 1/ε^{2.75}, 1/ε³})¹. Similarly, the runtime complexity of STRfree+ would be Õ(d·min{n/ε^{1.5} + n^{0.75}/ε^{1.75}, 1/ε^{2.5} + √n/ε², 1/ε³}). In this sense, both STRfree and STRfree+ outperform Stochastic Cubic and SRVRCfree.
F ADDITIONAL EXPERIMENTAL RESULTS
F.1 MORE EXPERIMENTAL DETAILS
Descriptions of Testing Datasets. We briefly introduce the seven testing datasets in the manuscript. Among them, six datasets are provided by the LibSVM website2, including (a9a, ijcnn, codrna, phishing, w8a and epsilon). The detailed information is summarized in Table 3. We can observe that these datasets are different from each other in feature dimension, training samples, etc.
¹To obtain this complexity, one needs to replace the full gradient in line 2 of Estimator 4 (and line 6 of MetaAlgorithm 7) with a mini-batch stochastic gradient when n ≥ Ω(1/ε²).
²https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Experimental Settings. In the manuscript, following SVRC (Zhou et al., 2018c) and Lite-SVRC (Zhou et al., 2018b), we select hyper-parameters from fixed grids, namely s1 from {0.2n, 0.6n, n}, s2 from {0.01n, 0.1n, 0.2n}, and p1 and p2 from {0.01n^{0.5}, 0.05n^{0.5}, 0.1n^{0.5}}. For the Hessian estimation at the beginning of each p2 iterations, we use the full Hessian. Similarly, for the gradient estimation at the beginning of each p1 iterations, we adopt the full gradient.
Memory Analysis. SVRC, Lite-SVRC, and our method need to store the previous and current gradient and Hessian, and thus their memory complexity is 2(d² + d). TR, CR (ARC) and SCR need to compute the current gradient and Hessian and thus have complexity d² + d. So the memory complexities are of the same order, but our method is much faster than TR, CR and SCR, as validated by both theory and experiments.
F.2 MORE EXPERIMENTS
Here we give more experimental results on the gradient norm v.s. the algorithm running time and the Hessian sample complexity. Due to the space limit, in the manuscript we only provide the gradient-norm results on the codrna dataset. Here we provide results on the a9a and ijcnn datasets in Figure 4. One can observe that on both the logistic regression with non-convex regularizer and the nonlinear least squares problems, the proposed algorithm always shows sharper convergence behavior in terms of both the running time and the Hessian sample complexity. These observations are consistent with the results in Figure 2 in the manuscript. All these results demonstrate the high efficiency of our proposed algorithm and also confirm our theoretical implications.

1. How do the proposed methods utilize inexact gradient and Hessian estimation, and how does it impact performance?
2. Can you explain why biased estimators are used, and how they can still guarantee convergence despite being unbiased?
3. Why is the trust region radius fixed in Meta-Algorithm 1, and how does it affect the outcome?
4. What is the space complexity analysis of the proposed approach, and how does it compare to first-order algorithms?
5. How do Lemmas 4.1 and 4.2 relate to the probabilistic nature of the algorithm, and what implications does this have for practical applications?
6. How do the experimental results compare to the theoretical findings, and what might be causing any discrepancies?

Review
In this paper, the authors improve the state-of-the-art result by using inexact gradient and Hessian estimation in the training and proving that, under this setting, the method still performs well. In order to control the difference between the gradient and its approximation, the authors use variance-reduced estimators to exploit the correlation between consecutive iterations. In addition, the authors consider two cases depending on the importance of the second-order oracle, and use the Hessian to improve the approximation of the gradient when the first-order and second-order oracles are equally important. Furthermore, the authors propose a refined algorithm for practice which only uses stochastic gradient and Hessian-vector product information and shows better experimental results.
In general, the paper is well written and easy to follow, but I still have some questions about this paper.
First, there is no explanation of why using biased estimators can still guarantee convergence. For Estimator 3 and Estimator 4, it seems that the stochastic gradient and Hessian are not unbiased approximations of the true values. That is a bit strange, since most popular stochastic estimators, including the stochastic gradient and the stochastic variance-reduced gradient, are unbiased approximations, which guarantees that when the number of samples is large enough, the estimators can approximate the true quantities very well. More discussion about this issue is needed.
Second, to the best of my knowledge, the trust region radius changes across iterations in the basic trust region algorithm. However, in MetaAlgorithm 1, the radius is fixed to a very small quantity related to the accuracy \epsilon. I am very surprised that with a fixed small radius, STR can still achieve the best result among all baseline algorithms, especially compared with a trust region algorithm with adaptive radius. Can the authors explain this phenomenon?
Third, there is no discussion on space complexity. In practice, it is important to consider the space complexity. However, this work did not provide any space complexity analysis, especially to compare with first-order algorithms. It is interesting to see the trade-off between space complexity and time complexity among both first-order algorithms and second-order algorithms.
Fourth, the results of Lemma 4.1 and 4.2 all hold with high probability. When the probability $\delta$ is small, the number of samples is large. Thus, it is not fair that the authors simply ignored this when comparing their algorithms with deterministic algorithms. Furthermore, in practice, choosing a small $\delta$ may make the total number of samples very big. The authors need to make more comments on this issue.
Finally, I think the experimental results contradict the theoretical results. From Figure 1, it can be seen that the STR method, along with many other baseline algorithms, converges to the global minimum at a linear rate (a9a, epoch). However, the convergence analysis provided by the authors claims that the convergence speed is sublinear. I believe such a difference is due to the fact that the initialized parameter is near the global minimum, so the optimization landscape here is actually convex rather than non-convex. That makes the experimental results vacuous.
ICLR | Title
A Stochastic Trust Region Method for Non-convex Minimization
Abstract
We target the problem of finding a local minimum in non-convex finite-sum minimization. Towards this goal, we first prove that the trust region method with inexact gradient and Hessian estimation can achieve a convergence rate of order O(1/k) as long as those differential estimations are sufficiently accurate. Combining such result with a novel Hessian estimator, we propose a sample-efficient stochastic trust region (STR) algorithm which finds an ( , √ )-approximate local minimum within Õ( √ n/ ) stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor ofO(n). Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify theoretical conclusions and the efficiency of the proposed algorithms.
N/A
√ )-approximate local
minimum within Õ( √ n/ 1.5) stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor ofO(n1/6). Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify theoretical conclusions and the efficiency of the proposed algorithms.
1 INTRODUCTION
We consider the following finite-sum non-convex minimization problem
min_{x∈R^d} F(x) = (1/n)·Σ_{i=1}^{n} fi(x), (1)
where each (non-convex) component function fi : R^d → R is assumed to have an L1-Lipschitz continuous gradient and an L2-Lipschitz continuous Hessian. Since first-order stationary points could be saddle points with inferior generalization performance (Dauphin et al., 2014), in this work we are particularly interested in computing (ε, √ε)-approximate second-order stationary points (ε-SOSP):
‖∇F(xε)‖ ≤ ε and ∇²F(xε) ⪰ −√(εL2)·I. (2)
To find a local minimum of problem (1), the cubic regularization approach (Nesterov & Polyak, 2006) and the trust region algorithm (Conn et al., 2000; Curtis et al., 2017) are two classical methods. Specifically, cubic regularization forms a cubic surrogate for the objective F(x) by adding a third-order regularization term to the second-order Taylor expansion, and minimizes it iteratively. Such a method is proved to achieve an O(1/k^{2/3}) global convergence rate and thus needs O(n/ε^{1.5}) stochastic first- and second-order oracle queries, namely evaluations of stochastic gradients and Hessians, to achieve a point that satisfies (2). On the other hand, trust region algorithms estimate the objective with its second-order Taylor expansion but minimize it only within a local region. Recently, Curtis et al. (2017) propose a trust region variant that achieves the same convergence rate as the cubic regularization approach. But both methods require computing full gradients and Hessians of F(x) and thus suffer from high computational cost in large-scale problems.
To avoid costly exact differential evaluations, many works explore the finite-sum structure of problem (1) and develop stochastic cubic regularization approaches. Both Kohler & Lucchi (2017b) and Xu et al. (2017) propose to directly subsample the gradient and Hessian in the cubic surrogate function, and achieve Õ(1/ε^{3.5}) and Õ(1/ε^{2.5}) stochastic first- and second-order oracle complexities, respectively. By plugging a stochastic variance-reduced estimator (Johnson & Zhang, 2013) and the Hessian tracking technique (Gower et al., 2018) into the gradient and Hessian estimation, the approach in (Zhou et al., 2018a) improves both the stochastic first- and second-order oracle complexities to Õ(n^{0.8}/ε^{1.5}). Recently, Zhang et al. (2018) and Zhou et al. (2018b) develop more efficient stochastic cubic regularization variants, which further reduce the stochastic second-order oracle complexity to Õ(n^{2/3}/ε^{1.5}) at the cost of increasing the stochastic first-order oracle complexity to Õ(n^{2/3}/ε^{2.5}).
Contributions: In this paper we propose and exploit a formulation in which we make explicit control of the step size in the trust region method. This idea is leveraged to develop two efficient stochastic trust region (STR) approaches. We tailor our methods to achieve state-of-the-art oracle complexities under the following two measurements: (i) the stochastic second-order oracle complexity is prioritized; (ii) the stochastic first- and second-order oracle complexities are treated equally. Specifically, in Setting (i), our method STR1 employs a newly proposed estimator to approximate the Hessian and adopts the estimator in (Fang et al., 2018) for gradient approximation. Our novel Hessian estimator maintains an accurate second-order differential approximation with lower amortized oracle complexity. In this way, STR1 achieves Õ(min{1/ε², √n/ε^{1.5}}) stochastic second-order oracle complexity. This is lower than existing results for solving problem (1). In Setting (ii), our method STR2 substitutes the gradient estimator in STR1 with one that integrates stochastic gradient and Hessian together to maintain an accurate gradient approximation. As a result, STR2 achieves convergence within Õ(n^{3/4}/ε^{1.5}) overall stochastic first- and second-order oracle queries. Finally, based on STR, we further develop Hessian-free STR algorithms, namely STRfree and STRfree+, which outperform existing Hessian-free algorithms theoretically.
1.1 RELATED WORK
Computing a local minimum of a non-convex optimization problem has gained a considerable amount of attention in recent years. Both cubic regularization (CR) approaches (Nesterov & Polyak, 2006) and trust region (TR) algorithms (Conn et al., 2000; Curtis et al., 2017) can escape saddle points and find a local minimum by iterating the variable along a direction related to the eigenvector of the Hessian with the most negative eigenvalue. As CR heavily depends on the regularization parameter of the cubic term, Cartis et al. (2011) propose an adaptive cubic regularization (ARC) approach to boost efficiency by adaptively tuning the regularization parameter according to the current objective decrease. Noting the high cost of full gradient and Hessian computation in ARC, sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a) samples a subset of data points to estimate the full gradient and Hessian. Recently, by exploring the finite-sum structure of the target problem, many works incorporate the variance reduction technique (Johnson & Zhang, 2013) into CR and propose stochastic variance-reduced methods. For example, Zhou et al. (2018c) propose stochastic variance-reduced cubic regularization (SVRC), which integrates the stochastic variance-reduced gradient estimator (Johnson & Zhang, 2013) and the Hessian tracking technique (Gower et al., 2018) with CR. Such a method is proved to be at least O(n^{1/5}) faster than CR and TR. Then Zhou et al. (2018b) use an adaptive gradient batch size and a constant Hessian batch size, developing Lite-SVRC to further reduce the stochastic second-order oracle complexity Õ(n^{4/5}/ε^{1.5}) of SVRC to Õ(n^{2/3}/ε^{1.5}) at the cost of higher gradient computation. Similarly, besides tuning the gradient batch size, Zhang et al. (2018) further adaptively sample a certain number of data points to estimate the Hessian and prove the proposed method to have the same stochastic second-order oracle complexity as Lite-SVRC.
2 PRELIMINARY
Notation. We use ‖v‖ to denote the Euclidean norm of a vector v and ‖A‖ to denote the spectral norm of a matrix A. Let S be a set of component indices. We define the minibatch average of component functions by f(x; S) := (1/|S|)·Σ_{i∈S} fi(x). Then we specify the assumptions that are necessary for the analysis of our methods.
MetaAlgorithm 1 Inexact Trust Region Method
Input: initial point x0, step size r, number of iterations K, construction of differential estimators gk and Hk
1: for k = 0 to K − 1 do
2:   Compute hk and λk by solving (8);
3:   xk+1 := xk + hk;
4:   if λk ≤ 3√(ε/L2) then
5:     Output xε = xk+1;
6:   end if
7: end for
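A compact Python sketch of this loop; `estimate_grad`, `estimate_hess`, and the QCQP solver `trust_region_subproblem` (returning the step and its dual variable; a toy version is sketched after Lemma 2.1 below) are assumed helpers, not the paper's code.

```python
import numpy as np

def inexact_trust_region(x0, eps, L2, K, estimate_grad, estimate_hess,
                         trust_region_subproblem):
    x = x0
    r = np.sqrt(eps / L2)              # fixed trust-region radius (Theorem 3.1)
    for k in range(K):
        g = estimate_grad(k, x)        # needs ||g - grad F(x)|| <= eps/6
        H = estimate_hess(k, x)        # needs ||H - hess F(x)|| <= sqrt(eps*L2)/3
        h, lam = trust_region_subproblem(H, g, r)
        x = x + h
        if lam <= 3.0 * np.sqrt(eps / L2):
            return x                   # an O(eps)-SOSP once the dual is small
    return x
```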
Assumption 2.1. F is bounded from below and its global optimum is achieved at x∗. We denote ∆ = F(x0) − F(x∗).
Assumption 2.2. Each fi : R^d → R has an L1-Lipschitz continuous gradient: for any x, y ∈ R^d,
‖∇fi(x) − ∇fi(y)‖ ≤ L1‖x − y‖. (3)
Assumption 2.3. Each fi : R^d → R has an L2-Lipschitz continuous Hessian: for any x, y ∈ R^d,
‖∇²fi(x) − ∇²fi(y)‖ ≤ L2‖x − y‖. (4)
2.1 TRUST REGION METHOD
Here we briefly introduce the trust region method (Conn et al., 2000). In each step, it first solves the Quadratically Constrained Quadratic Program (QCQP) defined as
hk := argmin_{h∈R^d, ‖h‖≤r} ⟨∇F(xk), h⟩ + (1/2)⟨∇²F(xk)h, h⟩, (5)
where r is the trust-region radius. Then it updates the new variable as
xk+1 := xk + hk. (6)
Since ∇²F(xk) is indefinite, the trust-region subproblem (5) is non-convex. But its global optimizer can be characterized by the following lemma (Corollary 7.2.2 in (Conn et al., 2000)).
Lemma 2.1. Any global minimizer of problem (5) satisfies the equation
(∇²F(xk) + λI)hk = −∇F(xk), (7)
where the dual variable λ ≥ 0 should satisfy ∇²F(xk) + λI ⪰ 0 and λ(‖hk‖ − r) = 0.
In particular, a standard QCQP solver returns both the minimizer hk and the corresponding dual variable λ of subproblem (5). In the following section, we first prove that the deterministic trust-region updates (5) and (6) converge at the rate of O(1/k^{2/3}), much sharper than the existing provable convergence rate O(1/√k) (Conn et al., 2000), and then develop a more efficient stochastic trust-region approach.
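For illustration, Lemma 2.1 can be turned into a small dense solver that searches the dual variable λ by bisection on the monotone map λ ↦ ‖h(λ)‖. This sketch ignores the degenerate "hard case" and is an assumption-laden toy, not the Lanczos solver used later in the paper.

```python
import numpy as np

def trust_region_subproblem(H, g, r, tol=1e-8, iters=100):
    """Solve min <g,h> + 0.5 <Hh,h> s.t. ||h|| <= r via the KKT system (7)."""
    d = len(g)
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + tol  # make H + lam I PSD
    h = np.linalg.solve(H + lam_lo * np.eye(d), -g)
    if np.linalg.norm(h) <= r and lam_lo <= 2 * tol:
        return h, 0.0                # interior solution: lambda = 0 is optimal
    lam_hi = lam_lo + 1.0            # grow lambda until the step fits the region
    while np.linalg.norm(np.linalg.solve(H + lam_hi * np.eye(d), -g)) > r:
        lam_hi *= 2.0
    for _ in range(iters):           # ||h(lam)|| decreases as lam grows
        lam = 0.5 * (lam_lo + lam_hi)
        h = np.linalg.solve(H + lam * np.eye(d), -g)
        lam_lo, lam_hi = (lam, lam_hi) if np.linalg.norm(h) > r else (lam_lo, lam)
    return h, lam
```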
3 METHODOLOGY
Here we first introduce a general inexact trust region method, summarized in MetaAlgorithm 1. It accepts inexact gradient estimates gk and Hessian estimates Hk as input to the QCQP subproblem
hk := argmin_{h∈R^d, ‖h‖≤r} ⟨gk, h⟩ + (1/2)⟨Hkh, h⟩. (8)
Similar to (5), Lemma 2.1 characterizes the global optimum of problem (8), which can be efficiently solved by the Lanczos method (Gould et al., 1999). Assume the dual variable of the minimizer hk is λk.
We prove that such an inexact trust region method achieves the optimal O(1/k^{2/3}) convergence rate when the estimates gk and Hk at each iteration are sufficiently close to their exact counterparts ∇F(xk) and ∇²F(xk), respectively:
‖gk − ∇F(xk)‖ ≤ ε/6, ‖Hk − ∇²F(xk)‖ ≤ √(εL2)/3. (9)
Algorithm 2 STR1
Input: initial point x0, step size r, number of iterations K
1: for k = 1 to K do
2:   Construct gradient estimator gk by Estimator 4;
3:   Construct Hessian estimator Hk by Estimator 3;
4:   Compute hk and λk by solving (8);
5:   xk+1 := xk + hk;
6:   if λk ≤ 3√(ε/L2) then
7:     Output xε = xk+1;
8:   end if
9: end for
Such a result allows us to derive stochastic trust-region variants with novel differential estimators that are tailored to ensure the optimal convergence rate. We state our formal result in Theorem 3.1, whose proof is deferred to Appendix B.1 due to the space limit.
Theorem 3.1 (Main Result). Consider problem (1) under Assumptions 2.1-2.3. If the differential estimators gk and Hk satisfy Eqn. (9) for all k, MetaAlgorithm 1 finds an O(ε)-SOSP in no more than K = O(√(L2)∆/ε^{1.5}) iterations by setting the trust-region radius as r = √(ε/L2).
Remark 3.1. We emphasize that MetaAlgorithm 1 degenerates to the exact trust region method by taking gk = ∇F(xk) and Hk = ∇²F(xk). This result is of independent interest because it is the first proof that the vanilla trust region method has the optimal O(1/k^{2/3}) convergence rate. A similar rate is achieved by Curtis et al. (2017), but with a much more complicated trust region variant.
Theorem 3.1 shows the explicit step size control of the trust region method: since the dual variable satisfies λk > 3ε^{0.5}/√L2 > 0 for all but the last iteration, we always find the solution to the trust-region subproblem (8) on the boundary, i.e., ‖hk‖ = r, according to the complementary condition (15) in Appendix B.1. Such exact step size control is missing in the cubic-regularization method, where the step size is implicitly decided by the cubic regularization parameter.
More importantly, we emphasize that such explicit step size control is crucial to the sample efficiency of our variance reduced differential estimators. The essence of variance reduction is to exploit the correlations between the differentials in consecutive iterations. Intuitively, when two neighboring iterates are close, so are their differentials due to the Lipschitz continuity, and hence a smaller number of samples suffices to maintain the accuracy of the estimators. On the other hand, a smaller step size reduces the per-iteration objective decrease, which harms the convergence rate of the algorithm (see the proof of Theorem 3.1). Therefore, the explicit step size control in the trust region method allows us to trade off the per-iteration sample complexity against the convergence rate, from which we can derive stochastic trust region approaches with state-of-the-art sample efficiency. In contrast, existing trust region methods change the step size at every iteration according to the progress made, which requires loss evaluations that can be as expensive as gradient computations (e.g., the non-convex linear model in Section 7) and is thus prohibitive for large-scale problems.
4 STOCHASTIC TRUST REGION METHOD: TYPE I
Having the inexact trust region method as a prototype, we now present our first sample-efficient stochastic trust region method, namely STR1, in Algorithm 2, which emphasizes cheaper stochastic second-order oracle complexity. As Theorem 3.1 already guarantees the optimal convergence rate of MetaAlgorithm 1 when the gradient estimator gk and the Hessian estimator Hk meet requirement (9), here we focus on constructing such novel differential estimators. Specifically, we first present our Hessian estimator in Estimator 3 and our first gradient estimator in Estimator 4, both of which exploit the trust region radius r = √(ε/L2) to reduce their variances.
4.1 HESSIAN ESTIMATOR
Our epoch-wise Hessian estimator Hk is given in Estimator 3, where p2 controls the epoch length and s2 (and optionally s′2) controls the minibatch size. At the beginning of each epoch, Estimator 3 has two options, designed for different target accuracies: Option I is preferable for the high accuracy
Estimator 3 Hessian Estimator Input: Epoch length p2, sample size s2, s′2 (optional)
1: if mod(k, p2)= 0 then 2: Option I: high accuracy case (small ) 3: Hk := ∇2F (xk); 4: Option II: low accuracy case (moderate ) 5: Draw s′2 samples indexed byH′ and let Hk := ∇2f(xk;H′); 6: else 7: Draw s2 samples indexed byH and let Hk := ∇2f(xk;H)−∇2f(xk−1;H) + Hk−1; 8: end if
Estimator 4 Gradient Estimator: Case (1)
1: if mod(k, p1) = 0 then
2:   gk := ∇F(xk);
3: else
4:   Draw s1 samples indexed by G and let gk := ∇f(xk;G) − ∇f(xk−1;G) + gk−1;
5: end if
case (ε < O(1/n)), where we compute the full Hessian to avoid approximation error, while Option II is designed for the moderate-accuracy case (ε > O(1/n)), where an approximate Hessian estimator suffices. Then p2 iterations follow, with Hk defined in a recurrent manner. Such recurrent estimators exist for the first-order case (Nguyen et al., 2017; Fang et al., 2018), but their bounds only hold under the vector ℓ2 norm. Here we generalize them to Hessian estimation with a matrix spectral norm bound.
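To make the recurrence concrete, the following is a minimal NumPy sketch of one step of Estimator 3 (Option II); the callable hess_f (component Hessian) and the sampling interface are hypothetical placeholders for the problem-specific quantities above, not part of the original algorithm description.

import numpy as np

def hessian_estimator(k, x, x_prev, H_prev, hess_f, n, p2, s2, s2_prime, rng):
    # One step of the epoch-wise Hessian estimator (Estimator 3, Option II).
    # hess_f(i, x) returns the d x d Hessian of the i-th component at x.
    if k % p2 == 0:
        # Epoch start: subsampled Hessian at the current iterate.
        idx = rng.choice(n, size=s2_prime, replace=True)
        return np.mean([hess_f(i, x) for i in idx], axis=0)
    # Recurrent step: correct the previous estimate with a small minibatch
    # of Hessian differences; accuracy relies on ||x - x_prev|| <= r.
    idx = rng.choice(n, size=s2, replace=True)
    diff = np.mean([hess_f(i, x) - hess_f(i, x_prev) for i in idx], axis=0)
    return H_prev + diff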
The following lemma analyzes the amortized stochastic second-order oracle (Hessian) complexity for Estimator 3 to meet the requirement in Theorem 3.1. As we need an error bound under the spectral norm, we appeal to the matrix Azuma inequality (Tropp, 2012). The proof is deferred to Appendix B.2.
Lemma 4.1. Assume Algorithm 2 takes the trust region radius r = √(ε/L2) as in Theorem 3.1. For any k ≥ 0, Estimator 3 produces an estimator Hk of the second-order differential ∇²F(xk) such that ‖Hk − ∇²F(xk)‖ ≤ √(εL2)/3 with probability at least 1 − δ/K0 if we set (1) p2 = √n and s2 = 32√n log(dK0/δ) in Option I, or (2) p2 = L1/(2√(εL2)), s′2 = (16L1²/(εL2)) log(dK0/δ), and s2 = (32L1/√(εL2)) log(dK0/δ) in Option II. Here K0 is a constant to be determined later. Consequently, the amortized per-iteration stochastic second-order oracle complexity to construct Hk is no more than 2s2 = min{64√n log(dK0/δ), (64L1/√(εL2)) log(dK0/δ)}.
4.2 GRADIENT ESTIMATOR: CASE (1)
When the stochastic second-order oracle complexity is prioritized, we directly employ the SPIDER gradient estimator to construct gk (Fang et al., 2018). Similar to the construction for Hk, the estimator gk is also constructed in an epoch-wise manner as presented in Estimator 4, where p1 controls the epoch length and s1 controls the minibatch size.
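For concreteness, here is a minimal NumPy sketch of one step of Estimator 4; grad_f (component gradient), grad_F (full gradient), and the sampling interface are hypothetical placeholders standing in for the quantities above.

import numpy as np

def spider_gradient(k, x, x_prev, g_prev, grad_f, grad_F, n, p1, s1, rng):
    # One step of the SPIDER-style gradient estimator (Estimator 4).
    if k % p1 == 0:
        return grad_F(x)  # epoch start: exact gradient resets the recursion
    # Recurrent step: variance-reduced correction with a small minibatch.
    idx = rng.choice(n, size=s1, replace=True)
    diff = np.mean([grad_f(i, x) - grad_f(i, x_prev) for i in idx], axis=0)
    return g_prev + diff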
We now analyze the stochastic first-order oracle complexity to meet the requirement in Theorem 3.1. Lemma 4.2. Assume Algorithm 2 takes the trust region radius r = √(ε/L2). Estimator 4 produces an estimator gk of the first-order differential ∇F(xk) such that ‖gk − ∇F(xk)‖ ≤ ε/6 with probability at least 1 − δ/K0 for any k ≥ 0, if we set p1 = max{1, √(nεL2/(cL1² log(K0/δ)))} and s1 = min{n, √(cnL1² log(K0/δ)/(εL2))}, where the constant c = 1152 and K0 is a constant to be determined later. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct gk is min{n, √(4cnL1² log(K0/δ)/(εL2))}.
The proof of Lemma 4.2 is similar to that of Lemma 4.1 and is deferred to Appendix B.3. These two lemmas only guarantee that the differential estimators satisfy requirement (9) in a single iteration; they can be extended to hold for all k by using the union bound with K0 = 2K, where K denotes the number of iterations. Combining this lifted result with Theorem 3.1, we can establish the computational complexity bound as follows.
Algorithm 5 STR2 Input: initial point x0, step size r, number of iterations K
1: for k = 1 to K do
2:   Construct gradient estimator gk by Estimator 6;
3:   Construct Hessian estimator Hk by Estimator 3;
4:   Compute hk and λk by solving (8);
5:   xk+1 := xk + hk;
6:   if λk ≤ 3√(ε/L2) then
7:     Output x = xk+1;
8:   end if
9: end for
Estimator 6 Gradient Estimator: Case (2)
1: if mod(k, p1) = 0 then
2:   Let x̃ := xk, gk := ∇F(x̃);
3: else
4:   Draw s1 samples indexed by G;
5:   gk := ∇f(xk;G) − ∇f(xk−1;G) + gk−1 + [∇²F(x̃) − ∇²f(x̃;G)](xk − xk−1);
6: end if
Corollary 4.1 (Computational Complexity of STR1). Assume Algorithm 2 uses Estimator 4 to construct the first-order differential estimator gk and Estimator 3 to construct the second-order differential estimator Hk. To find a 12ε-SOSP with probability at least 1 − δ, the overall stochastic first-order oracle complexity is O(min{n∆√L2/ε^1.5, (√n L1∆/ε²) log(∆√L2/(δε^1.5))}) and the overall stochastic second-order oracle complexity is O(min{√n ∆√L2/ε^1.5, L1∆/ε²} log(d∆√L2/(δε^1.5))).
From Corollary 4.1 we see that Õ(min{√n/ε^1.5, 1/ε²}) stochastic second-order oracle queries suffice for STR1 to find an ε-SOSP, which is significantly better than both the subsampled cubic regularization method, Õ(1/ε^2.5) (Kohler & Lucchi, 2017a), and the variance-reduction-based ones, Õ(n^{2/3}/ε^1.5) (Zhou et al., 2018b; Zhang et al., 2018). Recently, Zhou & Gu (2019) developed a stochastic recursive variance-reduced cubic regularization (SRVRC) method which finds an (ε, √ε)-approximate local minimum with Õ(min{n/ε^1.5, 1/ε³}) SFO and Õ(min{√n/ε^1.5, 1/ε²}) SSO. But the result of SRVRC needs to assume the stochastic gradient to be bounded, i.e., ‖∇fi(x) − ∇F(x)‖ ≤ σ. With this extra assumption, STR1 enjoys Õ(min{n/ε^1.5, √n/ε², 1/ε³}) SFO and Õ(min{√n/ε^1.5, 1/ε²}) SSO. Thus, if 1/ε ≤ n ≤ 1/ε², STR1 outperforms SRVRC; otherwise they have the same complexities.
5 STOCHASTIC TRUST REGION METHOD: TYPE II
In the previous section, we focused on the setting where the stochastic second-order oracle complexity is prioritized over the stochastic first-order oracle complexity; in this setting, STR1 achieves state-of-the-art efficiency. In this section, we consider a different complexity measure in which first-order and second-order oracle complexities are treated equally, and our goal is to minimize the maximum of the two. We note that the best existing result is the Õ(n^{4/5}/ε^1.5) complexity of the SVRC method (Zhou et al., 2018c). Since the Hessian estimator Hk of STR1 already delivers a superior Õ(√n/ε^1.5) stochastic Hessian complexity, in STR2 (see Algorithm 5) we retain Estimator 3 for second-order differential estimation and use Estimator 6 to further reduce the stochastic gradient complexity.
5.1 GRADIENT ESTIMATOR: CASE (2)
When stochastic gradient and Hessian complexities are equally important, we use the Hessian to improve the gradient estimation. Denote x(a) = a·xt + (1 − a)·x̃. From Assumption 2.3, we have
‖∇fi(xt) − ∇fi(x̃) − ∇²fi(x̃)(xt − x̃)‖ = ‖∫₀¹ [∇²fi(x(a)) − ∇²fi(x̃)](xt − x̃) da‖ ≤ (L2/2)‖xt − x̃‖².
This property can be used to sharpen the bound of Lemma 4.2 for Estimator 4. Specifically, define the correction
ck = [∇²F(x̃) − ∇²f(x̃;G)](xk − xk−1),
where x̃ is a reference point updated in an epoch-wise manner. Estimator 6 adds ck to the estimator from Estimator 4; note that in Estimator 6 the first- and second-order oracle complexities coincide. A minimal sketch of this corrected estimator is given below, after which we analyze the first-order (and second-order) oracle complexity needed to meet requirement (9).
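Here is a minimal NumPy sketch of one step of Estimator 6; grad_f, hess_f, grad_F, and hess_F_ref (the full Hessian at the reference point, computed once per epoch) are hypothetical placeholders.

import numpy as np

def corrected_gradient(k, x, x_prev, g_prev, x_ref, hess_F_ref,
                       grad_f, grad_F, hess_f, n, p1, s1, rng):
    # One step of Estimator 6: the SPIDER update plus the correction c_k.
    if k % p1 == 0:
        return grad_F(x)  # epoch start; the caller also resets x_ref := x
    idx = rng.choice(n, size=s1, replace=True)
    diff = np.mean([grad_f(i, x) - grad_f(i, x_prev) for i in idx], axis=0)
    # c_k = [full Hessian at x_ref - minibatch Hessian at x_ref](x - x_prev),
    # using the same minibatch G as the gradient differences.
    H_ref_batch = np.mean([hess_f(i, x_ref) for i in idx], axis=0)
    c_k = (hess_F_ref - H_ref_batch) @ (x - x_prev)
    return g_prev + diff + c_k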
Lemma 5.1. Assume Algorithm 5 takes the trust region radius r = √(ε/L2) as in Theorem 3.1. For any k ≥ 0, Estimator 6 produces an estimator gk of the first-order differential ∇F(xk) such that ‖gk − ∇F(xk)‖ ≤ ε/6 with probability at least 1 − δ/K0, if we set p1 = n^{1/4} and s1 = c·n^{3/4} log(K0/δ), where c = 1152 and K0 is a constant to be determined. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct gk is 2s1 = 2c·n^{3/4} log(K0/δ).
The proof of Lemma 5.1 is similar to that of Lemma 4.1 and is deferred to Appendix B.4. As in the previous section, Lemma 5.1 only guarantees that the gradient estimator satisfies requirement (9) in a single iteration. This result can be extended to hold for all k by the union bound with K0 = 2K, which together with Theorem 3.1 gives the following corollary. Corollary 5.1 (Computational Complexity of STR2). Algorithm 5 finds a 12ε-SOSP with probability at least 1 − δ within O((n^{3/4}∆√L2/ε^1.5)·log(∆√L2/(δε^1.5))) overall stochastic first-order oracle queries and O((n^{3/4}∆√L2/ε^1.5)·log(d∆√L2/(δε^1.5))) overall stochastic second-order oracle queries.
Corollary 5.1 shows that to find an ε-SOSP, both the SFO and SSO complexities of STR2 are Õ(n^{3/4}/ε^1.5), which surpasses the best existing result Õ(n^{4/5}/ε^1.5) of (Zhou et al., 2018c).
6 PRACTICAL STOCHASTIC TRUST REGION VARIANTS
6.1 HANDLING INEXACT QCQP SOLUTIONS
One drawback of MetaAlgorithm 1 is that it requires the exact solution to the QCQP subproblem (8) and uses the dual variable as stopping criterion. We address this problem by developing a practical variant, MetaAlgorithm 7, which admits inexact QCQP solutions without access to the dual variable. This algorithm repeatedly invokes a procedure called INEXACTTRWEAK, which, as we shall see, outputs an O(ε)-SOSP with a constant probability of 2/3 in O(1/ε^1.5) iterations. By repeatedly invoking INEXACTTRWEAK Θ(log(1/δ)) times, MetaAlgorithm 7 boosts the probability to (1 − δ) for any desired δ. This repeating technique has been studied by, e.g., (Allen-Zhu & Li, 2018; Allen-Zhu, 2018b). To test whether the t-th run outputs an O(ε)-SOSP, we need to compute ‖∇F(xt)‖ and the smallest eigenvalue of ∇²F(xt). The latter can be approximated by solving the QCQP
vt := argmin_{‖v‖≤1} ψt(v) = ⟨Htv, v⟩, (10)
where Ht is the full Hessian ∇²F(xt) or its estimate. One can show that MetaAlgorithm 7 finds an O(ε)-SOSP w.p. at least (1 − δ) in Õ(1/ε^1.5) iterations. We defer the detailed analysis to Appendix C.
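Since the minimum of (10) over the unit ball is min{0, λmin(Ht)}, it can be approximated by any smallest-eigenvector routine. The following NumPy sketch uses power iteration on a shifted matrix as a simple stand-in; Lanczos would be used in practice for faster convergence.

import numpy as np

def approx_min_eigpair(H, num_iters=200, rng=None):
    # Approximate the smallest eigenpair of a symmetric matrix H by power
    # iteration on c*I - H with c >= ||H||, so the target eigenvalue of
    # the shifted matrix is the largest one. A stand-in for solving (10).
    rng = np.random.default_rng() if rng is None else rng
    d = H.shape[0]
    c = np.linalg.norm(H, 2) + 1.0  # spectral-norm shift keeps c*I - H PSD
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = c * v - H @ v
        v /= np.linalg.norm(v)
    return v @ H @ v, v  # Rayleigh quotient approximates psi_t's minimum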
6.2 HESSIAN-FREE IMPLEMENTATION
Based on MetaAlgorithm 7, we propose a Hessian-free method named STRfree, which is summarized in Algorithm 8. STRfree leverages the full/stochastic Hessian and Estimator 4 to construct Hk and gk, respectively. Besides, it uses the Lanczos method (Gould et al., 1999; Carmon & Duchi, 2018) as the QCQP solver, which can be implemented in a Hessian-free manner (i.e., using only Hessian-vector products without explicit Hessian matrix evaluations). Thus, Hk is only accessed through Hessian-vector products and is never explicitly constructed. Since Hessian-vector products can be computed in linear time (in terms of the dimension d) for many machine learning problems (Allen-Zhu, 2018b; Agarwal et al., 2017), Hessian-free methods are usually more practical than Hessian-based ones for high-dimensional problems. The following theorem, whose proof can be found in Appendix D, establishes the runtime complexity (i.e., the total complexity of stochastic gradient and Hessian-vector product evaluations (Zhou & Gu, 2019)) of STRfree.
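To illustrate why Hessian-vector products cost only O(nd) for such models, here is a NumPy sketch for the plain logistic loss; it is an illustrative example, not the paper's implementation, and the non-convex regularizer of Section 7 would add an extra diagonal term.

import numpy as np

def logistic_hvp(w, v, X, y):
    # Hessian-vector product for (1/n) sum_i log(1 + exp(-y_i w^T x_i))
    # without forming the d x d Hessian: H v = X^T diag(s(1-s)) X v / n.
    n = X.shape[0]
    s = 1.0 / (1.0 + np.exp(-y * (X @ w)))  # sigmoid of the margins
    curvature = s * (1.0 - s)               # per-sample second derivative
    return X.T @ (curvature * (X @ v)) / n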
MetaAlgorithm 7 Inexact Trust Region Method II
Input: initial point x0, step size r, number of inner iterations K, constants δ, ζ ∈ (0, 1), c1, c2, number of outer iterations T = Θ(log(1/δ)), sample size s (optional)
1: for t = 1 to T do
2:   xt ← INEXACTTRWEAK(x0, r, K, ζ);
3:   Option I: high accuracy case (small ε)
4:     Ht := ∇²F(xt);
5:   Option II: low accuracy case (moderate ε)
6:     Draw s samples indexed by H and let Ht := ∇²f(xt;H);
7:   Compute ṽt by solving (10) up to accuracy √(εL2) with probability 1 − δ/4;
8:   if ‖∇F(xt)‖ ≤ c1ε and ψt(ṽt) ≥ −√(c2εL2) then
9:     return x := xt;
10:  end if
11: end for
12: procedure INEXACTTRWEAK(x0, r, K, ζ)
13:   for k = 0 to K − 1 do
14:     Compute gk and Hk such that (9) holds with probability 1 − ζ/(4K);
15:     Compute h̃k by solving (8) up to accuracy ε^1.5/√L2 with probability 1 − ζ/(4K);
16:     xk+1 := xk + h̃k;
17:   end for
18:   Randomly select k̄ from {0, . . . , K − 1};
19:   return xk̄+1;
20: end procedure
Algorithm 8 STRfree
1: In the same setting as MetaAlgorithm 7,
2: construct gradient estimator gk by Estimator 4;
3: construct Hessian estimator Hk by
4:   Option I: Hk := ∇²F(xk);
5:   Option II: Draw s samples indexed by H and let Hk := ∇²f(xk;H);
6: use Lanczos method to solve QCQP subproblems.
Theorem 6.1. Consider Algorithm 8 for solving problem (1). Let ζ = 1/3, r = √(ε/L2), K = 4∆√L2/ε^1.5, T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. The number of iterations of the Lanczos method is set to Õ(1/(L2ε)^{0.25}). To find an O(ε)-SOSP w.p. at least 1 − δ, the runtime complexity is Õ(d·min{n/ε^1.75, 1/ε^2.75 + √n/ε²}·log(1/δ)).
To solve the QCQP more efficiently, we develop a faster solver based on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaXW (Allen-Zhu, 2018a); see details in Appendix E. By replacing the Lanczos method with this solver in STRfree, we further improve the runtime complexity to Õ(d·min{n/ε^1.5 + n^{3/4}/ε^1.75, 1/ε^2.5 + √n/ε²}). We call this new algorithm STRfree+, whose details can be found in Appendix E. Table 2 shows that, in terms of runtime complexity, both STRfree and STRfree+ outperform existing methods. See more comparison and discussion in Appendix E.4.
7 EXPERIMENTS
Here we compare the proposed STR with several state-of-the-art (stochastic) cubic regularized algorithms and trust region approaches, including the trust region (TR) algorithm (Conn et al., 2000), adaptive cubic regularization (ARC) (Cartis et al., 2011), sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a), stochastic variance-reduced cubic (SVRC) (Zhou et al., 2018c), Lite-SVRC (Zhou et al., 2018b), and SRVRC (Zhou & Gu, 2019). For STR, we estimate the gradient as in case (1), because it enjoys lower Hessian computational complexity than case (2), and for most problems computing Hessian matrices is much more time-consuming than computing gradients. For the subproblems in the compared methods, we use the Lanczos method (Gould et al., 1999; Kohler & Lucchi, 2017a) to solve the subproblem approximately in a Hessian-related Krylov subspace. We run simulations on seven datasets from LibSVM (a9a, ijcnn, codrna, phishing, w8a, epsilon, and mnist). We run our algorithm for 40 epochs and use the output as the optimal value f∗ for sub-optimality estimation; note that this output already has a very small gradient norm, as verified by Figures 2 and 4 in the appendix. For all considered algorithms, we set the initialization to zero and tune their hyper-parameters optimally. For more experimental settings, e.g., details of the testing datasets and algorithm parameters, please refer to Appendix F.
Two evaluation non-convex problems. Following (Kohler & Lucchi, 2017a; Zhou et al., 2018c), we evaluate all considered algorithms on two learning tasks: logistic regression with a non-convex regularizer and nonlinear least squares. Given n data points (xi, yi), where xi ∈ R^d is the sample vector and yi ∈ {−1, 1} is the label, logistic regression with a non-convex regularizer distinguishes the two classes by solving
min_w (1/n) Σ_{i=1}^{n} log(1 + exp(−yi wᵀxi)) + λR(w; α),
where the non-convex regularizer is R(w; α) = Σ_{i=1}^{d} αwi²/(1 + αwi²). The nonlinear least squares problem fits nonlinear data by solving
min_w (1/(2n)) Σ_{i=1}^{n} [yi − φ(wᵀxi)]² + λR(w; α).
For both problems, we set λ = 10⁻³ and α = 10 for all testing datasets.
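As a concrete reference for the first task, the following NumPy sketch evaluates the regularized logistic objective and its gradient with the stated parameters; it is a minimal illustration, assuming labels in {−1, 1}.

import numpy as np

def objective_and_grad(w, X, y, lam=1e-3, alpha=10.0):
    # Logistic loss with the non-convex regularizer
    # R(w; alpha) = sum_i alpha*w_i^2 / (1 + alpha*w_i^2).
    n = X.shape[0]
    margins = y * (X @ w)
    loss = np.mean(np.log1p(np.exp(-margins)))
    reg = np.sum(alpha * w**2 / (1.0 + alpha * w**2))
    s = 1.0 / (1.0 + np.exp(margins))            # = sigmoid(-margins)
    grad_loss = -(X.T @ (s * y)) / n
    grad_reg = 2.0 * alpha * w / (1.0 + alpha * w**2) ** 2
    return loss + lam * reg, grad_loss + lam * grad_reg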
Comparison of Hessian based algorithms. Figure 1 summarizes the results on the non-convex logistic regression problem. For each dataset, we report the function value gap vs. the overall running time, which reflects the overall computational complexity of an algorithm, and the function value gap vs. the Hessian sample complexity, which reveals the cost of Hessian computation. From Figure 1, one can observe that the proposed STR algorithm runs faster than the compared algorithms in terms of running time, showing the overall superiority of STR. Furthermore, STR also exhibits much sharper convergence curves in terms of the Hessian sample complexity, which is consistent with our theory: to achieve an ε-accurate local minimum, the Hessian sample complexity of STR is Õ(n^{1/2}/ε^1.5), which is superior to the complexities of the compared methods (see the comparison in Sec. 4). This also explains why our algorithm is faster in running time, since for most optimization problems the Hessian is much more computationally expensive than the gradient, so a lower Hessian sample complexity means faster overall convergence. Note that, as all compared methods need to compute the Hessian and gradient, their memory complexities are all O(d² + d). Figure 2 displays the results of the compared algorithms on the nonlinear least squares problem. STR shows very similar behavior to Figure 1: it achieves the fastest convergence in terms of both running time and Hessian sample complexity. On the codrna dataset we further plot the gradient norm versus running time and Hessian sample complexity. One can observe that the gradient norm of STR vanishes significantly faster than in the other algorithms, which means that STR finds a stationary point with high efficiency. See Figure 4 in Appendix F.2 for more gradient-norm comparisons. All these results confirm the superiority of the proposed STR.
Comparison of Hessian-free algorithms. Here we compare our proposed Hessian-free STR, namely STRfree, with other state-of-the-art Hessian-free algorithms on two high-dimensional datasets, epsilon and mnist (see details in Appendix F). We do not compare STRfree+ here, as it builds on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaXW (Allen-Zhu, 2018a), which require tuning many hyper-parameters. From the results in Figure 3, one can observe that, compared with the other algorithms, STRfree achieves the best convergence speed, demonstrating its efficiency in realistic applications. Besides, STRfree is much faster than the Hessian-based STR, since computing the full Hessian is much more expensive than computing Hessian-vector products.
8 CONCLUSION
We proposed two stochastic trust region variants. Under two settings (according to whether stochastic first- and second-order oracle complexities are treated equally), the proposed methods achieve state-of-the-art oracle complexities. We also proposed Hessian-free variants with the lowest runtime complexity. Experimental results corroborate our theoretical findings and the efficiency of the proposed algorithms.
A APPENDIX
In this appendix, Sec. B first provides the proofs for the results in the manuscript. Then, we analyze MetaAlgorithm 7 and STRfree in Sec. C and Sec. D, respectively. Next, in Sec. E, we develop a fast QCQP solver to further improve the computational complexity of STRfree. Finally, more experimental details and results are presented in Sec. F.
B DEFERRED PROOFS
B.1 PROOF OF THEOREM 3.1
Proof. For simplicity of notation, we denote
∇k := ∇F(xk) − gk and ∇²k := ∇²F(xk) − Hk.
From Assumption 2.3 we have
F(xk+1) ≤ F(xk) + ⟨∇F(xk), hk⟩ + ½⟨∇²F(xk)hk, hk⟩ + (L2/6)‖hk‖³
= F(xk) + ⟨∇k + gk, hk⟩ + ½⟨[∇²k + Hk]hk, hk⟩ + (L2/6)‖hk‖³.
Using the Cauchy–Schwarz inequality, we obtain
F(xk+1) ≤ F(xk) + ⟨gk, hk⟩ + ½⟨Hkhk, hk⟩ + (L2/6)‖hk‖³ + ‖∇k‖‖hk‖ + ½‖∇²k‖‖hk‖². (11)
The requirement (9) together with the trust-region radius ‖hk‖ ≤ r = √(ε/L2) allows us to bound
‖∇k‖‖hk‖ + ½‖∇²k‖‖hk‖² ≤ (1/3)·ε^1.5/√L2. (12)
The optimality of (8) indicates that there exists a dual variable λk ≥ 0 such that (Corollary 7.2.2 in (Conn et al., 2000))
First order: gk + Hkhk + (λkL2/2)hk = 0, (13)
Second order: Hk + (λkL2/2)·I ≽ 0, (14)
Complementary: λk · (‖hk‖ − r) = 0. (15)
Multiplying (13) by hk, we have
⟨gk + Hkhk + (λkL2/2)hk, hk⟩ = 0. (16)
Additionally, using (14) we have
⟨(Hk + (λkL2/2)I)hk, hk⟩ ≥ 0,
which together with (16) gives ⟨gk, hk⟩ ≤ 0. (17)
Moreover, the complementary property (15) indicates ‖hk‖ = √(ε/L2), since λk > 3√(ε/L2) > 0 before MetaAlgorithm 1 terminates. Plugging (12), (16), and (17) into (11) and using ‖hk‖ = √(ε/L2):
F(xk+1) ≤ F(xk) − λkε/4 + (1/2)·ε^1.5/√L2. (18)
Therefore, if we have λk > 3ε^{0.5}/√L2, then
F(xk+1) ≤ F(xk) − ε^1.5/(4√L2). (19)
Using Assumption 2.1, we find λk ≤ 3ε^{0.5}/√L2 in no more than 4√L2·(F(x0) − F(x∗))/ε^1.5 iterations. We now show that once λk ≤ 3ε^{0.5}/√L2, then xk+1 is already an O(ε)-SOSP: from (13), we have
‖gk + Hkhk‖ = (L2λk/2)·‖hk‖ ≤ 2ε. (20)
The assumptions ‖∇k‖ ≤ ε/6 and ‖∇²k‖ ≤ √(εL2)/3 together with the trust-region radius ‖hk‖ ≤ √(ε/L2) imply
‖∇F(xk) + ∇²F(xk)hk‖ ≤ ‖gk + Hkhk‖ + ‖∇k‖ + ‖∇²k · hk‖ ≤ 2.5ε. (21)
On the other hand, Assumption 2.3 gives
‖∇F(xk+1) − ∇F(xk) − ∇²F(xk)hk‖ ≤ (L2/2)‖hk‖² ≤ ε/2.
Combining these two results gives ‖∇F(xk+1)‖ ≤ 3ε. Besides, using Assumption 2.3, ‖∇²k‖ ≤ √(εL2)/3, and (14), we derive the Hessian lower bound
∇²F(xk+1) ≽ ∇²F(xk) − L2‖hk‖·I ≽ Hk − (√(εL2)/3)·I − L2‖hk‖·I ≽ −√(12εL2)·I.
Hence xk+1 is a 12ε-SOSP. Additionally, we have ‖hk‖ = r according to the complementary condition (15) for all but the last iteration.
B.2 PROOF OF LEMMA 4.1
Proof. Without loss of generality, we analyze the case 0 ≤ k < p2 for ease of notation. We first focus on Option II; the proof for Option I follows a similar argument.
Option II: Define for k = 0 and i ∈ [s′2]
B0i := ∇²fi(x0) − ∇²F(x0),
and define for k ≥ 1 and i ∈ [s2]
Bki := ∇²fi(xk) − ∇²fi(xk−1) − (∇²F(xk) − ∇²F(xk−1)).
{Bki} is a martingale difference sequence: for all k and i, E[Bki | xk] = 0. Besides, we use Assumption 2.2 for k = 0 to bound
‖B0i‖ ≤ ‖∇²fi(x0)‖ + ‖∇²F(x0)‖ ≤ 2L1, (22)
and use Assumption 2.3 for k ≥ 1 to bound
‖Bki‖ ≤ ‖∇²fi(xk) − ∇²fi(xk−1)‖ + ‖∇²F(xk) − ∇²F(xk−1)‖ ≤ 2√(εL2).
From the construction of Hk, we have
Hk − ∇²F(xk) = Σ_{i=1}^{s′2} B0i/s′2 + Σ_{j=1}^{k} Σ_{i=1}^{s2} Bji/s2.
Thus, using the matrix Azuma inequality in Theorem 7.1 of (Tropp, 2012) and k ≤ p2, we have
Pr{‖Hk − ∇²F(xk)‖ ≥ t} ≤ d·exp{−(t²/8)/(Σ_{i=1}^{s′2} 4L1²/s′2² + Σ_{j=1}^{k} Σ_{i=1}^{s2} 4εL2/s2²)} ≤ d·exp{−(t²/8)/(4L1²/s′2 + 4p2εL2/s2)}.
Consequently, we have Pr{‖Hk − ∇²F(xk)‖ ≤ √(εL2)} ≥ 1 − δ/K0
by taking t = √(εL2), s′2 = (16L1²/(εL2)) log(dK0/δ), s2 = (32L1/√(εL2)) log(dK0/δ), and p2 = L1/(2√(εL2)).
Option I: The proof is similar to that of Option II except that we replace B0i with the zero matrix. In this case, the matrix Azuma inequality implies
Pr{‖Hk − ∇²F(xk)‖ ≥ t} ≤ d·exp{−(t²/8)/(Σ_{j=1}^{k} Σ_{i=1}^{s2} 4εL2/s2²)} ≤ d·exp{−(t²/8)/(4p2εL2/s2)}.
Thus, by taking t = √(εL2), s2 = 32√n log(dK0/δ), and p2 = √n, we have the result.
Amortized Complexity: In Option I, the choice of parameters ensures that n ≤ p2 × s2, and in Option II that s′2 ≤ p2 × s2. Consequently, the amortized stochastic second-order oracle complexity is bounded from above by 2s2.
B.3 PROOF OF LEMMA 4.2
Without loss of generality, we analyze the case 0 ≤ k < p1 for ease of notation. Define for k ≥ 1 and i ∈ [s1]
aki := ∇fi(xk) − ∇fi(xk−1) − (∇F(xk) − ∇F(xk−1)).
{aki} is a martingale difference sequence: for all k and i, E[aki | xk] = 0. Besides, aki has bounded norm:
‖aki‖ ≤ ‖∇fi(xk) − ∇fi(xk−1)‖ + ‖∇F(xk) − ∇F(xk−1)‖ ≤ L1‖xk − xk−1‖ + L1‖xk − xk−1‖ ≤ 2L1√(ε/L2). (23)
From the construction of gk, we have
gk − ∇F(xk) = Σ_{j=1}^{k} Σ_{i=1}^{s1} aji/s1.
Recall Azuma's inequality. Using k ≤ p1, we have
Pr{‖gk − ∇F(xk)‖ ≥ t} ≤ exp{−(t²/8)/(Σ_{j=1}^{k} Σ_{i=1}^{s1} 4εL1²/(L2s1²))} ≤ exp{−(t²/8)/(4εL1²p1/(s1L2))}.
Take t = ε/6 and denote c = 1152. To ensure that
Pr{‖gk − ∇F(xk)‖ ≥ ε/6} ≤ δ/K0,
we need (cL1²/(εL2)) log(K0/δ) ≤ s1/p1. The best amortized stochastic first-order oracle complexity is obtained by solving the following two-dimensional program:
min_{p1≥1, s1≥1} (n + s1(p1 − 1))/p1  s.t.  (cL1²/(εL2)) log(K0/δ) ≤ s1/p1,
which has the solution s1 = min{n, √(n·cL1² log(K0/δ)/(εL2))} and p1 = max{1, √(n·εL2/(cL1² log(K0/δ)))}. Note that when we take s1 = n, we directly compute gk = ∇F(xk) without sampling. The amortized stochastic first-order oracle complexity is obtained by plugging in the choice of s1 and p1, which completes the proof.
B.4 PROOF OF LEMMA 5.1
Without loss of generality, we analyze the case 0 ≤ k < p1 for ease of notation. Define for k ≥ 1 and i ∈ [s1]
bki := ∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1) − [∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)].
{bki} is a martingale difference sequence: for all k and i, E[bki | xk] = 0. Besides, bki has bounded norm:
‖bki‖ ≤ ‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖ + ‖∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)‖.
We can bound the first term as follows:
‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖ = ‖∫₀¹ [∇²fi(xk−1 + t(xk − xk−1)) − ∇²fi(x̃)](xk − xk−1) dt‖
≤ ∫₀¹ L2‖t·xk + (1 − t)·xk−1 − x̃‖ dt · ‖xk − xk−1‖ ≤ ∫₀¹ (t‖xk − x̃‖ + (1 − t)‖xk−1 − x̃‖) dt · L2r ≤ L2kr²,
where the first inequality follows from Assumption 2.3 and the last inequality holds because ‖xk − x̃‖ ≤ kr and ‖xk−1 − x̃‖ ≤ kr, where r is the trust region radius. Similarly, we have ‖∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)‖ ≤ L2kr². Thus, we bound
‖bki‖ ≤ 2L2kr² ≤ 2p1ε.
From the construction of gk, we have
gk − ∇F(xk) = Σ_{j=1}^{k} Σ_{i=1}^{s1} bji/s1.
We use k ≤ p1 and Azuma's inequality to bound
Pr{‖gk − ∇F(xk)‖ ≥ t} ≤ exp{−(t²/8)/(Σ_{j=1}^{k} Σ_{i=1}^{s1} 4p1²ε²/s1²)} ≤ exp{−(t²/8)/(4ε²p1³/s1)}.
Thus, by taking t = ε/6 and c = 1152, we need s1/p1³ ≥ c log(K0/δ). Further, we want s1·p1 ≈ O(n), and hence we take p1 = n^{1/4} and s1 = c·n^{3/4} log(K0/δ). The amortized stochastic first-order oracle complexity is bounded by 2s1.
C ANALYSIS OF METAALGORITHM 7
We first show that INEXACTTRWEAK finds an O(ε)-SOSP in O(1/ε^1.5) iterations with probability at least 2/3, as stated in the following lemma. Lemma C.1. Consider problem (1) under Assumptions 2.1-2.3. Suppose that the differential estimators gk and Hk satisfy Eqn. (9) with probability at least (1 − ζ/(4K)). Besides, suppose that h̃k is an approximate solution to (8) such that, w.p. (1 − ζ/(4K)),
⟨gk, h̃k⟩ + ½⟨Hkh̃k, h̃k⟩ ≤ ⟨gk, hk⟩ + ½⟨Hkhk, hk⟩ + ε^1.5/√L2, (24)
where hk is a global solution to (8). By setting ζ = 1/3, r = √(ε/L2), and K = 4∆√L2/ε^1.5, INEXACTTRWEAK outputs a 500ε-SOSP w.p. at least 2/3.
Proof. Combining (11) and (24), we have w.p. (1 − ζ/(4K)),
F(xk+1) ≤ F(xk) + ⟨gk, hk⟩ + ½⟨Hkhk, hk⟩ + ε^1.5/√L2 + (L2/6)‖h̃k‖³ + ‖∇k‖‖h̃k‖ + ½‖∇²k‖‖h̃k‖², (25)
where hk is a global solution to the QCQP (8) and h̃k is an approximate solution satisfying (24). We let λk denote the dual variable corresponding to the global solution hk as defined in Lemma 2.1. We note that hk and λk are used only in our analysis: the INEXACTTRWEAK procedure only requires the approximate solution h̃k, without knowledge of hk or λk.
By the assumption that (9) holds with probability (1 − ζ/(4K)) and the fact that ‖h̃k‖ ≤ r = √(ε/L2), we have w.p. (1 − ζ/(4K)),
(L2/6)‖h̃k‖³ + ‖∇k‖‖h̃k‖ + ½‖∇²k‖‖h̃k‖² ≤ ε^1.5/(2√L2). (26)
Plugging (16), (17), and (26) into (25) and applying the union bound, we have w.p. at least (1 − ζ/(2K)),
F(xk+1) ≤ F(xk) − (L2λk‖hk‖²)/4 + (3ε^1.5)/(2√L2) = F(xk) − (L2λkr²)/4 + (3ε^1.5)/(2√L2), (27)
where the second equality follows from (15):
0 = λk(‖hk‖ − r) = λk(‖hk‖ − r)(‖hk‖ + r) = λk(‖hk‖² − r²). (28)
Summing inequality (27) from k = 0 to K − 1 and applying the union bound, we have w.p. at least (1 − ζ/2),
(1/K) Σ_{k=0}^{K−1} λk ≤ 4(F(x0) − F(xK))/(L2r²K) + 6ε^1.5/(L2^{1.5}r²) ≤ 4∆/(εK) + 6√(ε/L2), (29)
where the second inequality follows from Assumption 2.1 and our choice of the trust-region radius.
By sampling k̄ uniformly from {0, . . . , K − 1}, we obtain
E[λk̄] = (1/K) Σ_{k=0}^{K−1} λk, (30)
where the expectation is taken over the randomness of k̄. Combining (29) and (30) and taking K = 4∆√L2/ε^1.5, we have w.p. at least (1 − ζ/2)
E[λk̄] ≤ 7√(ε/L2). (31)
Since λk is always non-negative, by Markov's inequality and the union bound, with probability at least 1 − ζ, we have
λk̄ ≤ 14√ε/(ζ√L2). (32)
By taking ζ = 1/3, we have w.p. at least 2/3, λk̄ ≤ 42√(ε/L2). The rest of the proof is similar to Theorem 3.1, and we have the result.
The following theorem shows that MetaAlgorithm 7 finds an O(ε)-SOSP w.p. (1 − δ) after running INEXACTTRWEAK for Θ(log(1/δ)) times.
Theorem C.1 (Iteration Complexity of MetaAlgorithm 7). In the same setting as Lemma C.1, let T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). Then MetaAlgorithm 7 finds a 600ε-SOSP with probability at least (1 − δ).
Proof. By Lemma C.1 and our choice of T, with probability (1 − δ/2), at least one of the xt is a 500ε-SOSP. On the other hand, since ψt(ṽt) ≤ ψt(vt) + √(εL2) with probability 1 − δ/4, if ψt(ṽt) ≥ −√(c2εL2), then, w.p. 1 − δ/4,
ψt(vt) ≥ ψt(ṽt) − √(εL2) ≥ −√(c2εL2) − √(εL2) ≥ −√(550εL2), (33)
where the last inequality follows from our choice of c2.
Option I: Since Ht = ∇²F(xt) is the full Hessian, ψt(vt) is the smallest eigenvalue of ∇²F(xt). Applying the union bound, we conclude that MetaAlgorithm 7 outputs a 600ε-SOSP w.p. (1 − δ).
Option II: Let Bi := ∇²fi(xt) − ∇²F(xt) for i ∈ H; then
Ht − ∇²F(xt) = (1/s) Σ_{i=1}^{s} Bi. (34)
By Assumption 2.2, we have
‖Bi‖ ≤ ‖∇²fi(xt)‖ + ‖∇²F(xt)‖ ≤ 2L1. (35)
Applying the matrix Azuma inequality in Theorem 7.1 of Tropp (2012) leads to
Pr{‖Ht − ∇²F(xt)‖ ≥ √(εL2)} ≤ d·exp(−εL2s/(32L1²)). (36)
By taking s = (32L1²/(εL2)) log(4d/δ) and applying the union bound, we have with probability 1 − δ,
∇²F(xt) ≽ Ht − √(εL2)·I ≽ (ψt(vt) − √(εL2))·I ≽ −√(600εL2)·I, (37)
where the last inequality follows from (33). This completes the proof.
D PROOF OF THEOREM 6.1
Proof. We first analyze the computational cost of the Lanczos method. By Corollary 2 in (Carmon & Duchi, 2018), for any desired accuracy ε̃, the Lanczos method achieves this accuracy in O((r/√ε̃)·log(r√d/(ε̃p))) Lanczos iterations (up to problem-dependent constants) w.p. at least (1 − p). Without loss of generality, we assume that the number of Lanczos iterations is strictly smaller than the dimension d; otherwise the QCQP subproblem can be solved exactly. We note that each Lanczos iteration involves the computation of one matrix-vector product. Therefore, to satisfy condition (24) in Lemma C.1, one needs to evaluate Õ(1/(L2ε)^{0.25}) Hessian-vector products of the form Hkv. Similarly, to solve (10) up to accuracy √(εL2) w.h.p., one needs to evaluate Õ(1/(L2ε)^{0.25}) Hessian-vector products of the form Htv. In MetaAlgorithm 7, to verify whether the candidate solution vt is indeed an O(ε)-SOSP, one needs at most O(n) stochastic gradient evaluations and O(min{n, log(4d/δ)·L1²/(L2ε)}/(L2ε)^{0.25}) stochastic Hessian-vector product evaluations, where the latter follows from the proof of Theorem C.1. We proceed to analyze the computational complexity of the INEXACTTRWEAK procedure. Recall that the iteration complexity of MetaAlgorithm 7 is O(log(1/δ)/ε^1.5). Following Lemma 4.2 and Corollary 4.1, the stochastic first-order oracle complexity is Õ(min{n/ε^1.5, √n/ε²}·log(1/δ)). Following the proof of Lemma 4.1 and Corollary 4.1, when p2 = 1, the overall stochastic Hessian sample complexity is Õ(min{n/ε^1.5, 1/ε^2.5}·log(1/δ)). Since it takes Õ(1/ε^{0.25}) Lanczos iterations to meet condition (24) as stated above, the overall stochastic Hessian-vector product oracle complexity is Õ(min{n/ε^1.75, 1/ε^2.75}·log(1/δ)). Combining the stochastic first-order and Hessian-vector product complexities, the overall runtime is Õ(d·min{n/ε^1.75, 1/ε^2.75 + √n/ε²}·log(1/δ)).
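For reference, a bare-bones Lanczos tridiagonalization using only Hessian-vector products is sketched below; it omits re-orthogonalization and breakdown handling, and the eigenpairs of the returned tridiagonal matrix give the Ritz approximations used by the subproblem solver.

import numpy as np

def lanczos_tridiag(hvp, d, m, rng=None):
    # m Lanczos steps with Hessian-vector products hvp(v) only.
    rng = np.random.default_rng() if rng is None else rng
    Q = np.zeros((d, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(d)
    for j in range(m):
        Q[:, j] = q
        w = hvp(q)
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)  # assumes no breakdown (beta > 0)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, Q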
E A FASTER HESSIAN-VECTOR BASED QCQP SOLVER
We recall from the previous section that, to approximately solve a quadratic subproblem in STRfree, the Lanczos method requires Õ(min{n/ε^{0.25}, 1/ε^{1.25}}) stochastic Hessian-vector product evaluations. In this section, we propose a faster QCQP solver with an Õ(min{n + n^{3/4}/ε^{0.25}, 1/ε}) complexity. Replacing the Lanczos method with this QCQP solver in STRfree results in a faster Hessian-free method, which we refer to as STRfree+.
E.1 CONVEX REFORMULATION OF QCQP
To begin with, we present a known result that is key to achieving a faster algorithm than the Lanczos method. We summarize this result in Lemma E.1, which shows that the trust-region subproblem is equivalent to a convex QCQP. Lemma E.1. (Convex Reformulation of QCQP (Flippo & Jansen, 1996; Wang & Xia, 2017)) Denote by λmin the smallest eigenvalue of Hk and let umin be a corresponding eigenvector. W.l.o.g., we assume 〈gk, umin〉 ≤ 0. Let µ = min{λmin, 0}. Then the QCQP (8) is equivalent to the convex problem
min_{h∈R^d, ‖h‖≤r} qk(h) = ⟨gk, h⟩ + ½⟨(Hk − µI)h, h⟩ + ½µr² (38)
in the sense that (8) and (38) have the same minimum function value. Moreover, when λmin < 0, for any optimal solution hkc of (38),
hkc + [(√(⟨hkc, umin⟩² − ‖umin‖²(‖hkc‖² − r²)) − ⟨hkc, umin⟩)/‖umin‖²]·umin (39)
is a global minimizer of the original QCQP (8).
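The lift-back step (39) is a one-line computation once the convex problem is solved; a minimal NumPy sketch (assuming ‖hc‖ ≤ r so the discriminant is non-negative):

import numpy as np

def lift_to_global(h_c, u_min, r):
    # Map a solution h_c of the convex reformulation (38) back to a global
    # minimizer of the trust-region problem via Eqn. (39).
    a = u_min @ u_min
    b = h_c @ u_min
    t = (np.sqrt(b**2 - a * (h_c @ h_c - r**2)) - b) / a
    return h_c + t * u_min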
To perform the above reformulation, one needs to compute the exact eigenpair (λmin, umin). Nevertheless, as we shall see, it suffices to compute an approximate eigenpair (λ̃, ũ) such that
λmin ≤ λ̃ = ũᵀHkũ ≤ λmin + ε̃, (40)
where ε̃ is a target accuracy to be determined later. We note that ε̃ ≤ 2L2 w.l.o.g. since ‖Hk‖ ≤ L2. With this approximate eigenpair, it remains to solve the following convex problem
min_{h∈R^d, ‖h‖≤r} q̃k(h) = ⟨gk, h⟩ + ½⟨(Hk − µ̃I)h, h⟩ + ½µ̃r², (41)
where µ̃ = min{0, λ̃ − ε̃}. One can check that problem (41) approximates (38) well. Corollary E.1. Let qk∗ and q̃k∗ be the minimum function values of (38) and (41), respectively. Assume λmin ≤ λ̃ = ũᵀHkũ ≤ λmin + ε̃. Then
|qk∗ − q̃k∗| ≤ ε̃r². (42)
We note that the above convex reformulation approach divides an indefinite QCQP into two subproblems: (i) computation of an approximate eigenpair (λ̃, ũ); (ii) solving the convex problem (41). As we shall see, by exploiting the finite-sum structure of the Hessian Hk, these two subproblems can be efficiently solved. We treat these two subproblems in the following two subsections, respectively.
E.2 FINDING THE SMALLEST EIGENVECTOR
To find a unit vector satisfying requirement (40), we resort to the AppxPCA method (Allen-Zhu & Li, 2016), which first finds an approximate eigenvalue λ = λmin − ε̃ via binary search and then applies the power method to the positive definite matrix (Hk − λI)⁻¹ for a logarithmic number of iterations. Computing (Hk − λI)⁻¹v for any vector v is equivalent to solving the ε̃-strongly convex problem (Allen-Zhu & Li, 2018)
min_u φk(u) := ½uᵀ(Hk − λI)u − ⟨v, u⟩. (43)
We note that Hk = (1/|S|) Σ_{i∈S} ∇²fi(xk). Specifically, in STRfree, either |S| = n (i.e., Hk is the full Hessian) or |S| = Õ(L1²/(L2ε)) by Lemma 4.1. Therefore, φk(·) can be expressed as a sum of non-convex functions
φk(u) = (1/|S|) Σ_{i∈S} φki(u) = (1/|S|) Σ_{i∈S} (½uᵀ(∇²fi(xk) − λI)u − ⟨v, u⟩). (44)
Observing that each φki is non-convex and has a (4L2)-Lipschitz gradient, we can use KatyushaXS (Allen-Zhu, 2018a) to solve problem (43) in Õ(|S| + |S|^{3/4}√(L2/ε̃)) stochastic Hessian-vector product (i.e., ∇²fi(xk)u) evaluations. The following result is taken from (Agarwal et al., 2017, Section G.3), which gives the overall computational complexity of AppxPCA.
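KatyushaXS is the solver the analysis relies on; purely to illustrate the linear-system view of (43), here is a plain conjugate-gradient stand-in that uses only Hessian-vector products and assumes Hk − λI is positive definite.

import numpy as np

def cg_shifted_solve(hvp, lam, v, tol=1e-8, max_iters=500):
    # Solve (H - lam*I) u = v, i.e. minimize phi_k in (43), by CG.
    u = np.zeros_like(v)
    res = v.copy()        # residual of the linear system
    p = res.copy()
    rs = res @ res
    for _ in range(max_iters):
        Ap = hvp(p) - lam * p
        step = rs / (p @ Ap)
        u += step * p
        res -= step * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return u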
Algorithm 9 Fast QCQP Solver
Input: Hk, gk, r, ε̃, ε̃1
1: Use AppxPCA to find (λ̃, ũ) satisfying (40), in which the matrix inverse is computed by KatyushaXS;
2: Use KatyushaXW to solve (41) up to accuracy ε̃1, i.e., find a vector h̃ such that q̃k(h̃) − q̃k∗ ≤ ε̃1 with high probability;
3: Return h̃ + [(√(⟨h̃, ũ⟩² − ‖ũ‖²(‖h̃‖² − r²)) − ⟨h̃, ũ⟩)/‖ũ‖²]·ũ.
Algorithm 10 STRfree+
1: In the same setting as MetaAlgorithm 7,
2: construct gradient estimator gk by Estimator 4;
3: construct Hessian estimator Hk by
4:   Option I: Hk := ∇²F(xk);
5:   Option II: Draw s samples indexed by H and let Hk := ∇²f(xk;H);
6: use Algorithm 9 to solve QCQP subproblems.
Lemma E.2. Let Hk = (1/|S|) Σ_{i∈S} ∇²fi(xk) ∈ R^{d×d}, where ‖∇²fi(xk)‖ ≤ L2. With probability at least 1 − p, AppxPCA produces a unit vector u satisfying uᵀHku ≤ λmin + ε̃. The total stochastic Hessian-vector product oracle complexity is Õ(|S| + |S|^{3/4}√(L2/ε̃)).
E.3 SOLVING THE CONVEX QCQP
In what follows, we show that the convex problem (41) can be solved efficiently. We first observe that problem (41) has a finite-sum structure and can be rewritten as an unconstrained problem of the form
min_{h∈R^d} (1/|S|) Σ_{i∈S} q̃ki(h) + Ψ(h) = (1/|S|) Σ_{i∈S} (⟨gk, h⟩ + ½⟨(∇²fi(xk) − µ̃I)h, h⟩ + ½µ̃r²) + Ψ(h), (45)
where Ψ(h) = 0 if ‖h‖ ≤ r and Ψ(h) = +∞ otherwise. Each q̃ki(h) in (45) has a (4L2)-Lipschitz continuous gradient since ‖∇²fi(xk)‖ ≤ L2 and ε̃ ≤ 2L2. Therefore, we can use KatyushaXW (Allen-Zhu, 2018a) to solve (45). By (Allen-Zhu, 2018a, Theorem 4.6), KatyushaXW finds a point h such that E[q̃k(h) − q̃k∗] ≤ ε̃1 using Õ(|S| + |S|^{3/4}√L2·r/√ε̃1) stochastic Hessian-vector products, where ε̃1 is the target accuracy to be determined later.
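For intuition, a deterministic projected-gradient stand-in for the constrained convex problem (41) is sketched below; the analysis itself uses KatyushaXW on the finite-sum formulation (45).

import numpy as np

def projected_gradient_qcqp(g, H, mu, r, num_iters=500):
    # Minimize <g,h> + 0.5 <(H - mu*I)h, h> over ||h|| <= r; convex since
    # mu = min(0, lambda_tilde - eps_tilde) lower-bounds the spectrum.
    d = g.shape[0]
    A = H - mu * np.eye(d)
    L = np.linalg.norm(A, 2)  # gradient Lipschitz constant
    h = np.zeros(d)
    for _ in range(num_iters):
        h = h - (g + A @ h) / L
        nrm = np.linalg.norm(h)
        if nrm > r:
            h *= r / nrm      # project back onto the trust-region ball
    return h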
E.4 PUTTING IT ALL TOGETHER
The complete procedure of our fast QCQP solver is summarized in Algorithm 9. Combining all the above results and setting r = √(ε/L2), ε̃ = √(εL2)/2, and ε̃1 = ε̃r², one can find an approximate solution to QCQP (8) satisfying requirement (24) in Õ(|S| + |S|^{3/4}L2^{0.25}/ε^{0.25}) stochastic Hessian-vector product evaluations. By replacing the Lanczos method with this solver in STRfree, we derive a new Hessian-free method called STRfree+, which is summarized in Algorithm 10. The following theorem establishes the overall runtime complexity of STRfree+ for finding an ε-SOSP.
Theorem E.1. Consider Algorithm 10 for solving problem (1). Let ζ = 1/3, r = √(ε/L2), K = 4∆√L2/ε^1.5, T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. Besides, let ε̃ = √(εL2)/2 and ε̃1 = ε̃r² in Algorithm 9. To find an O(ε)-SOSP w.p. at least 1 − δ, the runtime complexity is Õ(d·min{n/ε^1.5 + n^{3/4}/ε^1.75, 1/ε^2.5 + √n/ε²}·log(1/δ)).
Proof. The proof directly follows from that in Sec. D.
We compare the runtime complexities of STRfree and STRfree+ with existing Hessian-free methods in Table 2. One can see that STRfree strictly outperforms Hessian-free Cubic. Besides, STRfree outperforms Fast-Cubic if n ≥ Ω(1/ε^{4/3}), which is a mild condition for large-scale problems in the moderate accuracy case. STRfree+ strictly outperforms both Hessian-free Cubic and Fast-Cubic. We note that the runtime analyses in (Tripuraneni et al., 2018; Zhou & Gu, 2019) rely on an additional assumption which states that for all x, with probability 1,
‖∇fi(x) − ∇F(x)‖ ≤ σ. (46)
Under this additional assumption, one can use the same argument as in Sections B.2 and B.3 to prove that STRfree achieves a runtime complexity of Õ(d·min{n/ε^1.75, √n/ε² + 1/ε^2.75, 1/ε³})¹. Similarly, the runtime complexity of STRfree+ would be Õ(d·min{n/ε^1.5 + n^{3/4}/ε^1.75, 1/ε^2.5 + √n/ε², 1/ε³}). In this sense, both STRfree and STRfree+ outperform Stochastic Cubic and SRVRCfree.
F ADDITIONAL EXPERIMENTAL RESULTS
F.1 MORE EXPERIMENTAL DETAILS
Descriptions of Testing Datasets. We briefly introduce the seven testing datasets in the manuscript. Among them, six datasets are provided by the LibSVM website2, including (a9a, ijcnn, codrna, phishing, w8a and epsilon). The detailed information is summarized in Table 3. We can observe that these datasets are different from each other in feature dimension, training samples, etc.
¹To obtain this complexity, one needs to replace the full gradient in line 2 of Estimator 4 (and line 6 of MetaAlgorithm 7) with a mini-batch stochastic gradient when n ≥ Ω(1/ε²).
²https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Experimental Settings. In the manuscript, following SVRC (Zhou et al., 2018c) and Lite-SVRC (Zhou et al., 2018b), we select hyper-parameters from a set: s1 from {0.2n, 0.6n, n}, s2 from {0.01n, 0.1n, 0.2n}, and p1 and p2 from {0.01n^{0.5}, 0.05n^{0.5}, 0.1n^{0.5}}. For the Hessian estimate at the beginning of every p2 iterations, we use the full Hessian. Similarly, for the gradient estimate at the beginning of every p1 iterations, we adopt the full gradient.
Memory Analysis. SVRC, Lite-SVRC, and our method need to store the previous and current gradients and Hessians, so their memory complexity is 2(d² + d). TR, CR (ARC), and SCR need to store only the current gradient and Hessian and thus have complexity d² + d. Hence the memory costs are of the same order, but our method is much faster than TR, CR, and SCR, as validated by both theory and experiments.
F.2 MORE EXPERIMENTS
Here we give more experimental results on the gradient norm vs. the algorithm running time and the Hessian sample complexity. Due to the space limit, the manuscript only provides gradient-norm results on the codrna dataset; here we provide results on the a9a and ijcnn datasets in Figure 4. One can observe that, on both the logistic regression with a non-convex regularizer and the nonlinear least squares problems, the proposed algorithm always shows sharper convergence behavior in terms of both running time and Hessian sample complexity. These observations are consistent with the results in Figure 2 of the manuscript. All these results demonstrate the high efficiency of our proposed algorithm and also confirm our theoretical implications. | 1. What are the key contributions and novel aspects introduced by the paper in trust region algorithms?
2. What are the strengths of the proposed algorithms, particularly in terms of their oracle complexities?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Are there any concerns or limitations regarding the proposed methods, their implementations, or their applicability to real-world scenarios? | Review | Review
This paper proposes new stochastic trust region algorithms for non-convex finite-sum minimization problems. The first algorithm STR1 has lower second order oracle complexity, while STR2 has lower first order + second order oracle complexity. The authors also give a Hessian-free implementation of stochastic trust region algorithm. Technically, the authors first analyze trust region methods with inexact gradient and Hessian estimation, and then implement efficient gradient and Hessian estimators.
Overall this paper is well-written and easy to follow. I would recommend acceptance. |
ICLR | Title
A Stochastic Trust Region Method for Non-convex Minimization
Abstract
We target the problem of finding a local minimum in non-convex finite-sum minimization. Towards this goal, we first prove that the trust region method with inexact gradient and Hessian estimation can achieve a convergence rate of order O(1/k) as long as those differential estimations are sufficiently accurate. Combining such result with a novel Hessian estimator, we propose a sample-efficient stochastic trust region (STR) algorithm which finds an ( , √ )-approximate local minimum within Õ( √ n/ ) stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor ofO(n). Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify theoretical conclusions and the efficiency of the proposed algorithms.
N/A
√ )-approximate local
minimum within Õ( √ n/ 1.5) stochastic Hessian oracle queries. This improves the state-of-the-art result by a factor ofO(n1/6). Finally, we also develop Hessian-free STR algorithms which achieve the lowest runtime complexity. Experiments verify theoretical conclusions and the efficiency of the proposed algorithms.
1 INTRODUCTION
We consider the following finite-sum non-convex minimization problem
min_{x∈R^d} F(x) = (1/n) Σ_{i=1}^{n} fi(x), (1)
where each (non-convex) component function fi : R^d → R is assumed to have an L1-Lipschitz continuous gradient and an L2-Lipschitz continuous Hessian. Since first-order stationary points could be saddle points with inferior generalization performance (Dauphin et al., 2014), in this work we are particularly interested in computing (ε, √ε)-approximate second-order stationary points (ε-SOSP):
‖∇F(xε)‖ ≤ ε and ∇²F(xε) ≽ −√(εL2)·I. (2)
To find a local minimum of problem (1), the cubic regularization approach (Nesterov & Polyak, 2006) and the trust region algorithm (Conn et al., 2000; Curtis et al., 2017) are two classical methods. Specifically, cubic regularization forms a cubic surrogate function for the objective F(x) by adding a third-order regularization term to the second-order Taylor expansion, and minimizes it iteratively. Such a method is proved to achieve an O(1/k^{2/3}) global convergence rate and thus needs O(n/ε^{1.5}) stochastic first- and second-order oracle queries, namely evaluations of stochastic gradients and Hessians, to reach a point satisfying (2). On the other hand, trust region algorithms estimate the objective with its second-order Taylor expansion but minimize it only within a local region. Recently, Curtis et al. (2017) proposed a trust region variant that achieves the same convergence rate as the cubic regularization approach. But both methods require computing full gradients and Hessians of F(x) and thus suffer from high computational cost in large-scale problems.
To avoid costly exact differential evaluations, many works explore the finite-sum structure of problem (1) and develop stochastic cubic regularization approaches. Both Kohler & Lucchi (2017b) and Xu et al. (2017) propose to directly subsample the gradient and Hessian in the cubic surrogate function, and achieve Õ(1/ε^{3.5}) and Õ(1/ε^{2.5}) stochastic first- and second-order oracle complexities, respectively. By plugging a stochastic variance-reduced estimator (Johnson & Zhang, 2013) and the Hessian tracking technique (Gower et al., 2018) into the gradient and Hessian estimation, the approach in (Zhou et al., 2018a) improves both the stochastic first- and second-order oracle complexities to Õ(n^{0.8}/ε^{1.5}). Recently, Zhang et al. (2018) and Zhou et al. (2018b) develop more efficient stochastic cubic regularization variants, which further reduce the stochastic second-order oracle complexity to Õ(n^{2/3}/ε^{1.5}) at the cost of increasing the stochastic first-order oracle complexity to Õ(n^{2/3}/ε^{2.5}).
Contributions: In this paper we propose and exploit a formulation in which we make explicit control of the step size in the trust region method. This idea is leveraged to develop two efficient stochastic trust region (STR) approaches. We tailor our methods to achieve state-of-the-art oracle complexities under the following two measurements: (i) the stochastic second-order oracle complexity is prioritized; (ii) the stochastic first- and second-order oracle complexities are treated equally. Specifically, in Setting (i), our method STR1 employs a newly proposed estimator to approximate the Hessian and adopts the estimator in (Fang et al., 2018) for gradient approximation. Our novel Hessian estimator maintains an accurate second-order differential approximation with lower amortized oracle complexity. In this way, STR1 achieves Õ(min{1/ε², √n/ε^{1.5}}) stochastic second-order oracle complexity. This is lower than existing results for solving problem (1). In Setting (ii), our method STR2 substitutes the gradient estimator in STR1 with one that integrates stochastic gradients and Hessians together to maintain an accurate gradient approximation. As a result, STR2 achieves convergence within Õ(n^{3/4}/ε^{1.5}) overall stochastic first- and second-order oracle queries. Finally, based on STR, we further develop Hessian-free STR algorithms, namely STRfree and STRfree+, which outperform existing Hessian-free algorithms theoretically.
1.1 RELATED WORK
Computing a local minimum of a non-convex optimization problem has gained considerable attention in recent years. Both cubic regularization (CR) approaches (Nesterov & Polyak, 2006) and trust region (TR) algorithms (Conn et al., 2000; Curtis et al., 2017) can escape saddle points and find a local minimum by moving the iterate along a direction related to the eigenvector of the Hessian with the most negative eigenvalue. As CR heavily depends on the regularization parameter of the cubic term, Cartis et al. (2011) propose an adaptive cubic regularization (ARC) approach to boost efficiency by adaptively tuning the regularization parameter according to the current objective decrease. Noting the high cost of full gradient and Hessian computation in ARC, sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a) was developed to sample partial data points for estimating the full gradient and Hessian. Recently, by exploring the finite-sum structure of the target problem, many works incorporate the variance reduction technique (Johnson & Zhang, 2013) into CR and propose stochastic variance-reduced methods. For example, Zhou et al. (2018c) propose stochastic variance-reduced cubic (SVRC), which integrates the stochastic variance-reduced gradient estimator (Johnson & Zhang, 2013) and the Hessian tracking technique (Gower et al., 2018) with CR; this method is proved to be at least O(n^{1/5}) faster than CR and TR. Then Zhou et al. (2018b) use an adaptive gradient batch size and a constant Hessian batch size, and develop Lite-SVRC to further reduce the stochastic second-order oracle complexity Õ(n^{4/5}/ε^{1.5}) of SVRC to Õ(n^{2/3}/ε^{1.5}) at the cost of higher gradient computation. Similarly, besides tuning the gradient batch size, Zhang et al. (2018) adaptively sample a certain number of data points to estimate the Hessian and prove that the proposed method has the same stochastic second-order oracle complexity as Lite-SVRC.
2 PRELIMINARY
Notation. We use ‖v‖ to denote the Euclidean norm of a vector v and ‖A‖ to denote the spectral norm of a matrix A. Let S be a set of component indices. We define the minibatch average of component functions by f(x;S) := (1/|S|) Σ_{i∈S} fi(x). We now specify the assumptions necessary for the analysis of our methods.
MetaAlgorithm 1 Inexact Trust Region Method
Input: initial point x0, step size r, number of iterations K, construction of differential estimators gk and Hk
1: for k = 0 to K − 1 do
2:   Compute hk and λk by solving (8);
3:   xk+1 := xk + hk;
4:   if λk ≤ 3√(ε/L2) then
5:     Output x = xk+1;
6:   end if
7: end for
Assumption 2.1. F is bounded from below and its global optimum is achieved at x∗. We denote ∆ = F(x0) − F(x∗).
Assumption 2.2. Each fi : R^d → R has an L1-Lipschitz continuous gradient: for any x, y ∈ R^d,
‖∇fi(x) − ∇fi(y)‖ ≤ L1‖x − y‖. (3)
Assumption 2.3. Each fi : R^d → R has an L2-Lipschitz continuous Hessian: for any x, y ∈ R^d,
‖∇²fi(x) − ∇²fi(y)‖ ≤ L2‖x − y‖. (4)
2.1 TRUST REGION METHOD
Here we briefly introduce the trust region method (Conn et al., 2000). In each step, it first solves the Quadratically Constrained Quadratic Program (QCQP)
hk := argmin_{h∈R^d, ‖h‖≤r} ⟨∇F(xk), h⟩ + ½⟨∇²F(xk)h, h⟩, (5)
where r is the trust-region radius. Then it updates the variable as
xk+1 := xk + hk. (6)
Since ∇²F(xk) may be indefinite, the trust-region subproblem (5) is non-convex, but its global optimizer can be characterized by the following lemma (Corollary 7.2.2 in (Conn et al., 2000)). Lemma 2.1. Any global minimizer of problem (5) satisfies the equation
(∇²F(xk) + λI)hk = −∇F(xk), (7)
where the dual variable λ ≥ 0 satisfies ∇²F(xk) + λI ≽ 0 and λ(‖hk‖ − r) = 0.
In particular, a standard QCQP solver returns both the minimizer hk and the corresponding dual variable λ of subproblem (5). In the following section, we first prove that the deterministic trust-region update (5)-(6) converges at the rate of O(1/k^{2/3}), much sharper than the existing provable convergence rate O(1/√k) (Conn et al., 2000), and then develop a more efficient stochastic trust-region approach.
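For small d, Lemma 2.1 yields a direct solver: diagonalize the Hessian and search for the dual variable λ that satisfies the conditions. The NumPy sketch below uses bisection on λ and ignores the degenerate "hard case" where the gradient is orthogonal to the bottom eigenspace; it is an illustration, not the Lanczos-based solver used later.

import numpy as np

def solve_trust_region(grad, hess, r, tol=1e-10):
    # Solve (5) via Lemma 2.1: find lambda >= max(0, -lambda_min) such
    # that h(lambda) = -(hess + lambda*I)^{-1} grad has norm <= r, with
    # equality whenever lambda > 0 (complementary slackness).
    evals, evecs = np.linalg.eigh(hess)
    g_rot = evecs.T @ grad

    def step_norm(lam):
        return np.linalg.norm(g_rot / (evals + lam))

    lam_lo = max(0.0, -evals[0]) + 1e-12
    if step_norm(lam_lo) <= r:
        lam = lam_lo                      # interior solution (easy case)
    else:
        lam_hi = lam_lo + 1.0
        while step_norm(lam_hi) > r:      # bracket the boundary solution
            lam_hi *= 2.0
        while lam_hi - lam_lo > tol:      # bisection on the dual variable
            mid = 0.5 * (lam_lo + lam_hi)
            if step_norm(mid) > r:
                lam_lo = mid
            else:
                lam_hi = mid
        lam = lam_hi
    h = -evecs @ (g_rot / (evals + lam))
    return h, lam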
3 METHODOLOGY
Here we first introduce a general inexact trust region method, summarized in MetaAlgorithm 1. It accepts an inexact gradient estimate gk and Hessian estimate Hk as input to the QCQP subproblem
hk := argmin_{h∈R^d, ‖h‖≤r} ⟨gk, h⟩ + ½⟨Hkh, h⟩. (8)
Similar to (5), Lemma 2.1 characterizes the global optimum of problem (8), which can be efficiently solved by the Lanczos method (Gould et al., 1999). Denote the dual variable of the minimizer hk by λk.
We prove that this inexact trust region method achieves the optimal O(1/k^{2/3}) convergence rate when the estimates gk and Hk at each iteration are sufficiently close to their exact counterparts ∇F(xk) and ∇²F(xk), respectively:
‖gk − ∇F(xk)‖ ≤ ε/6, ‖Hk − ∇²F(xk)‖ ≤ √(εL2)/3. (9)
Algorithm 2 STR1 Input: initial point x0, step size r, number of iterations K
1: for k = 1 to K do
2:   Construct gradient estimator gk by Estimator 4;
3:   Construct Hessian estimator Hk by Estimator 3;
4:   Compute hk and λk by solving (8);
5:   xk+1 := xk + hk;
6:   if λk ≤ 3√(ε/L2) then
7:     Output x = xk+1;
8:   end if
9: end for
This result allows us to derive stochastic trust-region variants with novel differential estimators that are tailored to ensure the optimal convergence rate. We state our formal results in Theorem 3.1, whose proof is deferred to Appendix B.1 due to the space limit.
Theorem 3.1 (Main Result). Consider problem (1) under Assumptions 2.1-2.3. If the differential estimators gk and Hk satisfy Eqn. (9) for all k, MetaAlgorithm 1 finds an O(ε)-SOSP in at most K = O(∆√L2/ε^1.5) iterations by setting the trust-region radius to r = √(ε/L2).
Remark 3.1. We emphasize that MetaAlgorithm 1 degenerates to the exact trust region method by taking gk = ∇F(xk) and Hk = ∇²F(xk). This result is of independent interest because it is the first proof that the vanilla trust region method has the optimal O(1/k^{2/3}) convergence rate. A similar rate is achieved by Curtis et al. (2017), but with a much more complicated trust region variant.
Theorem 3.1 shows the explicit step size control of the trust region method: since the dual variable satisfies λk > 3√(ε/L2) > 0 for all but the last iteration, we always find the solution to the trust-region subproblem (8) on the boundary, i.e., ‖hk‖ = r, according to the complementary condition (15) in Appendix B.1. Such exact step size control is missing in the cubic-regularization method, where the step size is implicitly decided by the cubic regularization parameter.
More importantly, we emphasize that such explicit step size control is crucial to the sample efficiency of our variance-reduced differential estimators. The essence of variance reduction is to exploit the correlations between the differentials in consecutive iterations. Intuitively, when two neighboring iterates are close, so are their differentials due to Lipschitz continuity, and hence a smaller number of samples suffices to maintain the accuracy of the estimators. On the other hand, a smaller step size reduces the per-iteration objective decrease, which harms the convergence rate of the algorithm (see the proof of Theorem 3.1). Therefore, the explicit step size control in the trust region method allows us to trade off the per-iteration sample complexity against the convergence rate, from which we derive stochastic trust region approaches with state-of-the-art sample efficiency. In contrast, existing trust region methods change the step size at every iteration according to the progress made, which requires loss evaluations that can be as expensive as gradient computations (e.g., the non-convex linear model in Section 7) and is thus prohibitive for large-scale problems.
4 STOCHASTIC TRUST REGION METHOD: TYPE I
Having the inexact trust region method as prototype, we now present our first sample-efficient stochastic trust region method, namely STR1, in Algorithm 2, which emphasizes cheaper stochastic second-order oracle complexity. As Theorem 3.1 already guarantees the optimal convergence rate of MetaAlgorithm 1 when the gradient estimator gk and the Hessian estimator Hk meet requirement (9), here we focus on constructing such novel differential estimators. Specifically, we first present our Hessian estimator in Estimator 3 and our first gradient estimator in Estimator 4, both of which exploit the trust region radius r = √(ε/L2) to reduce their variances.
4.1 HESSIAN ESTIMATOR
Our epoch-wise Hessian estimator Hk is given in Estimator 3, where p2 controls the epoch length and s2 (and optionally s′2) controls the minibatch size. At the beginning of each epoch, Estimator 3 has two options, designed for different target accuracies: Option I is preferable in the high-accuracy
Estimator 3 Hessian Estimator Input: Epoch length p2, sample size s2, s′2 (optional)
1: if mod(k, p2) = 0 then
2:   Option I: high accuracy case (small ε)
3:     Hk := ∇²F(xk);
4:   Option II: low accuracy case (moderate ε)
5:     Draw s′2 samples indexed by H′ and let Hk := ∇²f(xk;H′);
6: else
7:   Draw s2 samples indexed by H and let Hk := ∇²f(xk;H) − ∇²f(xk−1;H) + Hk−1;
8: end if
Estimator 4 Gradient Estimator: Case (1)
1: if mod(k, p1) = 0 then
2:   gk := ∇F(xk);
3: else
4:   Draw s1 samples indexed by G and let gk := ∇f(xk;G) − ∇f(xk−1;G) + gk−1;
5: end if
case (ε < O(1/n)), where we compute the full Hessian to avoid approximation error, while Option II is designed for the moderate-accuracy case (ε > O(1/n)), where an approximate Hessian estimator suffices. Then p2 iterations follow, with Hk defined in a recurrent manner. Such recurrent estimators exist for the first-order case (Nguyen et al., 2017; Fang et al., 2018), but their bounds only hold under the vector ℓ2 norm. Here we generalize them to Hessian estimation with a matrix spectral norm bound.
The following lemma analyzes the amortized stochastic second-order oracle (Hessian) complexity for Estimator 3 to meet the requirement in Theorem 3.1. As we need an error bound under the spectral norm, we appeal to the matrix Azuma inequality (Tropp, 2012). The proof is deferred to Appendix B.2.
Lemma 4.1. Assume Algorithm 2 takes the trust region radius r = √(ε/L2) as in Theorem 3.1. For any k ≥ 0, Estimator 3 produces an estimator Hk of the second-order differential ∇²F(xk) such that ‖Hk − ∇²F(xk)‖ ≤ √(εL2)/3 with probability at least 1 − δ/K0 if we set (1) p2 = √n and s2 = 32√n log(dK0/δ) in Option I, or (2) p2 = L1/(2√(εL2)), s′2 = (16L1²/(εL2)) log(dK0/δ), and s2 = (32L1/√(εL2)) log(dK0/δ) in Option II. Here K0 is a constant to be determined later. Consequently, the amortized per-iteration stochastic second-order oracle complexity to construct Hk is no more than 2s2 = min{64√n log(dK0/δ), (64L1/√(εL2)) log(dK0/δ)}.
4.2 GRADIENT ESTIMATOR: CASE (1)
When the stochastic second-order oracle complexity is prioritized, we directly employ the SPIDER gradient estimator to construct gk (Fang et al., 2018). Similar to the construction for Hk, the estimator gk is also constructed in an epoch-wise manner as presented in Estimator 4, where p1 controls the epoch length and s1 controls the minibatch size.
We now analyze the stochastic first-order oracle complexity to meet the requirement in Theorem 3.1.

Lemma 4.2. Assume Algorithm 2 takes the trust region radius r = √(ε/L2). Estimator 4 produces an estimator gk of the first-order differential ∇F(xk) such that ‖gk − ∇F(xk)‖ ≤ ε/6 with probability at least 1 − δ/K0 for any k ≥ 0, if we set p1 = max{1, √(nεL2/(cL1² log(K0/δ)))} and s1 = min{n, √(cnL1² log(K0/δ)/(εL2))}, where the constant c = 1152 and K0 is a constant to be determined later. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct gk is min{n, √(4cnL1² log(K0/δ)/(εL2))}.
The proof of Lemma 4.2 is similar to the one of Lemma 4.1 and is deferred to Appendix B.3. These two lemmas only guarantee that the differential estimators satisfy the requirement (9) in a single iteration and can be extended to hold for all k by using the union bound with K0 = 2K, where K denotes the number of iterations. Combining such lifted result with Theorem 3.1, we can establish the computational complexity bound as follows.
Algorithm 5 STR2
Input: initial point x0, step size r, number of iterations K
1: for k = 1 to K do
2:   Construct gradient estimator gk by Estimator 6;
3:   Construct Hessian estimator Hk by Estimator 3;
4:   Compute hk and λk by solving (8);
5:   xk+1 := xk + hk;
6:   if λk ≤ 3√(ε/L2) then
7:     Output x = xk+1;
8:   end if
9: end for
Estimator 6 Gradient Estimator: Case (2)
1: if mod(k, p1) = 0 then
2:   Let x̃ := xk, gk := ∇F(x̃);
3: else
4:   Draw s1 samples indexed by G;
5:   gk := ∇f(xk; G) − ∇f(xk−1; G) + gk−1 + [∇²F(x̃) − ∇²f(x̃; G)](xk − xk−1);
6: end if
Corollary 4.1 (Computational Complexity of STR1). Assume Algorithm 2 uses Estimator 4 to construct the first-order differential estimator gk and Estimator 3 to construct the second-order differential estimator Hk. To find a 12ε-SOSP with probability at least 1 − δ, the overall stochastic first-order oracle complexity is O(min{n√(L2Δ)/ε^1.5, (√n L1/ε²) log(√(L2Δ)/(δε^1.5))}) and the overall stochastic second-order oracle complexity is O(min{√n √(L2Δ)/ε^1.5, L1Δ/ε²} log(d√(L2Δ)/(δε^1.5))).
From Corollary 4.1 we see that Õ(min{√n/ε^1.5, 1/ε²}) stochastic second-order oracle queries suffice for STR1 to find an ε-SOSP, which is significantly better than both the subsampled cubic regularization method with Õ(1/ε^2.5) (Kohler & Lucchi, 2017a) and the variance reduction based ones with Õ(n^{2/3}/ε^1.5) (Zhou et al., 2018b; Zhang et al., 2018). Recently, Zhou & Gu (2019) developed a stochastic recursive variance-reduced cubic regularization (SRVRC) method which finds an (ε, √ε)-approximate local minimum with Õ(min{n/ε^1.5, 1/ε³}) SFO and Õ(min{√n/ε^1.5, 1/ε²}) SSO complexity. But the result for SRVRC needs to assume the stochastic gradient to be bounded, i.e., ‖∇fi(x) − ∇F(x)‖ ≤ σ. With this extra assumption, STR1 enjoys Õ(min{n/ε^1.5, √n/ε², 1/ε³}) SFO and Õ(min{√n/ε^1.5, 1/ε²}) SSO complexity. Thus, if 1/ε ≤ n ≤ 1/ε², STR1 outperforms SRVRC; otherwise they have the same complexities.
5 STOCHASTIC TRUST REGION METHOD: TYPE II
In the above section, we focused on the setting where the stochastic second-order oracle complexity is prioritized over the stochastic first-order oracle complexity; in this setting, STR1 achieves the state-of-the-art efficiency. In this section, we consider a different complexity measure in which first-order and second-order oracle complexities are treated equally, and our goal is to minimize the maximum of the two. We note that currently the best result is the Õ(n^{4/5}/ε^1.5) of the SVRC method (Zhou et al., 2018c). Since the Hessian estimator Hk of STR1 already delivers the superior Õ(√n/ε^1.5) stochastic Hessian complexity, in STR2 (see Algorithm 5) we retain Estimator 3 for second-order differential estimation and use Estimator 6 to further reduce the stochastic gradient complexity.
5.1 GRADIENT ESTIMATOR: CASE (2)
When stochastic gradient and Hessian complexities are equally important, we use the Hessian to improve the gradient estimation. Denote x(a) = axt + (1 − a)x̃. From Assumption 2.3, we have

‖∇fi(xt) − ∇fi(x̃) − ∇²fi(x̃)(xt − x̃)‖ = ‖∫₀¹ [∇²fi(x(a)) − ∇²fi(x̃)](xt − x̃) da‖ ≤ (L2/2)‖xt − x̃‖².
This property can be used to improve on Estimator 4 and Lemma 4.2. Specifically, define the correction

ck = [∇²F(x̃) − ∇²f(x̃; G)](xk − xk−1),

where x̃ is a reference point updated in an epoch-wise manner. Estimator 6 adds ck to the estimator in Estimator 4. Note that in Estimator 6 the first- and second-order oracle complexities are the same. We now analyze the first-order (and second-order) oracle complexity needed to meet requirement (9).
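A sketch of the resulting update, under the same assumed oracles as before; the epoch-start reference Hessian ∇²F(x̃) is computed once and reused across the epoch, which is what makes the correction cheap.

```python
import numpy as np

def corrected_gradient(k, p1, s1, x_cur, x_prev, x_ref, g_prev,
                       grad_f, hess_f, H_ref_full, n):
    """Estimator 6 sketch: SPIDER update plus the Hessian correction
    c_k = [grad^2 F(x_ref) - grad^2 f(x_ref; G)](x_cur - x_prev).
    H_ref_full = grad^2 F(x_ref) is computed once per epoch; all
    helper names are our assumptions."""
    if k % p1 == 0:
        return grad_f(x_cur, np.arange(n))      # epoch start: g_k = full grad
    idx = np.random.randint(n, size=s1)         # minibatch G
    d = x_cur - x_prev
    c_k = (H_ref_full - hess_f(x_ref, idx)) @ d  # variance-canceling term
    return grad_f(x_cur, idx) - grad_f(x_prev, idx) + g_prev + c_k
```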
Lemma 5.1. Assume Algorithm 5 takes the trust region radius r = √(ε/L2) as in Theorem 3.1. For any k ≥ 0, Estimator 6 produces an estimator gk of the first-order differential ∇F(xk) such that ‖gk − ∇F(xk)‖ ≤ ε/6 with probability at least 1 − δ/K0, if we set p1 = n^0.25 and s1 = n^0.75 c log(K0/δ), where c = 1152 and K0 is a constant to be determined. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct gk is 2s1 = 2n^0.75 c log(K0/δ).
The proof of Lemma 5.1 is similar to that of Lemma 4.1 and is deferred to Appendix B.4. As in the previous section, Lemma 5.1 only guarantees that the gradient estimator satisfies requirement (9) in a single iteration. The result can be extended to hold for all k by the union bound with K0 = 2K, which together with Theorem 3.1 gives the following corollary.

Corollary 5.1 (Computational Complexity of STR2). Algorithm 5 finds a 12ε-SOSP with probability at least 1 − δ within O((n^0.75 √(L2Δ)/ε^1.5) log(√(L2Δ)/(δε^1.5))) overall stochastic first-order oracle queries and O((n^0.75 √(L2Δ)/ε^1.5) log(d√(L2Δ)/(δε^1.5))) overall stochastic second-order oracle queries.
Corollary 5.1 shows that to find an ε-SOSP, both the SFO and SSO complexities of STR2 are Õ(n^{3/4}/ε^1.5), which surpasses the best existing bound Õ(n^{4/5}/ε^1.5) in (Zhou et al., 2018c).
6 PRACTICAL STOCHASTIC TRUST REGION VARIANTS
6.1 HANDLING INEXACT QCQP SOLUTIONS
One drawback of MetaAlgorithm 1 is that it requires an exact solution to the QCQP subproblem (8) and uses the dual variable as a stopping criterion. We address this problem by developing a practical variant, MetaAlgorithm 7, which admits inexact QCQP solutions without access to the dual variable. This algorithm repeatedly invokes a procedure called INEXACTTRWEAK which, as we shall see, outputs an O(ε)-SOSP with a constant probability of 2/3 in O(1/ε^1.5) iterations. By repeatedly invoking INEXACTTRWEAK for Θ(log(1/δ)) times, MetaAlgorithm 7 boosts the probability to 1 − δ for any desired δ. This repeating technique has been studied by, e.g., (Allen-Zhu & Li, 2018; Allen-Zhu, 2018b). To test whether the t-th run outputs an O(ε)-SOSP, we need to compute ‖∇F(xt)‖ and the smallest eigenvalue of ∇²F(xt). The latter can be approximated by solving the QCQP
vt := argmin_{‖v‖≤1} ψt(v) = ⟨Htv, v⟩,   (10)

where Ht is the full Hessian ∇²F(xt) or its estimate. One can show that MetaAlgorithm 7 finds an O(ε)-SOSP w.p. at least 1 − δ in Õ(1/ε^1.5) iterations. We defer the detailed analysis to Appendix C.
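As a simple illustration of this eigenvalue test (not the solver the paper analyzes), ψt can be approximated by power iteration on a shifted matrix cI − Ht for any upper bound c ≥ ‖Ht‖ (e.g., c = L1 under Assumption 2.2); the accuracy and probability constants from the paper's analysis are not reproduced here.

```python
import numpy as np

def min_eig_estimate(hvp, d, c, iters=200, seed=0):
    """Approximate the smallest eigenvalue of H_t by power iteration on
    c*I - H_t: its top eigenvector is the bottom eigenvector of H_t.
    hvp(v) returns H_t v; c >= ||H_t|| is an assumed bound."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = c * v - hvp(v)            # (cI - H_t) v
        v = w / np.linalg.norm(w)
    return float(v @ hvp(v))          # Rayleigh quotient ~ lambda_min
```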
6.2 HESSIAN-FREE IMPLEMENTATION
Based on MetaAlgorithm 7, we propose a Hessian-free method named STRfree, which is summarized in Algorithm 8. STRfree leverages the full/sub-sampled Hessian and Estimator 4 to construct Hk and gk, respectively. Besides, it uses the Lanczos method (Gould et al., 1999; Carmon & Duchi, 2018) as the QCQP solver, which can be implemented in a Hessian-free manner (i.e., using only Hessian-vector products without explicit Hessian matrix evaluations). Thus, Hk is only accessed through Hessian-vector products and is never explicitly constructed. Since Hessian-vector products can be computed in linear time (in terms of the dimension d) for many machine learning problems (Allen-Zhu, 2018b; Agarwal et al., 2017), Hessian-free methods are usually more practical than Hessian-based ones for high-dimensional problems. The following theorem, whose proof can be found in Appendix D, establishes the runtime complexity (i.e., the total complexity of stochastic gradient and Hessian-vector product evaluations (Zhou & Gu, 2019)) of STRfree.
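For instance, when only a gradient oracle is available, a Hessian-vector product can be formed numerically (autodiff double-backprop gives the exact analogue); this finite-difference sketch is our illustration, not the paper's implementation.

```python
import numpy as np

def hvp_finite_diff(grad, x, v, eps=1e-5):
    """Hessian-vector product grad^2 F(x) @ v approximated by a central
    difference of gradients -- one way to run Lanczos matrix-free,
    assuming only a gradient oracle `grad`."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)
```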
MetaAlgorithm 7 Inexact Trust Region Method II
Input: initial point x0, step size r, number of inner iterations K, constants δ, ζ ∈ (0, 1), c1, c2, number of outer iterations T = Θ(log(1/δ)), sample size s (optional)
1: for t = 1 to T do
2:   xt ← INEXACTTRWEAK(x0, r, K, ζ);
3:   Option I: high accuracy case (small ε)
4:     Ht := ∇²F(xt);
5:   Option II: low accuracy case (moderate ε)
6:     Draw s samples indexed by H and let Ht := ∇²f(xt; H);
7:   Compute ṽt by solving (10) up to accuracy √(εL2) with probability 1 − δ/4;
8:   if ‖∇F(xt)‖ ≤ c1ε and ψt(ṽt) ≥ −√(c2εL2) then
9:     return x := xt;
10:  end if
11: end for
12: procedure INEXACTTRWEAK(x0, r, K, ζ)
13:   for k = 0 to K − 1 do
14:     Compute gk and Hk such that (9) holds with probability 1 − ζ/(4K);
15:     Compute h̃k by solving (8) up to accuracy ε^1.5/√L2 with probability 1 − ζ/(4K);
16:     xk+1 := xk + h̃k;
17:   end for
18:   Randomly select k̄ from {0, . . . , K − 1};
19:   return xk̄+1;
20: end procedure
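The outer restart logic of MetaAlgorithm 7 amounts to a few lines; all callables below (the weak solver, the gradient oracle, and the smallest-eigenvalue estimate obtained from (10)) are assumed to be supplied.

```python
import numpy as np

def meta_algorithm_7(x0, inexact_tr_weak, grad_F, min_eig_est,
                     T, c1, c2, eps, L2):
    """Sketch of MetaAlgorithm 7's restart loop: rerun the weak solver
    T = Theta(log(1/delta)) times and accept the first iterate that
    passes the approximate SOSP test."""
    x = x0
    for _ in range(T):
        x = inexact_tr_weak(x0)                        # one weak run
        grad_ok = np.linalg.norm(grad_F(x)) <= c1 * eps
        curv_ok = min_eig_est(x) >= -np.sqrt(c2 * eps * L2)
        if grad_ok and curv_ok:
            return x                                   # O(eps)-SOSP found
    return x  # all T runs failed the test (probability <= delta)
```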
Algorithm 8 STRfree
1: In the same setting as MetaAlgorithm 7,
2: construct gradient estimator gk by Estimator 4;
3: construct Hessian estimator Hk by
4:   Option I: Hk := ∇²F(xk);
5:   Option II: Draw s samples indexed by H and let Hk := ∇²f(xk; H);
6: use the Lanczos method to solve QCQP subproblems.
Theorem 6.1. Consider Algorithm 8 for solving problem (1). Let ζ = 1/3, r = √(ε/L2), K = 4√L2 Δ/ε^1.5, T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. The number of iterations of the Lanczos method is set to Õ(1/(L2ε)^0.25). To find an O(ε)-SOSP w.p. at least 1 − δ, the runtime complexity is Õ(d · min{n/ε^1.75, 1/ε^2.75 + √n/ε²} log(1/δ)).
To solve the QCQP more efficiently, we develop a faster solver based on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaXW (Allen-Zhu, 2018a); see the details in Appendix E. By replacing the Lanczos method with this solver in STRfree, we further improve the runtime complexity to Õ(d · min{n/ε^1.5 + n^0.75/ε^1.75, 1/ε^2.5 + √n/ε²}). We call this new algorithm STRfree+; its details can be found in Appendix E. Table 2 shows that, in terms of runtime complexity, both STRfree and STRfree+ outperform existing methods. See more comparison and discussion in Appendix E.4.
7 EXPERIMENTS
Here we compare the proposed STR with several state-of-the-art (stochastic) cubic regularized algorithms and trust region approaches, including the trust region (TR) algorithm (Conn et al., 2000), adaptive cubic regularization (ARC) (Cartis et al., 2011), sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a), stochastic variance-reduced cubic (SVRC) (Zhou et al., 2018c), Lite-SVRC (Zhou et al., 2018b), and SRVRC (Zhou & Gu, 2019). For STR, we estimate the gradient as in case (1), because this variant enjoys lower Hessian computational complexity than case (2), and for most problems computing Hessian matrices is much more time-consuming than computing gradients. For the subproblems of all compared methods, we use the Lanczos method (Gould et al., 1999; Kohler & Lucchi, 2017a) to solve the subproblem approximately in a Hessian-related Krylov subspace. We run simulations on seven datasets from LibSVM (a9a, ijcnn, codrna, phishing, w8a, epsilon, and mnist). We run our algorithm for 40 epochs and use the output as the optimal value f* for sub-optimality estimation; note that the output already has a very small gradient norm, as verified by Figures 2 and 4 in the appendix. For all the considered algorithms, we set the initializations to zero and tune their hyper-parameters optimally. For more experimental settings, e.g., details of the testing datasets and algorithm parameter settings, please refer to Appendix F.
Two evaluation non-convex problems. Following (Kohler & Lucchi, 2017a; Zhou et al., 2018c), we evaluate all considered algorithms on two learning tasks: logistic regression with a non-convex regularizer and non-linear least squares. Given n data points (xi, yi), where xi ∈ R^d is the sample vector and yi ∈ {−1, 1} is the label, logistic regression with a non-convex regularizer aims at distinguishing the two kinds of samples by solving

min_w (1/n) Σ_{i=1}^{n} log(1 + exp(−yi wᵀxi)) + λR(w; α),

where the non-convex regularizer R(w; α) is defined as R(w; α) = Σ_{i=1}^{d} αwi²/(1 + αwi²). The non-linear least squares problem fits the nonlinear data by minimizing

min_w (1/(2n)) Σ_{i=1}^{n} [yi − φ(wᵀxi)]² + λR(w; α).

For both problems we set the parameters λ = 10⁻³ and α = 10 for all testing datasets.
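For reference, the two objectives take only a few lines of NumPy; the choice φ = sigmoid below is our assumption, since the paper leaves φ unspecified.

```python
import numpy as np

def nonconvex_reg(w, alpha=10.0):
    """R(w; alpha) = sum_i alpha*w_i^2 / (1 + alpha*w_i^2)."""
    aw2 = alpha * w**2
    return np.sum(aw2 / (1.0 + aw2))

def logistic_loss(w, X, y, lam=1e-3, alpha=10.0):
    """Logistic regression with the non-convex regularizer."""
    z = -y * (X @ w)
    return np.mean(np.logaddexp(0.0, z)) + lam * nonconvex_reg(w, alpha)

def nls_loss(w, X, y, lam=1e-3, alpha=10.0,
             phi=lambda t: 1.0 / (1.0 + np.exp(-t))):
    """Non-linear least squares; phi = sigmoid is our assumption."""
    return 0.5 * np.mean((y - phi(X @ w))**2) + lam * nonconvex_reg(w, alpha)
```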
Comparison of Hessian based algorithms. Figure 1 summarizes testing results on the non-convex logistic regression problem. For each dataset, we report the function value gap v.s. the overall algorithm running time which can reflect the overall computational complexity of an algorithm, and also show the function value gap v.s. Hessian sample complexity which reveals the complexity of Hessian computation. From Figure 1, one can observe that our proposed STR algorithm runs faster than the compared algorithms in terms of the algorithm running time, showing the overall superiority of STR. Furthermore, STR also reveals much sharper convergence curves in terms of the Hessian
sample complexity, which is consistent with our theory. This is because, to achieve an ε-accurate local minimum, the Hessian sample complexity of the proposed STR is Õ(n^0.5/ε^1.5), superior to the complexity of the compared methods (see the comparison in Sec. 4.2). Indeed, this also explains why our algorithm is faster in terms of running time: for most optimization problems the Hessian matrix is much more computationally expensive than the gradient, so a more efficient Hessian sample complexity translates into faster overall convergence. Note that, as all compared methods need to compute the Hessian and gradient, their memory complexities are all O(d² + d). Figure 2 displays the results of the compared algorithms on the nonlinear least squares problem. STR shows very similar behavior to that in Figure 1; specifically, it achieves the fastest convergence in terms of both running time and Hessian sample complexity. On the codrna dataset we further plot the gradient norm versus running time and Hessian sample complexity. One can observe that the gradient norm in STR vanishes significantly faster than in the other algorithms, which means that STR can find a stationary point with high efficiency. See Figure 4 in Appendix F.2 for more gradient norm comparisons. All these results confirm the superiority of the proposed STR.
Comparison of Hessian-free algorithms. Here we compare our proposed Hessian-free STR, namely STRfree, with other state-of-the-art Hessian-free algorithms on the two high-dimensional datasets, epsilon and mnist (see details in Appendix F). We do not compare STRfree+ here, as it builds on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaXW (Allen-Zhu, 2018a), which require tuning many hyper-parameters. From the results in Figure 3, one can observe that, compared with the other algorithms, our STRfree achieves the best convergence speed, which demonstrates its high efficiency in realistic applications. Besides, one can also see that STRfree is much faster than the Hessian-based STR, since computing the full Hessian is much more expensive than computing Hessian-vector products.
8 CONCLUSION
We proposed two stochastic trust region variants. Under two settings (depending on whether stochastic first- and second-order oracle complexities are treated equally), the proposed methods achieve state-of-the-art oracle complexities. We also proposed Hessian-free variants with the lowest runtime complexity. Experimental results verify our theoretical implications and the efficiency of the proposed algorithms.
A APPENDIX
In this appendix, Sec. B first provides the proofs for the results in the manuscript. Then, we analyze MetaAlgorithm 7 and STRfree in Sec. C and Sec. D, respectively. Next, in Sec. E, we develop a fast QCQP solver to further improve the computational complexity of STRfree. Finally, more experimental details and results are presented in Sec. F.
B DEFERRED PROOFS
B.1 PROOF OF THEOREM 3.1
Proof. For simplicity of notation, we denote

∇k := ∇F(xk) − gk and ∇²k := ∇²F(xk) − Hk.

From Assumption 2.3 we have

F(xk+1) ≤ F(xk) + ⟨∇F(xk), hk⟩ + ½⟨∇²F(xk)hk, hk⟩ + (L2/6)‖hk‖³
        = F(xk) + ⟨∇k + gk, hk⟩ + ½⟨[∇²k + Hk]hk, hk⟩ + (L2/6)‖hk‖³.

Use the Cauchy–Schwarz inequality to obtain

F(xk+1) ≤ F(xk) + ⟨gk, hk⟩ + ½⟨Hkhk, hk⟩ + (L2/6)‖hk‖³ + ‖∇k‖‖hk‖ + ½‖∇²k‖‖hk‖².   (11)

The requirement (9) together with the trust region radius ‖h‖ ≤ r = √(ε/L2) allows us to bound

‖∇k‖‖hk‖ + ½‖∇²k‖‖hk‖² ≤ (1/3) · ε^1.5/√L2.   (12)

The optimality of (8) indicates that there exists a dual variable λk ≥ 0 such that (Corollary 7.2.2 in (Conn et al., 2000))

First order:      gk + Hkhk + (λkL2/2) hk = 0,   (13)
Second order:     Hk + (λkL2/2) I ⪰ 0,   (14)
Complementarity:  λk · (‖hk‖ − r) = 0.   (15)

Multiplying (13) by hk, we have

⟨gk + Hkhk + (λkL2/2) hk, hk⟩ = 0.   (16)

Additionally, using (14) we have

⟨(Hk + (λkL2/2) I)hk, hk⟩ ≥ 0,

which together with (16) gives

⟨gk, hk⟩ ≤ 0.   (17)

Moreover, the complementarity condition (15) indicates ‖hk‖ = √(ε/L2), as we have λk > 3√(ε/L2) > 0 before MetaAlgorithm 1 terminates. Plug (12), (16), and (17) into (11) and use ‖hk‖ = √(ε/L2):

F(xk+1) ≤ F(xk) − (L2λk/4) · (ε/L2) + ½ · ε^1.5/√L2.   (18)

Therefore, if we have λk > 3ε^0.5/√L2, then

F(xk+1) ≤ F(xk) − (1/(4√L2)) · ε^1.5.   (19)

Using Assumption 2.1, we find λk ≤ 3ε^0.5/√L2 in no more than 4√L2 · (F(x0) − F(x*))/ε^1.5 iterations. We now show that once λk ≤ 3ε^0.5/√L2, xk+1 is already an O(ε)-SOSP: from (13), we have

‖gk + Hkhk‖ = (L2λk/2) · ‖hk‖ ≤ 2ε.   (20)

The assumptions ‖∇k‖ ≤ ε/6 and ‖∇²k‖ ≤ √(εL2)/3, together with the trust region radius ‖h‖ ≤ √(ε/L2), imply

‖∇F(xk) + ∇²F(xk)hk‖ ≤ ‖gk + Hkhk‖ + ‖∇k‖ + ‖∇²k · hk‖ ≤ 2.5ε.   (21)

On the other hand, use Assumption 2.3 to bound

‖∇F(xk+1) − ∇F(xk) − ∇²F(xk)hk‖ ≤ (L2/2)‖hk‖² ≤ ε/2.

Combining these two results gives ‖∇F(xk+1)‖ ≤ 3ε. Besides, using Assumption 2.3, ‖∇²k‖ ≤ √(εL2)/3, and (14), we derive the Hessian lower bound

∇²F(xk+1) ⪰ ∇²F(xk) − L2‖hk‖ I ⪰ Hk − (√(εL2)/3) I − L2‖hk‖ I ⪰ −√(12εL2) I.

Hence xk+1 is a 12ε-second-order stationary point. Additionally, we have ‖hk‖ = r according to the complementarity condition (15) for all but the last iteration.
B.2 PROOF OF LEMMA 4.1
Proof. Without loss of generality, we analyze the case 0 ≤ k < p2 for ease of notation. We first focus on Option II; the proof for Option I follows from a similar argument.

Option II: Define for k = 0 and i ∈ [s2′]

B0i := ∇²fi(x0) − ∇²F(x0),

and define for k ≥ 1 and i ∈ [s2]

Bki := ∇²fi(xk) − ∇²fi(xk−1) − (∇²F(xk) − ∇²F(xk−1)).

{Bki} is a martingale difference sequence: we have, for all k and i,

E[Bki | xk] = 0.

Besides, we use Assumption 2.2 for k = 0 to bound

‖B0i‖ ≤ ‖∇²fi(x0)‖ + ‖∇²F(x0)‖ ≤ 2L1,   (22)

and use Assumption 2.3 for k ≥ 1 to bound

‖Bki‖ ≤ ‖∇²fi(xk) − ∇²fi(xk−1)‖ + ‖∇²F(xk) − ∇²F(xk−1)‖ ≤ 2L2‖xk − xk−1‖ ≤ 2√(εL2).

From the construction of Hk, we have

Hk − ∇²F(xk) = Σ_{i=1}^{s2′} B0i/s2′ + Σ_{j=1}^{k} Σ_{i=1}^{s2} Bji/s2.

Thus, using the matrix Azuma inequality (Theorem 7.1 of (Tropp, 2012)) and k ≤ p2, we have

Pr{‖Hk − ∇²F(xk)‖ ≥ t} ≤ d · exp{− (t²/8) / (Σ_{i=1}^{s2′} 4L1²/s2′² + Σ_{j=1}^{k} Σ_{i=1}^{s2} 4εL2/s2²)}
                        ≤ d · exp{− (t²/8) / (4L1²/s2′ + 4p2εL2/s2)}.

Consequently, we have

Pr{‖Hk − ∇²F(xk)‖ ≤ √(εL2)} ≥ 1 − δ/K0

by taking t = √(εL2), s2′ = 16L1²/(εL2) log(dK0/δ), s2 = 32L1/√(εL2) log(dK0/δ), and p2 = L1/(2√(εL2)).

Option I: The proof is similar to that of Option II except that we replace B0i with the zero matrix. In this case, the matrix Azuma inequality implies

Pr{‖Hk − ∇²F(xk)‖ ≥ t} ≤ d · exp{− (t²/8) / (Σ_{j=1}^{k} Σ_{i=1}^{s2} 4εL2/s2²)} ≤ d · exp{− (t²/8) / (4p2εL2/s2)}.

Thus, by taking t = √(εL2), s2 = 32√n log(dK0/δ), and p2 = √n, we have the result.

Amortized complexity: In Option I, the choice of parameters ensures that n ≤ p2 × s2, and in Option II that s2′ ≤ p2 × s2. Consequently, the amortized stochastic second-order oracle complexity is bounded from above by 2s2.
B.3 PROOF OF LEMMA 4.2
Without loss of generality, we analyze the case 0 ≤ k < p1 for ease of notation. Define for k ≥ 1 and i ∈ [s1]

aki := ∇fi(xk) − ∇fi(xk−1) − (∇F(xk) − ∇F(xk−1)).

{aki} is a martingale difference sequence: for all k and i,

E[aki | xk] = 0.

Besides, aki has bounded norm:

‖aki‖ ≤ ‖∇fi(xk) − ∇fi(xk−1)‖ + ‖∇F(xk) − ∇F(xk−1)‖ ≤ L1‖xk − xk−1‖ + L1‖xk − xk−1‖ ≤ 2L1√(ε/L2).   (23)

From the construction of gk, we have

gk − ∇F(xk) = Σ_{j=1}^{k} Σ_{i=1}^{s1} aji/s1.

Recall Azuma's inequality. Using k ≤ p1, we have

Pr{‖gk − ∇F(xk)‖ ≥ t} ≤ exp{− (t²/8) / (Σ_{j=1}^{k} Σ_{i=1}^{s1} 4εL1²/(L2s1²))} ≤ exp{− (t²/8) / (4εL1²p1/(s1L2))}.

Take t = ε/6 and denote c = 1152. To ensure that

Pr{‖gk − ∇F(xk)‖ ≥ ε/6} ≤ δ/K0,

we need (cL1²/(εL2)) log(K0/δ) ≤ s1/p1. The best amortized stochastic first-order oracle complexity is obtained by solving the following two-dimensional program:

min_{p1≥1, s1≥1} (n + s1(p1 − 1))/p1   s.t.   (cL1²/(εL2)) log(K0/δ) ≤ s1/p1,

which has the solution s1 = min{n, √(ncL1² log(K0/δ)/(εL2))} and p1 = max{1, √(nεL2/(cL1² log(K0/δ)))}. Note that when we take s1 = n, we directly compute gk = ∇F(xk) without sampling. The amortized stochastic first-order oracle complexity is obtained by plugging in this choice of s1 and p1, which completes the proof.
B.4 PROOF OF LEMMA 5.1
Without loss of generality, we analyze the case 0 ≤ k < p1 for ease of notation. Define for k ≥ 1 and i ∈ [s1]

bki := ∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1) − [∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)].

{bki} is a martingale difference sequence: for all k and i, E[bki | xk] = 0. Besides, bki has bounded norm:

‖bki‖ ≤ ‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖ + ‖∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)‖.

We can bound the first term as follows:

‖∇fi(xk) − ∇fi(xk−1) − ∇²fi(x̃)(xk − xk−1)‖
  = ‖∫₀¹ [∇²fi(xk−1 + t(xk − xk−1)) − ∇²fi(x̃)](xk − xk−1) dt‖
  ≤ ∫₀¹ L2‖txk + (1 − t)xk−1 − x̃‖ dt · ‖xk − xk−1‖
  ≤ ∫₀¹ (t‖xk − x̃‖ + (1 − t)‖xk−1 − x̃‖) dt · L2r
  ≤ L2kr²,

where the first inequality follows from Assumption 2.3 and the last inequality holds because ‖xk − x̃‖ ≤ kr and ‖xk−1 − x̃‖ ≤ kr, where r is the trust region radius. Similarly, we have ‖∇F(xk) − ∇F(xk−1) − ∇²F(x̃)(xk − xk−1)‖ ≤ L2kr². Thus, we bound

‖bki‖ ≤ 2L2kr² ≤ 2p1ε.

From the construction of gk, we have

gk − ∇F(xk) = Σ_{j=1}^{k} Σ_{i=1}^{s1} bji/s1.

We use k ≤ p1 and Azuma's inequality to bound

Pr{‖gk − ∇F(xk)‖ ≥ t} ≤ exp{− (t²/8) / (Σ_{j=1}^{k} Σ_{i=1}^{s1} 4p1²ε²/s1²)} ≤ exp{− (t²/8) / (4ε²p1³/s1)}.

Thus, by taking t = ε/6 and c = 1152, we need s1/p1³ ≥ c log(K0/δ). Further, we want s1p1 ≃ O(n) and hence take p1 = n^0.25 and s1 = n^0.75 c log(K0/δ). The amortized stochastic first-order oracle complexity is bounded by 2s1.
C ANALYSIS OF METAALGORITHM 7
We first show that INEXACTTRWEAK finds an O(ε)-SOSP in O(1/ε^1.5) iterations with probability at least 2/3, as stated in the following lemma.

Lemma C.1. Consider problem (1) under Assumptions 2.1–2.3. Suppose that the differential estimators gk and Hk satisfy Eqn. (9) with probability at least 1 − ζ/(4K). Besides, suppose that h̃k is an approximate solution to (8) such that, w.p. 1 − ζ/(4K),

⟨gk, h̃k⟩ + ½⟨Hkh̃k, h̃k⟩ ≤ ⟨gk, hk⟩ + ½⟨Hkhk, hk⟩ + ε^1.5/√L2,   (24)

where hk is a global solution to (8). By setting ζ = 1/3, r = √(ε/L2), and K = 4√L2 Δ/ε^1.5, INEXACTTRWEAK outputs a 500ε-SOSP w.p. at least 2/3.
Proof. Combining (11) and (24), we have w.p. 1 − ζ/(4K),

F(xk+1) ≤ F(xk) + ⟨gk, hk⟩ + ½⟨Hkhk, hk⟩ + ε^1.5/√L2 + (L2/6)‖h̃k‖³ + ‖∇k‖‖h̃k‖ + ½‖∇²k‖‖h̃k‖²,   (25)

where hk is a global solution to the QCQP (8) and h̃k is an approximate solution satisfying (24). We let λk denote the dual variable corresponding to the global solution hk, as defined in Lemma 2.1. We note that hk and λk are used only in our analysis; the INEXACTTRWEAK procedure only requires the approximate solution h̃k, without knowledge of hk or λk.

By the assumption that (9) holds with probability 1 − ζ/(4K) and the fact that ‖h̃k‖ ≤ r = √(ε/L2), we have w.p. 1 − ζ/(4K),

(L2/6)‖h̃k‖³ + ‖∇k‖‖h̃k‖ + ½‖∇²k‖‖h̃k‖² ≤ ε^1.5/(2√L2).   (26)

Plugging (16), (17), and (26) into (25) and applying the union bound, we have w.p. at least 1 − ζ/(2K),

F(xk+1) ≤ F(xk) − (L2λk‖hk‖²)/4 + 3ε^1.5/(2√L2) = F(xk) − (L2λkr²)/4 + 3ε^1.5/(2√L2),   (27)

where the second equality follows from (15):

0 = λk(‖hk‖ − r) implies 0 = λk(‖hk‖ − r)(‖hk‖ + r) = λk(‖hk‖² − r²).   (28)

Summing inequality (27) from k = 0 to K − 1 and applying the union bound, we have w.p. at least 1 − ζ/2,

(1/K) Σ_{k=0}^{K−1} λk ≤ 4(F(x0) − F(xK+1))/(L2r²K) + 6ε^1.5/(L2^1.5 r²) ≤ 4Δ/(εK) + 6√ε/√L2,   (29)

where the second inequality follows from Assumption 2.1 and our choice of the trust region radius.

By sampling k̄ uniformly from {0, . . . , K − 1}, we obtain

E[λk̄] = (1/K) Σ_{k=0}^{K−1} λk,   (30)

where the expectation is taken over the randomness of k̄. Combining (29) and (30) and taking K = 4Δ√L2/ε^1.5, we have w.p. at least 1 − ζ/2,

E[λk̄] ≤ 7√ε/√L2.   (31)

Since λk is always non-negative, by Markov's inequality and the union bound, with probability at least 1 − ζ we have

λk̄ ≤ 14√ε/(ζ√L2).   (32)

By taking ζ = 1/3, we have w.p. at least 2/3 that λk̄ ≤ 42√(ε/L2). The rest of the proof is similar to that of Theorem 3.1, and we have the result.
The following theorem shows that MetaAlgorithm 7 finds an O(ε)-SOSP w.p. 1 − δ after running INEXACTTRWEAK for Θ(log(1/δ)) times.

Theorem C.1 (Iteration Complexity of MetaAlgorithm 7). In the same setting as Lemma C.1, let T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). Then MetaAlgorithm 7 finds a 600ε-SOSP with probability at least 1 − δ.
Proof. By Lemma C.1 and our choice of T, with probability 1 − δ/2, at least one of the xt is a 500ε-SOSP. On the other hand, since ψt(ṽt) ≤ ψt(vt) + √(εL2) with probability 1 − δ/4, if ψt(ṽt) ≥ −√(c2εL2), then, w.p. 1 − δ/4,

ψt(vt) ≥ ψt(ṽt) − √(εL2) ≥ −√(c2εL2) − √(εL2) ≥ −√(550εL2),   (33)

where the last inequality follows from our choice of c2.

Option I: Since Ht = ∇²F(xt) is the full Hessian, ψt(vt) is the smallest eigenvalue of ∇²F(xt). Applying the union bound, we conclude that MetaAlgorithm 7 outputs a 600ε-SOSP w.p. 1 − δ.

Option II: Let Bi := ∇²fi(xt) − ∇²F(xt) for i ∈ H; then

Ht − ∇²F(xt) = (1/s) Σ_{i=1}^{s} Bi.   (34)

By Assumption 2.2, we have

‖Bi‖ ≤ ‖∇²fi(xt)‖ + ‖∇²F(xt)‖ ≤ 2L1.   (35)

Applying the matrix Azuma inequality (Theorem 7.1 of Tropp (2012)) leads to

Pr{‖Ht − ∇²F(xt)‖ ≥ √(εL2)} ≤ d · exp(−εL2s/(32L1²)).   (36)

By taking s = (32L1²/(εL2)) log(4d/δ) and applying the union bound, we have with probability 1 − δ,

∇²F(xt) ⪰ Ht − √(εL2) I ⪰ (ψt(vt) − √(εL2)) I ⪰ −√(600εL2) I,   (37)

where the last inequality follows from (33). This completes the proof.
D PROOF OF THEOREM 6.1
Proof. We first analyze the computational cost of the Lanczos method. By Corollary 2 in (Carmon & Duchi, 2018), for any desired accuracy ε̃, the Lanczos method achieves this accuracy in O((r/√ε̃) log(r√d/(ε̃p))) Lanczos iterations w.p. at least 1 − p. Without loss of generality, we assume that the number of Lanczos iterations is strictly smaller than the dimension d; otherwise the QCQP subproblem can be solved exactly. We note that each Lanczos iteration involves the computation of one matrix-vector product. Therefore, to satisfy condition (24) in Lemma C.1, one needs to evaluate Õ(1/(L2ε)^0.25) Hessian-vector products of the form Hkv. Similarly, to solve (10) up to accuracy √(εL2) w.h.p., one needs to evaluate Õ(1/(L2ε)^0.25) Hessian-vector products of the form Htv. In MetaAlgorithm 7, to verify whether the candidate solution xt is indeed an O(ε)-SOSP, one needs at most O(n) stochastic gradient evaluations and O(min{n, log(4d/δ)L1²/(L2ε)}/(L2ε)^0.25) stochastic Hessian-vector product evaluations, where the latter follows from the proof of Theorem C.1. We proceed to analyze the computational complexity of the INEXACTTRWEAK procedure. Recall that the iteration complexity of MetaAlgorithm 7 is O(log(1/δ)/ε^1.5). Following Lemma 4.2 and Corollary 4.1, the stochastic first-order oracle complexity is Õ(min{n/ε^1.5, √n/ε²} log(1/δ)). Following the proof of Lemma 4.1 and Corollary 4.1, when p2 = 1, the overall stochastic Hessian sample complexity is Õ(min{n/ε^1.5, 1/ε^2.5} log(1/δ)). Since it takes Õ(1/ε^0.25) Lanczos iterations to meet condition (24), as stated above, the overall stochastic Hessian-vector product oracle complexity is Õ(min{n/ε^1.75, 1/ε^2.75} log(1/δ)). Combining the stochastic first-order and Hessian-vector product complexities, the overall runtime is Õ(d · min{n/ε^1.75, 1/ε^2.75 + √n/ε²} log(1/δ)).
E A FASTER HESSIAN-VECTOR BASED QCQP SOLVER
We recall from the previous section that, to approximately solve a quadratic subproblem in STRfree, the Lanczos method requires Õ(min{n/ε^0.25, 1/ε^1.25}) stochastic Hessian-vector product evaluations. In this section, we propose a faster QCQP solver with an Õ(min{n + n^0.75/ε^0.25, 1/ε}) complexity. Replacing the Lanczos method with this QCQP solver in STRfree results in a faster Hessian-free method, which we refer to as STRfree+.
E.1 CONVEX REFORMULATION OF QCQP
To begin with, we present a known result that is key to achieving a faster algorithm than the Lanczos method. We summarize this result in Lemma E.1, which shows that the trust region subproblem is equivalent to a convex QCQP.

Lemma E.1 (Convex Reformulation of QCQP (Flippo & Jansen, 1996; Wang & Xia, 2017)). Denote by λmin the smallest eigenvalue of Hk and let umin be a corresponding eigenvector. W.l.o.g., we assume that ⟨gk, umin⟩ ≤ 0. Let μ = min{λmin, 0}. Then the QCQP (8) is equivalent to the convex problem

min_{h∈R^d, ‖h‖≤r} qk(h) = ⟨gk, h⟩ + ½⟨(Hk − μI)h, h⟩ + ½μr²   (38)

in the sense that (8) and (38) have the same minimum function value. Moreover, when λmin < 0, for any optimal solution hkc of (38),

hkc + [(√(⟨hkc, umin⟩² − ‖umin‖²(‖hkc‖² − r²)) − ⟨hkc, umin⟩)/‖umin‖²] umin   (39)

is a global minimizer of the original QCQP (8).
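The mapping (39) is cheap to apply once hkc and (an approximation of) umin are available; a direct transcription in NumPy:

```python
import numpy as np

def lift_to_tr_solution(h_c, u_min, radius):
    """Map a minimizer h_c of the convex reformulation (38) back to a
    global minimizer of the original subproblem via Eq. (39) when
    lambda_min < 0; assumes <g, u_min> <= 0 as in Lemma E.1."""
    uu = u_min @ u_min
    hu = h_c @ u_min
    disc = hu**2 - uu * (h_c @ h_c - radius**2)
    t = (np.sqrt(max(disc, 0.0)) - hu) / uu   # step along u_min to the boundary
    return h_c + t * u_min
```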
To perform the above reformulation, one needs to compute the exact eigenpair (λmin, umin). Nevertheless, as we shall see, it suffices to compute an approximate eigenpair (λ̃, ũ) such that

λmin ≤ λ̃ = ũᵀHkũ ≤ λmin + ε̃,   (40)

where ε̃ is a target accuracy to be determined later. We note that ε̃ ≤ 2L2 w.l.o.g. since ‖Hk‖ ≤ L2. With this approximate eigenpair, it remains to solve the following convex problem

min_{h∈R^d, ‖h‖≤r} q̃k(h) = ⟨gk, h⟩ + ½⟨(Hk − μ̃I)h, h⟩ + ½μ̃r²,   (41)

where μ̃ = min{0, λ̃ − ε̃}. One can check that problem (41) well approximates (38).

Corollary E.1. Let qk* and q̃k* be the minimum function values of (38) and (41), respectively. Assume λmin ≤ λ̃ = ũᵀHkũ ≤ λmin + ε̃. Then

|qk* − q̃k*| ≤ ε̃r².   (42)
We note that the above convex reformulation approach divides an indefinite QCQP into two subproblems: (i) computation of an approximate eigenpair (λ̃, ũ); (ii) solving the convex problem (41). As we shall see, by exploiting the finite-sum structure of the Hessian Hk, these two subproblems can be efficiently solved. We treat these two subproblems in the following two subsections, respectively.
E.2 FINDING THE SMALLEST EIGENVECTOR
To find a unit vector that satisfies requirement (40), we resort to the AppxPCA method (Allen-Zhu & Li, 2016), which first finds an approximate eigenvalue λ = λmin − ε̃ via binary search and then applies the Power method to the positive definite matrix (Hk − λI)⁻¹ for a logarithmic number of iterations. Computing (Hk − λI)⁻¹v for any vector v is equivalent to solving the ε̃-strongly convex problem (Allen-Zhu & Li, 2018)

min_u φk(u) := ½uᵀ(Hk − λI)u − ⟨v, u⟩.   (43)

We note that Hk = (1/|S|) Σ_{i∈S} ∇²fi(xk). Specifically, in STRfree, either |S| = n (i.e., Hk is the full Hessian) or |S| = Õ(L1²/(L2ε)) by Lemma 4.1. Therefore, φk(·) can be expressed as a sum of non-convex functions

φk(u) = (1/|S|) Σ_{i∈S} φki(u) = (1/|S|) Σ_{i∈S} (½uᵀ(∇²fi(xk) − λI)u − ⟨v, u⟩).   (44)

Observing that each φki is non-convex and has a (4L2)-Lipschitz gradient, we can use KatyushaXS (Allen-Zhu, 2018a) to solve problem (43) in Õ(|S| + |S|^{3/4}√(L2/ε̃)) stochastic Hessian-vector product (i.e., ∇²fi(xk)u) evaluations. The following result is taken from (Agarwal et al., 2017, Section G.3) and gives the overall computational complexity of AppxPCA.
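As a concrete matrix-free stand-in for the KatyushaXS inner solver (and therefore without its |S|^{3/4} rate), the linear system (Hk − λI)u = v can also be solved by conjugate gradient using only Hessian-vector products, which is valid since Hk − λI ≻ 0 when λ < λmin:

```python
import numpy as np

def inverse_hvp_cg(hvp, lam, v, tol=1e-8, max_iter=500):
    """Solve (H - lam*I) u = v by conjugate gradient; hvp(p) returns H p.
    Requires lam < lambda_min(H) so the system is positive definite."""
    u = np.zeros_like(v)
    r = v.copy()                      # residual v - (H - lam*I) u
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = hvp(p) - lam * p         # (H - lam*I) p
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```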
Algorithm 9 Fast QCQP Solver
Input: Hk, gk, r, ε̃, ε̃1
1: Use AppxPCA to find (λ̃, ũ) satisfying (40), in which the matrix inverse is computed by KatyushaXS;
2: Use KatyushaXW to solve (41) up to accuracy ε̃1, i.e., find a vector h̃ such that q̃k(h̃) − q̃k* ≤ ε̃1 with high probability;
3: Return h̃ + (√(⟨h̃, ũ⟩² − ‖ũ‖²(‖h̃‖² − r²)) − ⟨h̃, ũ⟩) ũ/‖ũ‖².
Algorithm 10 STRfree+
1: In the same setting as MetaAlgorithm 7,
2: construct gradient estimator gk by Estimator 4;
3: construct Hessian estimator Hk by
4:   Option I: Hk := ∇²F(xk);
5:   Option II: Draw s samples indexed by H and let Hk := ∇²f(xk; H);
6: use Algorithm 9 to solve QCQP subproblems.

Lemma E.2. Let Hk = (1/|S|) Σ_{i∈S} ∇²fi(xk) ∈ R^{d×d}, where ‖∇²fi(xk)‖ ≤ L2. With probability at least 1 − p, AppxPCA produces a unit vector u satisfying uᵀHku ≤ λmin + ε̃. The total stochastic Hessian-vector product oracle complexity is Õ(|S| + |S|^{3/4}√(L2/ε̃)).
E.3 SOLVING THE CONVEX QCQP
In what follows, we show that the convex problem (41) can be solved efficiently. We first observe that problem (41) has a finite-sum structure and can be rewritten as an unconstrained problem of the form

min_{h∈R^d} (1/|S|) Σ_{i∈S} q̃ki(h) + Ψ(h) = (1/|S|) Σ_{i∈S} (⟨gk, h⟩ + ½⟨(∇²fi(xk) − μ̃I)h, h⟩ + ½μ̃r²) + Ψ(h),   (45)

where Ψ(h) = 0 if ‖h‖ ≤ r and Ψ(h) = +∞ otherwise. We note that each q̃ki(h) in (45) has a (4L2)-Lipschitz continuous gradient since ‖∇²fi(xk)‖ ≤ L2 and ε̃ ≤ 2L2. Therefore, we can use KatyushaXW (Allen-Zhu, 2018a) to solve (45). By (Allen-Zhu, 2018a, Theorem 4.6), KatyushaXW finds a point h such that E[q̃k(h) − q̃k*] ≤ ε̃1 using Õ(|S| + |S|^{3/4}√L2 · r/√ε̃1) stochastic Hessian-vector products, where ε̃1 is a target accuracy to be determined later.
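As a plain (slower) alternative to KatyushaXW, the constrained convex problem (41) can be handled by projected gradient descent over the Euclidean ball; the step size 1/lip and the iteration budget below are illustrative assumptions (here ‖Hk − μ̃I‖ ≤ 4L2, so lip = 4·L2 works):

```python
import numpy as np

def projected_gradient_qcqp(g, hvp_shifted, radius, lip, iters=1000):
    """Minimize <g,h> + 0.5*<(H - mu*I)h, h> over ||h|| <= radius by
    projected gradient descent -- a simple stand-in for KatyushaXW.
    hvp_shifted(h) returns (H - mu*I) h; lip bounds its spectral norm."""
    h = np.zeros_like(g)
    for _ in range(iters):
        h = h - (g + hvp_shifted(h)) / lip        # gradient step
        nrm = np.linalg.norm(h)
        if nrm > radius:
            h *= radius / nrm                     # project onto the ball
    return h
```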
E.4 PUTTING IT ALL TOGETHER
The complete procedure of our fast QCQP solver is summarized in Algorithm 9. Combining all the above results and setting r = √(ε/L2), ε̃ = √(εL2)/2, and ε̃1 = ε̃r², one can find an approximate solution to QCQP (8) satisfying requirement (24) in Õ(|S| + |S|^{3/4}L2^0.25/ε^0.25) stochastic Hessian-vector product evaluations. By replacing the Lanczos method with this solver in STRfree, we derive a new Hessian-free method called STRfree+, which is summarized in Algorithm 10. The following theorem establishes the overall runtime complexity of STRfree+ for finding an ε-SOSP.
Theorem E.1. Consider Algorithm 10 for solving problem (1). Let ζ = 1/3, r = √(ε/L2), K = 4√L2 Δ/ε^1.5, T = 32 log(2/δ), c1 = 600, c2 = 500, and s = (32L1²/(εL2)) log(4d/δ). The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. Besides, let ε̃ = √(εL2)/2 and ε̃1 = ε̃r² in Algorithm 9. To find an O(ε)-SOSP w.p. at least 1 − δ, the runtime complexity is Õ(d · min{n/ε^1.5 + n^0.75/ε^1.75, 1/ε^2.5 + √n/ε²} log(1/δ)).
Proof. The proof directly follows from that in Sec. D.
We compare the runtime complexity of STRfree and STRfree+ with existing Hessian-free methods in Table 2. One can see that STRfree strictly outperforms Hessian-free Cubic. Besides, STRfree outperforms Fast-Cubic if n ≥ Ω(1/ε^{4/3}), which is a mild condition for large-scale problems in the moderate accuracy case. STRfree+ strictly outperforms both Hessian-free Cubic and Fast-Cubic. We note that the runtime analyses in (Tripuraneni et al., 2018; Zhou & Gu, 2019) rely on an additional assumption, which states that for all x, with probability 1,

‖∇fi(x) − ∇F(x)‖ ≤ σ.   (46)

Under this additional assumption, one can use the same argument as in Sections B.2 and B.3 to prove that STRfree achieves a runtime complexity of Õ(d · min{n/ε^1.75, n^0.5/ε² + 1/ε^2.75, 1/ε³})¹. Similarly, the runtime complexity of STRfree+ would be Õ(d · min{n/ε^1.5 + n^0.75/ε^1.75, 1/ε^2.5 + √n/ε², 1/ε³}). In this sense, both STRfree and STRfree+ outperform Stochastic Cubic and SRVRCfree.
F ADDITIONAL EXPERIMENTAL RESULTS
F.1 MORE EXPERIMENTAL DETAILS
Descriptions of Testing Datasets. We briefly introduce the seven testing datasets used in the manuscript. Six of them (a9a, ijcnn, codrna, phishing, w8a, and epsilon) are provided by the LibSVM website². Their detailed information is summarized in Table 3; the datasets differ from each other in feature dimension, number of training samples, etc.
¹To obtain this complexity, one needs to replace the full gradient in line 2 of Estimator 4 (and line 6 of MetaAlgorithm 7) with a mini-batch stochastic gradient when n ≥ Ω(1/ε²).
²https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Experimental Settings. In the manuscript, following SVRC (Zhou et al., 2018c) and Lite-SVRC (Zhou et al., 2018b), we select hyper-parameters from a grid, namely s1 from {0.2n, 0.6n, n}, s2 from {0.01n, 0.1n, 0.2n}, and p1 and p2 from {0.01n^0.5, 0.05n^0.5, 0.1n^0.5}. For the Hessian estimation at the beginning of each p2 iterations we use the full Hessian; similarly, for the gradient estimation at the beginning of each p1 iterations we adopt the full gradient.
Memory Analysis. SVRC, Lite-SVRC, and our method need to store the previous and current gradients and Hessians, so their memory complexity is 2(d² + d). TR, CR (ARC), and SCR only need the current gradient and Hessian and thus have complexity d² + d. The memory costs are therefore of the same order, while our method is much faster than TR, CR, and SCR, as validated by both theory and experiments.
F.2 MORE EXPERIMENTS
Here we give more experimental results on the gradient norm versus the algorithm running time and the Hessian sample complexity. Due to the space limit, the manuscript only provides the gradient-norm results on the codrna dataset; here we provide results on the a9a and ijcnn datasets in Figure 4. One can observe that, on both the logistic regression with non-convex regularizer and the nonlinear least squares problems, the proposed algorithm always shows sharper convergence behavior in terms of both running time and Hessian sample complexity. These observations are consistent with Figure 2 in the manuscript. All these results demonstrate the high efficiency of our proposed algorithm and confirm our theoretical implications.

1. What is the main contribution of the paper in trust region methods?
2. What are the proposed methods for achieving approximate local minima in a stochastic regime?
3. How effective are the derived methods in the empirical evaluation?
4. Are there any minor points or suggestions for improvement in the paper?

Review
The authors propose a new analysis for trust region methods with approximate models. Using this result, they propose a number of methods to create stochastic trust region methods by constructing approximate quadratic models (based on a stochastic first and second order estimate) which satisfy the requirements for convergence. The effectiveness of the derived methods is evaluated empirically on two standard non-convex regression problems.
This paper is overall an interesting contribution which proposes a number of competitive methods for achieving approximate local minima in a stochastic regime, with both hessian based and “hessian-free” methods. A couple of minor points:
- In the experiments, it would be helpful to also include some measure of uncertainty (such as standard error bars) in the plots given the stochastic nature of the problem (although I do not expect high variance given the construction of the algorithm).
- It would be helpful to indicate which results still hold in the online setting (not finite sum). Indeed, from the proof of theorem 3.1, MetaAlgorithm 1 does not seem to rely on the finite sum setting, which is mostly used for analyzing the variance-reduced estimators. This would be helpful as it would enable MetaAlgorithm 1 to be used with appropriate variance-reduced estimators in settings beyond the finite-sum problems. |
More importantly, we emphasize that such explicit step size control is crucial to the sample efficiency of our variance reduced differential estimators. The essence of variance reduction is to exploit the correlations between the differentials in consecutive iterations. Intuitively, when two neighboring iterates are close, so are their differentials due to the Lipschitz continuity, and hence a smaller number of samples suffice to maintain the accuracy of the estimators. On the other hand, smaller step size reduces the per-iteration objective decrease which harms the convergence rate of the algorithm (see proof of Theorem 3.1). Therefore, the explicit step size control in trust region method allows us to well trade-off the per-iteration sample complexity and convergence rate, from which we can derive stochastic trust region approaches with the state-of-the-art sample efficiency. In contrast, existing trust region methods change the step size at every iteration according to progress made, which requires loss evaluations that can be as expensive as gradient computations (e.g. the non-convex linear model in Section 7) and is thus prohibitive for large-scale problems.
4 STOCHASTIC TRUST REGION METHOD: TYPE I
Having the inexact trust region method as prototype, we now present our first sample-efficient stochastic trust region method, namely STR1, in Algorithm 2 which emphasizes cheaper stochastic second-order oracle complexity. As Theorem 3.1 already guarantees the optimal convergence rate of MetaAlgorithm 1 when the gradient estimator gk and the Hessian estimator Hk meet the requirement (9), here we focus on constructing such novel differential estimators. Specifically, we first present our Hessian estimator in Estimator 3 and our first gradient estimator in Estimator 4, both of which exploit the trust region radius r = √ /L2 to reduce their variances.
4.1 HESSIAN ESTIMATOR
Our epoch-wise Hessian estimator Hk is given in Estimator 3, where p2 controls the epoch length and s2 (and optionally s′2) controls the minibatch size. At the beginning of each epoch, Estimator 3 has two options, designed for different target accuracy: Option I is preferable for the high accuracy
Estimator 3 Hessian Estimator Input: Epoch length p2, sample size s2, s′2 (optional)
1: if mod(k, p2)= 0 then 2: Option I: high accuracy case (small ) 3: Hk := ∇2F (xk); 4: Option II: low accuracy case (moderate ) 5: Draw s′2 samples indexed byH′ and let Hk := ∇2f(xk;H′); 6: else 7: Draw s2 samples indexed byH and let Hk := ∇2f(xk;H)−∇2f(xk−1;H) + Hk−1; 8: end if
Estimator 4 Gradient Estimator: Case (1) 1: if mod(k, p1)= 0 then 2: gk := ∇F (xk); 3: else 4: Draw s1 samples indexed by G and gk = ∇f(xk;G)−∇f(xk−1;G) + gk−1; 5: end if
case ( < O(1/n)) where we compute the full Hessian to avoid approximation error, and Option II is designed for the moderate accuracy case ( > O(1/n)) where we only need an approximate Hessian estimator. Then, p2 iterations follow with Hk defined in a recurrent manner. These recurrent estimators exist for the first-order case (Nguyen et al., 2017; Fang et al., 2018), but their bound only holds under the vector `2 norm. Here we generalize them into Hessian estimation with matrix spectral norm bound.
The following lemma analyzes the amortized stochastic second-order oracle (Hessian) complexity for Algorithm 3 to meet the requirement in Theorem 3.1. As we need an error bound under the spectral norm, we will appeal to the matrix Azuma’s inequality (Tropp, 2012). The proof is deferred to Appendix B.2.
Lemma 4.1. Assume Algorithm 2 takes the trust region radius r = √ /L2 as in Theorem 3.1. For any k ≥ 0, Estimator 3 produces estimator Hk for the second order differential ∇2F (xk) such that ‖Hk −∇2F (xk)‖ ≤ √ L2/3 with probability at least 1 − δ/K0 if we set (1) p2 = √ n and s2 = 32 √ n log(dK0/δ) in option I, or (2) p2 = L1/(2 √ L2), s′2 = 16L 2 1/( L2) log(dK0/δ),
and s2 = 32L1/( √ L2) log(dK0/δ) in option II. Here K0 is a constant to be determined later. Consequently the amortized per-iteration stochastic second-order oracle complexity to construct Hk is no more than 2s2 = min{64 √ n log dK0δ , 64L1√ L2 log dK0δ }.
4.2 GRADIENT ESTIMATOR: CASE (1)
When the stochastic second-order oracle complexity is prioritized, we directly employ the SPIDER gradient estimator to construct gk (Fang et al., 2018). Similar to the construction for Hk, the estimator gk is also constructed in an epoch-wise manner as presented in Estimator 4, where p1 controls the epoch length and s1 controls the minibatch size.
We now analyze the stochastic first-order oracle complexity needed to meet the requirement in Theorem 3.1.
Lemma 4.2. Assume Algorithm 2 takes the trust region radius $r = \sqrt{\epsilon/L_2}$. Estimator 4 produces an estimator $g^k$ of the first-order differential $\nabla F(x^k)$ such that $\|g^k - \nabla F(x^k)\| \leq \epsilon/6$ with probability at least $1 - \delta/K_0$ for any $k \geq 0$, if we set $p_1 = \max\{1, \sqrt{n\epsilon L_2/(cL_1^2\log\frac{K_0}{\delta})}\}$ and $s_1 = \min\{n, \sqrt{cnL_1^2\log(K_0/\delta)/(\epsilon L_2)}\}$, where the constant $c = 1152$ and $K_0$ is a constant to be determined later. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct $g^k$ is $\min\{n, \sqrt{4cnL_1^2\log(K_0/\delta)/(\epsilon L_2)}\}$.
The proof of Lemma 4.2 is similar to that of Lemma 4.1 and is deferred to Appendix B.3. These two lemmas only guarantee that the differential estimators satisfy requirement (9) in a single iteration; the guarantee can be extended to hold for all $k$ by using the union bound with $K_0 = 2K$, where $K$ denotes the number of iterations. Combining this lifted result with Theorem 3.1, we establish the computational complexity bound as follows.
Algorithm 5 STR2
Input: initial point $x^0$, step size $r$, number of iterations $K$
1: for $k = 1$ to $K$ do
2:   Construct gradient estimator $g^k$ by Estimator 6;
3:   Construct Hessian estimator $H^k$ by Estimator 3;
4:   Compute $h^k$ and $\lambda^k$ by solving (8);
5:   $x^{k+1} := x^k + h^k$;
6:   if $\lambda^k \leq 3\sqrt{\epsilon/L_2}$ then
7:     Output $x = x^{k+1}$;
8:   end if
9: end for
Estimator 6 Gradient Estimator: Case (2)
1: if mod$(k, p_1) = 0$ then
2:   Let $\tilde{x} := x^k$, $g^k := \nabla F(\tilde{x})$;
3: else
4:   Draw $s_1$ samples indexed by $\mathcal{G}$;
5:   $g^k := \nabla f(x^k;\mathcal{G}) - \nabla f(x^{k-1};\mathcal{G}) + g^{k-1} + [\nabla^2 F(\tilde{x}) - \nabla^2 f(\tilde{x};\mathcal{G})](x^k - x^{k-1})$;
6: end if
Corollary 4.1 (Computational Complexity of STR1). Assume Algorithm 2 uses Estimator 4 to construct the first-order differential estimator $g^k$ and Estimator 3 to construct the second-order differential estimator $H^k$. To find a $12\epsilon$-SOSP with probability at least $1 - \delta$, the overall stochastic first-order oracle complexity is $O(\min\{\frac{n\sqrt{L_2}\Delta}{\epsilon^{1.5}}, \frac{\sqrt{n}L_1}{\epsilon^2}\}\log(\frac{\sqrt{L_2}\Delta}{\delta\epsilon^{1.5}}))$ and the overall stochastic second-order oracle complexity is $O(\min\{\frac{\sqrt{nL_2}\Delta}{\epsilon^{1.5}}, \frac{L_1\Delta}{\epsilon^2}\}\log(\frac{d\sqrt{L_2}\Delta}{\delta\epsilon^{1.5}}))$.
From Corollary 4.1 we see that $\tilde{O}(\min\{\sqrt{n}/\epsilon^{1.5}, 1/\epsilon^2\})$ stochastic second-order oracle queries suffice for STR1 to find an $\epsilon$-SOSP, which is significantly better than both the subsampled cubic regularization method, $\tilde{O}(1/\epsilon^{2.5})$ (Kohler & Lucchi, 2017a), and the variance-reduction based ones, $\tilde{O}(n^{2/3}/\epsilon^{1.5})$ (Zhou et al., 2018b; Zhang et al., 2018). Recently, Zhou & Gu (2019) developed a stochastic recursive variance-reduced cubic regularization (SRVRC) method which finds an $(\epsilon, \sqrt{\epsilon})$-approximate local minimum with $\tilde{O}(\min\{n/\epsilon^{1.5}, 1/\epsilon^3\})$ SFO and $\tilde{O}(\min\{\sqrt{n}/\epsilon^{1.5}, 1/\epsilon^2\})$ SSO. But the result of SRVRC needs to assume the stochastic gradient to be bounded, i.e., $\|\nabla f_i(x) - \nabla F(x)\| \leq \sigma$. With this extra assumption, STR1 enjoys $\tilde{O}(\min\{n/\epsilon^{1.5}, \sqrt{n}/\epsilon^2, 1/\epsilon^3\})$ SFO and $\tilde{O}(\min\{\sqrt{n}/\epsilon^{1.5}, 1/\epsilon^2\})$ SSO. Thus, if $1/\epsilon \leq n \leq 1/\epsilon^2$, STR1 outperforms SRVRC; otherwise they have the same complexities.
5 STOCHASTIC TRUST REGION METHOD: TYPE II
In the above section, we focused on the setting where the stochastic second-order oracle complexity is prioritized over the stochastic first-order oracle complexity; in this setting, STR1 achieves state-of-the-art efficiency. In this section, we consider a different complexity measure where first-order and second-order oracle complexities are treated equally, and our goal is to minimize the maximum of the two. We note that currently the best result is the $\tilde{O}(n^{4/5}/\epsilon^{1.5})$ of the SVRC method (Zhou et al., 2018c). Since the Hessian estimator $H^k$ of STR1 already delivers the superior $\tilde{O}(\sqrt{n}/\epsilon^{1.5})$ stochastic Hessian complexity, in STR2 (see Algorithm 5) we retain Estimator 3 for second-order differential estimation and use Estimator 6 to further reduce the stochastic gradient complexity.
5.1 GRADIENT ESTIMATOR: CASE (2)
When stochastic gradient and Hessian complexities are equally important, we use the Hessian to improve the gradient estimation. Denote $x(a) = ax^t + (1-a)\tilde{x}$. From Assumption 2.3, we have
$$\|\nabla f_i(x^t) - \nabla f_i(\tilde{x}) - \nabla^2 f_i(\tilde{x})(x^t - \tilde{x})\| = \Big\|\int_0^1 [\nabla^2 f_i(x(a)) - \nabla^2 f_i(\tilde{x})](x^t - \tilde{x})\,da\Big\| \leq \frac{L_2}{2}\|x^t - \tilde{x}\|^2.$$
This property can be used to improve the bound of Lemma 4.2 for Estimator 4. Specifically, define the correction
$$c^k = [\nabla^2 F(\tilde{x}) - \nabla^2 f(\tilde{x};\mathcal{G})](x^k - x^{k-1}),$$
where $\tilde{x}$ is a reference point updated in an epoch-wise manner. Estimator 6 adds $c^k$ to the estimator in Estimator 4 (a minimal sketch is given below). Note that in Estimator 6 the first- and second-order oracle complexities are the same. We now analyze the first-order (and second-order) oracle complexity needed to meet requirement (9).
Lemma 5.1. Assume Algorithm 5 takes the trust region radius $r = \sqrt{\epsilon/L_2}$ as in Theorem 3.1. For any $k \geq 0$, Estimator 6 produces an estimator $g^k$ of the first-order differential $\nabla F(x^k)$ such that $\|g^k - \nabla F(x^k)\| \leq \epsilon/6$ with probability at least $1 - \delta/K_0$, if we set $p_1 = n^{0.25}$ and $s_1 = n^{0.75}c\log(K_0/\delta)$, where $c = 1152$ and $K_0$ is a constant to be determined. Consequently, the amortized per-iteration stochastic first-order oracle complexity to construct $g^k$ is $2s_1 = 2n^{0.75}c\log(K_0/\delta)$.
The proof of Lemma 5.1 is similar to that of Lemma 4.1 and is deferred to Appendix B.4. As in the previous section, Lemma 5.1 only guarantees that the gradient estimator satisfies requirement (9) in a single iteration; the result can be extended to hold for all $k$ by using the union bound with $K_0 = 2K$, which together with Theorem 3.1 gives the following corollary.
Corollary 5.1 (Computational Complexity of STR2). Algorithm 5 finds a $12\epsilon$-SOSP with probability at least $1 - \delta$, within $O(\frac{n^{0.75}\sqrt{L_2}\Delta}{\epsilon^{1.5}}\log(\frac{\sqrt{L_2}\Delta}{\delta\epsilon^{1.5}}))$ overall stochastic first-order oracle queries and $O(\frac{n^{0.75}\sqrt{L_2}\Delta}{\epsilon^{1.5}}\log(\frac{d\sqrt{L_2}\Delta}{\delta\epsilon^{1.5}}))$ overall stochastic second-order oracle queries.
Corollary 5.1 shows that to find an $\epsilon$-SOSP, both the SFO and SSO complexities of STR2 are $\tilde{O}(n^{3/4}/\epsilon^{1.5})$, which surpasses the best existing bound $\tilde{O}(n^{4/5}/\epsilon^{1.5})$ of Zhou et al. (2018c).
6 PRACTICAL STOCHASTIC TRUST REGION VARIANTS
6.1 HANDLING INEXACT QCQP SOLUTIONS
One drawback of MetaAlgorithm 1 is that it requires the exact solution to the QCQP subproblem (8) and uses the dual variable as a stopping criterion. We address this problem by developing a practical variant, MetaAlgorithm 7, which admits inexact QCQP solutions without access to the dual variable. This algorithm repeatedly invokes a procedure called INEXACTTRWEAK, which, as we shall see, outputs an $O(\epsilon)$-SOSP with a constant probability of $2/3$ in $O(1/\epsilon^{1.5})$ iterations. By repeatedly invoking INEXACTTRWEAK $\Theta(\log(1/\delta))$ times, MetaAlgorithm 7 boosts the success probability to $1-\delta$ for any desired $\delta$. This repeating technique has been studied by, e.g., Allen-Zhu & Li (2018) and Allen-Zhu (2018b). To test whether the $t$-th run outputs an $O(\epsilon)$-SOSP, we need to compute $\|\nabla F(x^t)\|$ and the smallest eigenvalue of $\nabla^2 F(x^t)$. The latter can be approximated by solving the QCQP
$$v^t := \mathop{\mathrm{argmin}}_{\|v\|\leq 1} \psi_t(v) = \langle H^t v, v\rangle, \qquad (10)$$
where $H^t$ is the full Hessian $\nabla^2 F(x^t)$ or its estimation. One can show that MetaAlgorithm 7 finds an $O(\epsilon)$-SOSP w.p. at least $1-\delta$ in $\tilde{O}(1/\epsilon^{1.5})$ iterations. We defer the detailed analysis to Appendix C.
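As an illustration, the smallest-eigenvalue QCQP (10) can be approximated with only Hessian-vector products, e.g., by power iteration on the shifted matrix shift·I − H; the sketch below assumes a user-supplied hvp callback and an upper bound shift ≥ ||H|| (e.g., L1), and is only one of several possible solvers, not the one the paper analyzes.

import numpy as np

def min_eig_estimate(hvp, d, shift, n_iter=200, seed=0):
    """Approximate min_{||v||<=1} <H v, v> via power iteration on (shift*I - H)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = shift * v - hvp(v)  # applying (shift*I - H) amplifies the bottom eigenvector of H
        v = w / np.linalg.norm(w)
    return v @ hvp(v), v  # Rayleigh quotient approximates the smallest eigenvalue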
6.2 HESSIAN-FREE IMPLEMENTATION
Based on MetaAlgorithm 7, we propose a Hessian-free method named STRfree, which is summarized in Algorithm 8. STRfree leverages the full/stochastic Hessian and Estimator 4 to construct $H^k$ and $g^k$, respectively. Besides, it uses the Lanczos method (Gould et al., 1999; Carmon & Duchi, 2018) as the QCQP solver, which can be implemented in a Hessian-free manner (i.e., using only Hessian-vector products without explicit Hessian matrix evaluations). Thus, $H^k$ is only accessed through Hessian-vector products and is never explicitly constructed. Since Hessian-vector products can be computed in linear time (in terms of the dimension $d$) for many machine learning problems (Allen-Zhu, 2018b; Agarwal et al., 2017), Hessian-free methods are usually more practical than Hessian-based ones for high-dimensional problems. The following theorem, whose proof can be found in Appendix D, establishes the runtime complexity (i.e., the total complexity of stochastic gradient and Hessian-vector product evaluations (Zhou & Gu, 2019)) of STRfree.
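For concreteness, a Hessian-vector product can be obtained by double backpropagation without ever forming the Hessian; a minimal PyTorch sketch on a toy quadratic (an illustration, not the paper's implementation) is:

import torch

def hvp(loss_fn, x, v):
    """Return (grad^2 F(x)) v via double backprop; no explicit Hessian is built."""
    loss = loss_fn(x)
    (g,) = torch.autograd.grad(loss, x, create_graph=True)
    (hv,) = torch.autograd.grad(g @ v, x)
    return hv

A = torch.randn(5, 5)
A = A @ A.T  # toy symmetric matrix, so F(x) = 0.5 x^T A x has Hessian A
x = torch.randn(5, requires_grad=True)
print(hvp(lambda z: 0.5 * z @ A @ z, x, torch.randn(5)))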
MetaAlgorithm 7 Inexact Trust Region Method II
Input: initial point $x^0$, step size $r$, number of inner iterations $K$, constants $\delta, \zeta \in (0,1)$, $c_1$, $c_2$, number of outer iterations $T = \Theta(\log(1/\delta))$, sample size $s$ (optional)
1: for $t = 1$ to $T$ do
2:   $x^t \leftarrow$ INEXACTTRWEAK$(x^0, r, K, \zeta)$;
3:   Option I: high accuracy case (small $\epsilon$)
4:     $H^t := \nabla^2 F(x^t)$;
5:   Option II: low accuracy case (moderate $\epsilon$)
6:     Draw $s$ samples indexed by $\mathcal{H}$ and let $H^t := \nabla^2 f(x^t;\mathcal{H})$;
7:   Compute $\tilde{v}^t$ by solving (10) up to accuracy $\sqrt{\epsilon L_2}$ with probability $1 - \delta/4$;
8:   if $\|\nabla F(x^t)\| \leq c_1\epsilon$ and $\psi_t(\tilde{v}^t) \geq -\sqrt{c_2\epsilon L_2}$ then
9:     return $x := x^t$;
10:  end if
11: end for
12: procedure INEXACTTRWEAK$(x^0, r, K, \zeta)$
13:   for $k = 0$ to $K-1$ do
14:     Compute $g^k$ and $H^k$ such that (9) holds with probability $1 - \frac{\zeta}{4K}$;
15:     Compute $\tilde{h}^k$ by solving (8) up to accuracy $\epsilon^{1.5}/\sqrt{L_2}$ with probability $1 - \frac{\zeta}{4K}$;
16:     $x^{k+1} := x^k + \tilde{h}^k$;
17:   end for
18:   Randomly select $\bar{k}$ from $\{0, \ldots, K-1\}$;
19:   return $x^{\bar{k}+1}$;
20: end procedure
Algorithm 8 STRfree
1: In the same setting as MetaAlgorithm 7,
2: construct gradient estimator $g^k$ by Estimator 4;
3: construct Hessian estimator $H^k$ by
4:   Option I: $H^k := \nabla^2 F(x^k)$;
5:   Option II: Draw $s$ samples indexed by $\mathcal{H}$ and let $H^k := \nabla^2 f(x^k;\mathcal{H})$;
6: use the Lanczos method to solve QCQP subproblems.
Theorem 6.1. Consider Algorithm 8 for solving problem (1). Let $\zeta = 1/3$, $r = \sqrt{\epsilon/L_2}$, $K = 4\sqrt{L_2}\Delta/\epsilon^{1.5}$, $T = 32\log(2/\delta)$, $c_1 = 600$, $c_2 = 500$, and $s = \frac{32L_1^2}{\epsilon L_2}\log(4d/\delta)$. The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. The number of iterations of the Lanczos method is set to $\tilde{O}(1/(L_2\epsilon)^{0.25})$. To find an $O(\epsilon)$-SOSP w.p. at least $1-\delta$, the runtime complexity is $\tilde{O}(d\min\{n/\epsilon^{1.75}, 1/\epsilon^{2.75} + \sqrt{n}/\epsilon^2\}\log(1/\delta))$.
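The Lanczos solver referenced above touches $H^k$ only through matrix-vector products; a bare-bones sketch of the m-step Lanczos tridiagonalization (omitting the reorthogonalization and breakdown handling a robust implementation would add) is:

import numpy as np

def lanczos_tridiag(hvp, d, m, seed=0):
    """m-step Lanczos: returns diagonal alphas, off-diagonal betas, basis Q."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((d, m))
    alphas, betas = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(d), 0.0
    for j in range(m):
        Q[:, j] = q
        w = hvp(q) - beta * q_prev  # one Hessian-vector product per iteration
        alphas[j] = q @ w
        w -= alphas[j] * q
        beta = np.linalg.norm(w)
        if j < m - 1:
            betas[j] = beta
            q_prev, q = q, w / beta
    return alphas, betas, Q  # the trust region subproblem is then solved in span(Q)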
To solve the QCQP more efficiently, we develop a faster solver based on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaXW (Allen-Zhu, 2018a); see details in Appendix E. By replacing the Lanczos method with this solver in STRfree, we further improve the runtime complexity to $\tilde{O}(d\min\{n/\epsilon^{1.5} + n^{0.75}/\epsilon^{1.75}, 1/\epsilon^{2.5} + \sqrt{n}/\epsilon^2\})$. We call this new algorithm STRfree+; its details can be found in Appendix E. Table 2 shows that, in terms of runtime complexity, both STRfree and STRfree+ outperform existing methods. See more comparison and discussion in Appendix E.4.
7 EXPERIMENTS
Here we compare the proposed STR with several state-of-the-art (stochastic) cubic regularized algorithms and trust region approaches, including the trust region (TR) algorithm (Conn et al., 2000), adaptive cubic regularization (ARC) (Cartis et al., 2011), sub-sampled cubic regularization (SCR) (Kohler & Lucchi, 2017a), stochastic variance-reduced cubic (SVRC) (Zhou et al., 2018c), Lite-SVRC (Zhou et al., 2018b), and SRVRC (Zhou & Gu, 2019). For STR, we estimate the gradient as in case (1), because this estimator enjoys lower Hessian computational complexity than the one for case (2), and for most problems computing Hessian matrices is much more time-consuming than computing gradients. For the subproblems in all compared methods, we use the Lanczos method (Gould et al., 1999; Kohler & Lucchi, 2017a) to solve the subproblem approximately in a Hessian-related Krylov subspace. We run simulations on seven datasets from LibSVM (a9a, ijcnn, codrna, phishing, w8a, epsilon and mnist). We run our algorithm for 40 epochs and use the output as the optimal value $f^*$ for sub-optimality estimation. Note that the output already has a very small gradient norm, as verified by Figures 2 and 4 in the appendix. For all the considered algorithms, we set their initializations to zero and tune their hyper-parameters optimally. For more experimental settings, e.g. details of the testing datasets and algorithm parameter settings, please refer to Appendix F.
Two non-convex evaluation problems. Following (Kohler & Lucchi, 2017a; Zhou et al., 2018c), we evaluate all considered algorithms on two learning tasks: logistic regression with a non-convex regularizer and nonlinear least squares. Given $n$ data points $(x_i, y_i)$, where $x_i \in \mathbb{R}^d$ is the sample vector and $y_i \in \{-1, 1\}$ is the label, logistic regression with a non-convex regularizer aims at distinguishing the two classes by solving
$$\min_w \frac{1}{n}\sum_{i=1}^n \log(1 + \exp(-y_i w^T x_i)) + \lambda R(w;\alpha),$$
where the non-convex regularizer is defined as $R(w;\alpha) = \sum_{i=1}^d \alpha w_i^2/(1 + \alpha w_i^2)$. The nonlinear least squares problem fits the nonlinear data by minimizing
$$\min_w \frac{1}{2n}\sum_{i=1}^n \big[y_i - \phi(w^T x_i)\big]^2 + \lambda R(w;\alpha).$$
For both problems, we set the parameters $\lambda = 10^{-3}$ and $\alpha = 10$ for all testing datasets.
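A minimal NumPy sketch of the two objectives follows; tanh is used as a stand-in for the unspecified link $\phi$, which is an assumption, not the paper's choice.

import numpy as np

def nonconvex_reg(w, alpha=10.0):
    """R(w; alpha) = sum_i alpha*w_i^2 / (1 + alpha*w_i^2)."""
    return np.sum(alpha * w**2 / (1.0 + alpha * w**2))

def logistic_loss(w, X, y, lam=1e-3, alpha=10.0):
    """(1/n) sum_i log(1 + exp(-y_i w^T x_i)) + lam * R(w; alpha)."""
    return np.mean(np.log1p(np.exp(-y * (X @ w)))) + lam * nonconvex_reg(w, alpha)

def nonlinear_lsq(w, X, y, lam=1e-3, alpha=10.0, phi=np.tanh):
    """(1/2n) sum_i (y_i - phi(w^T x_i))^2 + lam * R(w; alpha)."""
    return 0.5 * np.mean((y - phi(X @ w))**2) + lam * nonconvex_reg(w, alpha)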
Comparison of Hessian-based algorithms. Figure 1 summarizes the results on the non-convex logistic regression problem. For each dataset, we report the function value gap versus the overall running time, which reflects the overall computational complexity of an algorithm, and the function value gap versus the Hessian sample complexity, which reveals the cost of Hessian computation. From Figure 1, one can observe that the proposed STR algorithm runs faster than the compared algorithms in terms of running time, showing the overall superiority of STR. Furthermore, STR also exhibits much sharper convergence curves in terms of the Hessian sample complexity, which is consistent with our theory: to achieve an $\epsilon$-accurate local minimum, the Hessian sample complexity of STR is $\tilde{O}(n^{0.5}/\epsilon^{1.5})$, which is superior to the complexity of the compared methods (see the comparison in Sec. 4.2). This also explains why our algorithm is faster in running time, since for most optimization problems the Hessian is much more computationally expensive than the gradient, and thus a lower Hessian sample complexity means faster overall convergence. Note that, as all compared methods need to compute the Hessian and gradient, their memory complexities are all $O(d^2 + d)$. Figure 2 displays the results of the compared algorithms on the nonlinear least squares problem; STR shows very similar behavior to that in Figure 1. Specifically, STR achieves the fastest convergence rate in terms of both running time and Hessian sample complexity. On the codrna dataset we further plot the gradient norm versus running time and Hessian sample complexity. One can observe that the gradient in STR vanishes significantly faster than in the other algorithms, which means that STR can find a stationary point with high efficiency. See Figure 4 in Appendix F.2 for more experimental results on gradient norm comparison. All these results confirm the superiority of the proposed STR.
Comparison of Hessian-free algorithms. Here we compare our proposed Hessian-free STR, namely STRfree, with other state-of-the-art Hessian-free algorithms on the two high-dimensional datasets, epsilon and mnist (see details in Appendix F). We do not compare STRfree+ here, as it is based on the AppxPCA method (Allen-Zhu & Li, 2016) and KatyushaXW (Allen-Zhu, 2018a), which require tuning many hyper-parameters. From the results in Figure 3, one can observe that, compared with the other algorithms, our STRfree achieves the best convergence speed, demonstrating its high efficiency in realistic applications. Besides, one can also see that STRfree is much faster than the Hessian-based STR, since computing the full Hessian is much more expensive than computing Hessian-vector products.
8 CONCLUSION
We proposed two stochastic trust region variants. Under two settings (depending on whether stochastic first- and second-order oracle complexities are treated equally), the proposed methods achieve state-of-the-art oracle complexities. We also proposed Hessian-free variants with the lowest runtime complexity. Experimental results corroborate our theoretical implications and the efficiency of the proposed algorithms.
A APPENDIX
In this appendix, Sec. B first provides the proofs for the results in the manuscript. Then, we analyze MetaAlgorithm 7 and STRfree in Sec. C and Sec. D, respectively. Next, in Sec. E, we develop a fast QCQP solver to further improve the computational complexity of STRfree. Finally, more experimental details and results are presented in Sec. F.
B DEFERRED PROOFS
B.1 PROOF OF THEOREM 3.1
Proof. For simplicity of notation, we denote
$$\nabla_k \overset{def}{=} \nabla F(x^k) - g^k \quad \text{and} \quad \nabla_k^2 \overset{def}{=} \nabla^2 F(x^k) - H^k.$$
From Assumption 2.3 we have
$$F(x^{k+1}) \leq F(x^k) + \langle\nabla F(x^k), h^k\rangle + \frac{1}{2}\langle\nabla^2 F(x^k)h^k, h^k\rangle + \frac{L_2}{6}\|h^k\|^3 = F(x^k) + \langle\nabla_k + g^k, h^k\rangle + \frac{1}{2}\langle[\nabla_k^2 + H^k]h^k, h^k\rangle + \frac{L_2}{6}\|h^k\|^3.$$
Use the Cauchy–Schwarz inequality to obtain
$$F(x^{k+1}) \leq F(x^k) + \langle g^k, h^k\rangle + \frac{1}{2}\langle H^k h^k, h^k\rangle + \frac{L_2}{6}\|h^k\|^3 + \|\nabla_k\|\|h^k\| + \frac{1}{2}\|\nabla_k^2\|\|h^k\|^2. \qquad (11)$$
Requirement (9) together with the trust region radius $\|h\| \leq r = \sqrt{\epsilon/L_2}$ allows us to bound
$$\|\nabla_k\|\|h^k\| + \frac{1}{2}\|\nabla_k^2\|\|h^k\|^2 \leq \frac{1}{3}\cdot\frac{\epsilon^{1.5}}{\sqrt{L_2}}. \qquad (12)$$
The optimality of (5) indicates that there exists a dual variable $\lambda^k \geq 0$ such that (Corollary 7.2.2 in Conn et al. (2000))
$$\text{First Order:}\quad g^k + H^k h^k + \frac{\lambda^k L_2}{2}h^k = 0, \qquad (13)$$
$$\text{Second Order:}\quad H^k + \frac{\lambda^k L_2}{2}\cdot I \succcurlyeq 0, \qquad (14)$$
$$\text{Complementary:}\quad \lambda^k\cdot(\|h^k\| - r) = 0. \qquad (15)$$
Multiplying (13) by $h^k$, we have
$$\Big\langle g^k + H^k h^k + \frac{\lambda^k L_2}{2}h^k,\ h^k\Big\rangle = 0. \qquad (16)$$
Additionally, using (14) we have
$$\Big\langle\Big(H^k + \frac{\lambda^k L_2}{2}I\Big)h^k,\ h^k\Big\rangle \geq 0,$$
which together with (16) gives
$$\langle g^k, h^k\rangle \leq 0. \qquad (17)$$
Moreover, the complementary condition (15) implies $\|h^k\| = \sqrt{\epsilon/L_2}$, as we have $\lambda^k > 3\sqrt{\epsilon/L_2} > 0$ before MetaAlgorithm 1 terminates. Plug (12), (16), and (17) into (11) and use $\|h^k\| = \sqrt{\epsilon/L_2}$:
$$F(x^{k+1}) \leq F(x^k) - \frac{L_2\lambda^k}{4}\cdot\frac{\epsilon}{L_2} + \frac{1}{2}\cdot\frac{\epsilon^{1.5}}{\sqrt{L_2}}. \qquad (18)$$
Therefore, if we have $\lambda^k > 3\epsilon^{0.5}/\sqrt{L_2}$, then
$$F(x^{k+1}) \leq F(x^k) - \frac{1}{4\sqrt{L_2}}\cdot\epsilon^{1.5}. \qquad (19)$$
Using Assumption 2.1, we find $\lambda^k \leq 3\epsilon^{0.5}/\sqrt{L_2}$ in no more than $4\sqrt{L_2}\cdot(F(x^0) - F(x^*))/\epsilon^{1.5}$ iterations. We now show that once $\lambda^k \leq 3\epsilon^{0.5}/\sqrt{L_2}$, then $x^{k+1}$ is already an $O(\epsilon)$-SOSP. From (13), we have
$$\|g^k + H^k h^k\| = \frac{L_2\lambda^k}{2}\cdot\|h^k\| \leq 2\epsilon. \qquad (20)$$
The assumptions $\|\nabla_k\| \leq \epsilon/6$ and $\|\nabla_k^2\| \leq \sqrt{\epsilon L_2}/3$, together with the trust region radius $\|h\| \leq \sqrt{\epsilon/L_2}$, imply
$$\|\nabla F(x^k) + \nabla^2 F(x^k)h^k\| \leq \|g^k + H^k h^k\| + \|\nabla_k\| + \|\nabla_k^2\cdot h^k\| \leq 2.5\epsilon. \qquad (21)$$
On the other hand, use Assumption 2.3 to bound
$$\|\nabla F(x^{k+1}) - \nabla F(x^k) - \nabla^2 F(x^k)h^k\| \leq \frac{L_2}{2}\|h^k\|^2 \leq \frac{\epsilon}{2}.$$
Combining these two results gives $\|\nabla F(x^{k+1})\| \leq 3\epsilon$. Besides, using Assumption 2.3, $\|\nabla_k^2\| \leq \sqrt{\epsilon L_2}/3$, and (14), we derive the Hessian lower bound
$$\nabla^2 F(x^{k+1}) \succcurlyeq \nabla^2 F(x^k) - L_2\|h^k\|\,I \succcurlyeq H^k - \frac{\sqrt{\epsilon L_2}}{3}I - L_2\|h^k\|\,I \succcurlyeq -\sqrt{12\epsilon L_2}\,I.$$
Hence $x^{k+1}$ is a $12\epsilon$-stationary point. Additionally, we have $\|h^k\| = r$ by the complementary condition (15) for all but the last iteration.
B.2 PROOF OF LEMMA 4.1
Proof. Without loss of generality, we analyze the case $0 \leq k < p_2$ for ease of notation. We first focus on Option II; the proof for Option I follows a similar argument.
Option II: Define for $k = 0$ and $i \in [s_2']$
$$B_i^0 \overset{def}{=} \nabla^2 f_i(x^0) - \nabla^2 F(x^0),$$
and define for $k \geq 1$ and $i \in [s_2]$
$$B_i^k \overset{def}{=} \nabla^2 f_i(x^k) - \nabla^2 f_i(x^{k-1}) - (\nabla^2 F(x^k) - \nabla^2 F(x^{k-1})).$$
$\{B_i^k\}$ is a martingale difference sequence: for all $k$ and $i$,
$$\mathbb{E}[B_i^k \mid x^k] = 0.$$
Besides, we use Assumption 2.2 for $k = 0$ to bound
$$\|B_i^0\| \leq \|\nabla^2 f_i(x^0)\| + \|\nabla^2 F(x^0)\| \leq 2L_1, \qquad (22)$$
and use Assumption 2.3 for $k \geq 1$ to bound
$$\|B_i^k\| \leq \|\nabla^2 f_i(x^k) - \nabla^2 f_i(x^{k-1})\| + \|\nabla^2 F(x^k) - \nabla^2 F(x^{k-1})\| \leq 2\sqrt{\epsilon L_2}.$$
From the construction of $H^k$, we have
$$H^k - \nabla^2 F(x^k) = \sum_{i=1}^{s_2'}\frac{B_i^0}{s_2'} + \sum_{j=1}^{k}\sum_{i=1}^{s_2}\frac{B_i^j}{s_2}.$$
Thus, using the matrix Azuma inequality (Theorem 7.1 of Tropp (2012)) and $k \leq p_2$, we have
$$\Pr\{\|H^k - \nabla^2 F(x^k)\| \geq t\} \leq d\cdot\exp\Big\{-\frac{t^2/8}{\sum_{i=1}^{s_2'}4L_1^2/s_2'^2 + \sum_{j=1}^k\sum_{i=1}^{s_2}4\epsilon L_2/s_2^2}\Big\} \leq d\cdot\exp\Big\{-\frac{t^2/8}{4L_1^2/s_2' + 4p_2\epsilon L_2/s_2}\Big\}.$$
Consequently, we have
$$\Pr\{\|H^k - \nabla^2 F(x^k)\| \leq \sqrt{\epsilon L_2}\} \geq 1 - \delta/K_0$$
by taking $t = \sqrt{\epsilon L_2}$, $s_2' = 16L_1^2/(\epsilon L_2)\log(dK_0/\delta)$, $s_2 = 32L_1/(\sqrt{\epsilon L_2})\log(dK_0/\delta)$, and $p_2 = L_1/(2\sqrt{\epsilon L_2})$.
Option I: The proof is similar to that of Option II, except that we replace $B_i^0$ with the zero matrix. In this case, the matrix Azuma inequality implies
$$\Pr\{\|H^k - \nabla^2 F(x^k)\| \geq t\} \leq d\cdot\exp\Big\{-\frac{t^2/8}{\sum_{j=1}^k\sum_{i=1}^{s_2}4\epsilon L_2/s_2^2}\Big\} \leq d\cdot\exp\Big\{-\frac{t^2/8}{4p_2\epsilon L_2/s_2}\Big\}.$$
Thus, by taking $t = \sqrt{\epsilon L_2}$, $s_2 = 32\sqrt{n}\log(dK_0/\delta)$, and $p_2 = \sqrt{n}$, we have the result.
Amortized Complexity: In Option I, the choice of parameters ensures that $n \leq p_2 \times s_2$, and in Option II that $s_2' \leq p_2 \times s_2$. Consequently, the amortized stochastic second-order oracle complexity is bounded from above by $2s_2$.
B.3 PROOF OF LEMMA 4.2
Without loss of generality, we analyze the case $0 \leq k < p_1$ for ease of notation. Define for $k \geq 1$ and $i \in [s_1]$
$$a_i^k \overset{def}{=} \nabla f_i(x^k) - \nabla f_i(x^{k-1}) - (\nabla F(x^k) - \nabla F(x^{k-1})).$$
$\{a_i^k\}$ is a martingale difference sequence: for all $k$ and $i$,
$$\mathbb{E}[a_i^k \mid x^k] = 0.$$
Besides, $a_i^k$ has bounded norm:
$$\|a_i^k\| \leq \|\nabla f_i(x^k) - \nabla f_i(x^{k-1})\| + \|\nabla F(x^k) - \nabla F(x^{k-1})\| \leq L_1\|x^k - x^{k-1}\| + L_1\|x^k - x^{k-1}\| \leq 2L_1\sqrt{\epsilon/L_2}. \qquad (23)$$
From the construction of $g^k$, we have
$$g^k - \nabla F(x^k) = \sum_{j=1}^k\sum_{i=1}^{s_1}\frac{a_i^j}{s_1}.$$
Recall Azuma's inequality. Using $k \leq p_1$, we have
$$\Pr\{\|g^k - \nabla F(x^k)\| \geq t\} \leq \exp\Big\{-\frac{t^2/8}{\sum_{j=1}^k\sum_{i=1}^{s_1}\frac{4\epsilon L_1^2}{L_2 s_1^2}}\Big\} \leq \exp\Big\{-\frac{t^2/8}{4\epsilon L_1^2 p_1/(s_1 L_2)}\Big\}.$$
Take $t = \epsilon/6$ and denote $c = 1152$. To ensure that
$$\Pr\{\|g^k - \nabla F(x^k)\| \geq \epsilon/6\} \leq \delta/K_0,$$
we need $\frac{cL_1^2}{\epsilon L_2}\log\frac{K_0}{\delta} \leq \frac{s_1}{p_1}$. The best amortized stochastic first-order oracle complexity can be obtained by solving the following two-dimensional program:
$$\min_{p_1\geq 1,\,s_1\geq 1}\ (n + s_1(p_1 - 1))/p_1 \quad \text{s.t.} \quad \frac{cL_1^2}{\epsilon L_2}\log\frac{K_0}{\delta} \leq \frac{s_1}{p_1},$$
which has the solution $s_1 = \min\Big\{n, \sqrt{n\cdot\frac{cL_1^2\log\frac{K_0}{\delta}}{\epsilon L_2}}\Big\}$ and $p_1 = \max\Big\{1, \sqrt{n\cdot\frac{\epsilon L_2}{cL_1^2\log\frac{K_0}{\delta}}}\Big\}$. Note that when we take $s_1 = n$, we directly compute $g^k = \nabla F(x^k)$ without sampling. The amortized stochastic first-order oracle complexity is obtained by plugging in the choice of $s_1$ and $p_1$, which completes the proof.
B.4 PROOF OF LEMMA 5.1
Without loss of generality, we analyze the case $0 \leq k < p_1$ for ease of notation. Define for $k \geq 1$ and $i \in [s_1]$
$$b_i^k \overset{def}{=} \nabla f_i(x^k) - \nabla f_i(x^{k-1}) - \nabla^2 f_i(\tilde{x})(x^k - x^{k-1}) - [\nabla F(x^k) - \nabla F(x^{k-1}) - \nabla^2 F(\tilde{x})(x^k - x^{k-1})].$$
$\{b_i^k\}$ is a martingale difference sequence: for all $k$ and $i$, $\mathbb{E}[b_i^k \mid x^k] = 0$. Besides, $b_i^k$ has bounded norm:
$$\|b_i^k\| \leq \|\nabla f_i(x^k) - \nabla f_i(x^{k-1}) - \nabla^2 f_i(\tilde{x})(x^k - x^{k-1})\| + \|\nabla F(x^k) - \nabla F(x^{k-1}) - \nabla^2 F(\tilde{x})(x^k - x^{k-1})\|.$$
We can bound the first term as follows:
$$\|\nabla f_i(x^k) - \nabla f_i(x^{k-1}) - \nabla^2 f_i(\tilde{x})(x^k - x^{k-1})\| = \Big\|\int_0^1\big[\nabla^2 f_i(x^{k-1} + t(x^k - x^{k-1})) - \nabla^2 f_i(\tilde{x})\big](x^k - x^{k-1})\,dt\Big\|$$
$$\leq \int_0^1 L_2\|tx^k + (1-t)x^{k-1} - \tilde{x}\|\,dt\cdot\|x^k - x^{k-1}\| \leq \int_0^1\big(t\|x^k - \tilde{x}\| + (1-t)\|x^{k-1} - \tilde{x}\|\big)dt\cdot L_2 r \leq L_2 k r^2,$$
where the first inequality follows from Assumption 2.3 and the last inequality holds because $\|x^k - \tilde{x}\| \leq kr$ and $\|x^{k-1} - \tilde{x}\| \leq kr$, where $r$ is the trust region radius. Similarly, we have $\|\nabla F(x^k) - \nabla F(x^{k-1}) - \nabla^2 F(\tilde{x})(x^k - x^{k-1})\| \leq L_2 k r^2$. Thus, we bound
$$\|b_i^k\| \leq 2L_2 k r^2 \leq 2p_1\epsilon.$$
From the construction of $g^k$, we have
$$g^k - \nabla F(x^k) = \sum_{j=1}^k\sum_{i=1}^{s_1}\frac{b_i^j}{s_1}.$$
We use $k \leq p_1$ and Azuma's inequality to bound
$$\Pr\{\|g^k - \nabla F(x^k)\| \geq t\} \leq \exp\Big\{-\frac{t^2/8}{\sum_{j=1}^k\sum_{i=1}^{s_1}\frac{4p_1^2\epsilon^2}{s_1^2}}\Big\} \leq \exp\Big\{-\frac{t^2/8}{4\epsilon^2 p_1^3/s_1}\Big\}.$$
Thus, by taking $t = \epsilon/6$ and $c = 1152$, we need $\frac{s_1}{p_1^3} \geq c\log\frac{K_0}{\delta}$. Further, we want $s_1 p_1 \simeq O(n)$, and hence we take $p_1 = n^{0.25}$ and $s_1 = n^{0.75}c\log\frac{K_0}{\delta}$. The amortized stochastic first-order oracle complexity is bounded by $2s_1$.
C ANALYSIS OF METAALGORITHM 7
We first show that INEXACTTRWEAK finds an $O(\epsilon)$-SOSP in $O(1/\epsilon^{1.5})$ iterations with probability at least $2/3$, as stated in the following lemma.
Lemma C.1. Consider problem (1) under Assumptions 2.1-2.3. Suppose that the differential estimators $g^k$ and $H^k$ satisfy Eqn. (9) with probability at least $1 - \frac{\zeta}{4K}$. Besides, suppose that $\tilde{h}^k$ is an approximate solution to (8) such that w.p. $1 - \frac{\zeta}{4K}$,
$$\langle g^k, \tilde{h}^k\rangle + \frac{1}{2}\langle H^k\tilde{h}^k, \tilde{h}^k\rangle \leq \langle g^k, h^k\rangle + \frac{1}{2}\langle H^k h^k, h^k\rangle + \frac{\epsilon^{1.5}}{\sqrt{L_2}}, \qquad (24)$$
where $h^k$ is a global solution to (8). By setting $\zeta = 1/3$, $r = \sqrt{\epsilon/L_2}$, and $K = 4\sqrt{L_2}\Delta/\epsilon^{1.5}$, INEXACTTRWEAK outputs a $500\epsilon$-SOSP w.p. at least $2/3$.
Proof. Combining (11) and (24), we have w.p. $1 - \frac{\zeta}{4K}$,
$$F(x^{k+1}) \leq F(x^k) + \langle g^k, h^k\rangle + \frac{1}{2}\langle H^k h^k, h^k\rangle + \frac{\epsilon^{1.5}}{\sqrt{L_2}} + \frac{L_2}{6}\|\tilde{h}^k\|^3 + \|\nabla_k\|\|\tilde{h}^k\| + \frac{1}{2}\|\nabla_k^2\|\|\tilde{h}^k\|^2, \qquad (25)$$
where $h^k$ is a global solution to the QCQP (8) and $\tilde{h}^k$ is an approximate solution satisfying (24). We let $\lambda^k$ denote the dual variable corresponding to the global solution $h^k$, as defined in Lemma 2.1. We note that $h^k$ and $\lambda^k$ are used only in our analysis; the INEXACTTRWEAK procedure only requires the approximate solution $\tilde{h}^k$, without knowledge of $h^k$ or $\lambda^k$.
By the assumption that (9) holds with probability $1 - \frac{\zeta}{4K}$ and the fact that $\|\tilde{h}^k\| \leq r = \sqrt{\epsilon/L_2}$, we have w.p. $1 - \frac{\zeta}{4K}$,
$$\frac{L_2}{6}\|\tilde{h}^k\|^3 + \|\nabla_k\|\|\tilde{h}^k\| + \frac{1}{2}\|\nabla_k^2\|\|\tilde{h}^k\|^2 \leq \frac{\epsilon^{1.5}}{2\sqrt{L_2}}. \qquad (26)$$
Plugging (16), (17), and (26) into (25) and applying the union bound, we have w.p. at least $1 - \frac{\zeta}{2K}$,
$$F(x^{k+1}) \leq F(x^k) - \frac{L_2\lambda^k\|h^k\|^2}{4} + \frac{3\epsilon^{1.5}}{2\sqrt{L_2}} = F(x^k) - \frac{L_2\lambda^k r^2}{4} + \frac{3\epsilon^{1.5}}{2\sqrt{L_2}}, \qquad (27)$$
where the second equality follows from (15):
$$0 = \lambda^k(\|h^k\| - r) = \lambda^k(\|h^k\| - r)(\|h^k\| + r) = \lambda^k(\|h^k\|^2 - r^2). \qquad (28)$$
Summing inequality (27) from $k = 0$ to $K-1$ and applying the union bound, we have w.p. at least $1 - \zeta/2$,
$$\frac{1}{K}\sum_{k=0}^{K-1}\lambda^k \leq \frac{4(F(x^0) - F(x^{K+1}))}{L_2 r^2 K} + \frac{6\epsilon^{1.5}}{L_2^{1.5}r^2} \leq \frac{4\Delta}{\epsilon K} + \frac{6\sqrt{\epsilon}}{\sqrt{L_2}}, \qquad (29)$$
where the second inequality follows from Assumption 2.1 and our choice of the trust region radius.
By sampling $\bar{k}$ uniformly from $\{0, \ldots, K-1\}$, we obtain
$$\mathbb{E}[\lambda^{\bar{k}}] = \frac{1}{K}\sum_{k=0}^{K-1}\lambda^k, \qquad (30)$$
where the expectation is taken over the randomness of $\bar{k}$. Combining (29) and (30) and taking $K = 4\Delta\sqrt{L_2}/\epsilon^{1.5}$, we have w.p. at least $1 - \zeta/2$,
$$\mathbb{E}[\lambda^{\bar{k}}] \leq \frac{7\sqrt{\epsilon}}{\sqrt{L_2}}. \qquad (31)$$
Since $\lambda^k$ is always non-negative, by Markov's inequality and the union bound, with probability at least $1 - \zeta$ we have
$$\lambda^{\bar{k}} \leq \frac{14\sqrt{\epsilon}}{\zeta\sqrt{L_2}}. \qquad (32)$$
By taking $\zeta = 1/3$, we have w.p. at least $2/3$ that $\lambda^{\bar{k}} \leq 42\sqrt{\epsilon/L_2}$. The rest of the proof is similar to Theorem 3.1, and we have the result.
The following theorem shows that MetaAlgorithm 7 finds an $O(\epsilon)$-SOSP w.p. $1-\delta$ after running INEXACTTRWEAK $\Theta(\log(1/\delta))$ times.
Theorem C.1 (Iteration Complexity of MetaAlgorithm 7). In the same setting as Lemma C.1, let $T = 32\log(2/\delta)$, $c_1 = 600$, $c_2 = 500$, and $s = \frac{32L_1^2}{\epsilon L_2}\log(4d/\delta)$. Then MetaAlgorithm 7 finds a $600\epsilon$-SOSP with probability at least $1-\delta$.
Proof. By Lemma C.1 and our choice of $T$, with probability $1 - \delta/2$, at least one of the $x^t$ is a $500\epsilon$-SOSP. On the other hand, since $\psi_t(\tilde{v}^t) \leq \psi_t(v^t) + \sqrt{\epsilon L_2}$ with probability $1 - \delta/4$, if $\psi_t(\tilde{v}^t) \geq -\sqrt{c_2\epsilon L_2}$, then, w.p. $1 - \delta/4$,
$$\psi_t(v^t) \geq \psi_t(\tilde{v}^t) - \sqrt{\epsilon L_2} \geq -\sqrt{c_2\epsilon L_2} - \sqrt{\epsilon L_2} \geq -\sqrt{550\epsilon L_2}, \qquad (33)$$
where the last inequality follows from our choice of $c_2$.
Option I: Since $H^t = \nabla^2 F(x^t)$ is the full Hessian, $\psi_t(v^t)$ is the smallest eigenvalue of $\nabla^2 F(x^t)$. Applying the union bound, we conclude that MetaAlgorithm 7 outputs a $600\epsilon$-SOSP w.p. $1-\delta$.
Option II: Let $B_i := \nabla^2 f_i(x^t) - \nabla^2 F(x^t)$ for $i \in \mathcal{H}$; then
$$H^t - \nabla^2 F(x^t) = \frac{1}{s}\sum_{i=1}^s B_i. \qquad (34)$$
By Assumption 2.2, we have
$$\|B_i\| \leq \|\nabla^2 f_i(x^t)\| + \|\nabla^2 F(x^t)\| \leq 2L_1. \qquad (35)$$
Applying the matrix Azuma inequality (Theorem 7.1 of Tropp (2012)) leads to
$$\Pr\{\|H^t - \nabla^2 F(x^t)\| \geq \sqrt{\epsilon L_2}\} \leq d\cdot\exp\Big(-\frac{\epsilon L_2 s}{32L_1^2}\Big). \qquad (36)$$
By taking $s = \frac{32L_1^2}{\epsilon L_2}\log(4d/\delta)$ and applying the union bound, we have with probability $1-\delta$,
$$\nabla^2 F(x^t) \succcurlyeq H^t - \sqrt{\epsilon L_2}\,I \succcurlyeq (\psi_t(v^t) - \sqrt{\epsilon L_2})I \succcurlyeq -\sqrt{600\epsilon L_2}\,I, \qquad (37)$$
where the last inequality follows from (33). This completes the proof.
D PROOF OF THEOREM 6.1
Proof. We first analyze the computational cost of the Lanczos method. By Corollary 2 in (Carmon & Duchi, 2018), for any desired accuracy $\tilde{\epsilon}$, the Lanczos method achieves this accuracy in $O\big(\frac{r}{\sqrt{\tilde{\epsilon}}}\log\frac{r\sqrt{d}}{\tilde{\epsilon}\,p}\big)$ Lanczos iterations w.p. at least $1-p$. Without loss of generality, we assume that the number of Lanczos iterations is strictly smaller than the dimension $d$; otherwise the QCQP subproblem can be solved exactly. We note that each Lanczos iteration involves the computation of one matrix-vector product. Therefore, to satisfy condition (24) in Lemma C.1, one needs to evaluate $\tilde{O}(1/(L_2\epsilon)^{0.25})$ Hessian-vector products of the form $H^k v$. Similarly, to solve (10) up to accuracy $\sqrt{\epsilon L_2}$ w.h.p., one needs to evaluate $\tilde{O}(1/(L_2\epsilon)^{0.25})$ Hessian-vector products of the form $H^t v$. In MetaAlgorithm 7, to verify whether the candidate solution $v^t$ is indeed an $O(\epsilon)$-SOSP, one needs at most $O(n)$ stochastic gradient evaluations and $O(\min\{n, \log(4d/\delta)L_1^2/(L_2\epsilon)\}/(L_2\epsilon)^{0.25})$ stochastic Hessian-vector product evaluations, where the latter follows from the proof of Theorem C.1. We proceed to analyze the computational complexity of the INEXACTTRWEAK procedure. Recall that the iteration complexity of MetaAlgorithm 7 is $O(\log(1/\delta)/\epsilon^{1.5})$. Following Lemma 4.2 and Corollary 4.1, the stochastic first-order oracle complexity is $\tilde{O}(\min\{n/\epsilon^{1.5}, \sqrt{n}/\epsilon^2\}\log(1/\delta))$. Following the proof of Lemma 4.1 and Corollary 4.1, when $p_2 = 1$, the overall stochastic Hessian sample complexity is $\tilde{O}(\min\{n/\epsilon^{1.5}, 1/\epsilon^{2.5}\}\log(1/\delta))$. Since it takes $\tilde{O}(1/\epsilon^{0.25})$ Lanczos iterations to meet condition (24), as stated above, the overall stochastic Hessian-vector product oracle complexity is $\tilde{O}(\min\{n/\epsilon^{1.75}, 1/\epsilon^{2.75}\}\log(1/\delta))$. Combining the stochastic first-order and Hessian-vector product complexities, the overall runtime is $\tilde{O}(d\min\{n/\epsilon^{1.75}, 1/\epsilon^{2.75} + \sqrt{n}/\epsilon^2\}\log(1/\delta))$.
E A FASTER HESSIAN-VECTOR BASED QCQP SOLVER
We recall from the previous section that, to approximately solve a quadratic subproblem in STRfree, the Lanczos method requires $\tilde{O}(\min\{n/\epsilon^{0.25}, 1/\epsilon^{1.25}\})$ stochastic Hessian-vector product evaluations. In this section, we propose a faster QCQP solver with an $\tilde{O}(\min\{n + n^{0.75}/\epsilon^{0.25}, 1/\epsilon\})$ complexity. Replacing the Lanczos method with this QCQP solver in STRfree results in a faster Hessian-free method, which we refer to as STRfree+.
E.1 CONVEX REFORMULATION OF QCQP
To begin with, we present a known result that is key to achieving a faster algorithm than the Lanczos method. We summarize this result in Lemma E.1, which shows that the trust region subproblem is equivalent to a convex QCQP.
Lemma E.1 (Convex Reformulation of QCQP (Flippo & Jansen, 1996; Wang & Xia, 2017)). Denote $\lambda_{min}$ as the smallest eigenvalue of $H^k$ and let $u_{min}$ be a corresponding eigenvector. W.l.o.g., we assume that $\langle g^k, u_{min}\rangle \leq 0$. Let $\mu = \min\{\lambda_{min}, 0\}$. Then the QCQP (8) is equivalent to the convex problem
$$\min_{h\in\mathbb{R}^d,\,\|h\|\leq r} q^k(h) = \langle g^k, h\rangle + \frac{1}{2}\langle(H^k - \mu I)h, h\rangle + \frac{1}{2}\mu r^2 \qquad (38)$$
in the sense that (8) and (38) have the same minimum function value. Moreover, when $\lambda_{min} < 0$, for any optimal solution $h_c^k$ of (38),
$$h_c^k + \frac{\sqrt{\langle h_c^k, u_{min}\rangle^2 - \|u_{min}\|^2(\|h_c^k\|^2 - r^2)} - \langle h_c^k, u_{min}\rangle}{\|u_{min}\|^2}\,u_{min} \qquad (39)$$
is a global minimizer of the original QCQP (8).
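A minimal NumPy sketch of this reformulation, assuming the exact eigenpair is available: solve the convexified problem (38) by projected gradient descent and, when $\lambda_{min} < 0$, lift the solution back via (39). This is only an illustration; the paper's actual solver uses AppxPCA and KatyushaXW.

import numpy as np

def solve_tr_via_convex(H, g, r, lam_min, u_min, steps=500):
    """Solve the trust region subproblem through the convex problem (38)."""
    mu = min(lam_min, 0.0)
    A = H - mu * np.eye(len(g))  # positive semidefinite by construction
    eta = 1.0 / (np.linalg.norm(A, 2) + 1e-12)  # step size 1/lambda_max(A)
    h = np.zeros_like(g)
    for _ in range(steps):
        h = h - eta * (g + A @ h)  # gradient step on the convex objective
        nrm = np.linalg.norm(h)
        if nrm > r:
            h *= r / nrm  # projection onto the ball ||h|| <= r
    if lam_min < 0:  # lift to the boundary along u_min as in (39)
        u = u_min
        disc = (h @ u)**2 - (u @ u) * (h @ h - r**2)
        h = h + ((np.sqrt(disc) - h @ u) / (u @ u)) * u
    return h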
To perform the above reformulation, one needs to compute the exact eigenpair $(\lambda_{min}, u_{min})$. Nevertheless, as we shall see, it suffices to compute an approximate eigenpair $(\tilde{\lambda}, \tilde{u})$ such that
$$\lambda_{min} \leq \tilde{\lambda} = \tilde{u}^T H^k\tilde{u} \leq \lambda_{min} + \tilde{\epsilon}, \qquad (40)$$
where $\tilde{\epsilon}$ is a target accuracy to be determined later. We note that $\tilde{\epsilon} \leq 2L_2$ w.l.o.g. since $\|H^k\| \leq L_2$. With this approximate eigenpair, it remains to solve the following convex problem
$$\min_{h\in\mathbb{R}^d,\,\|h\|\leq r} \tilde{q}^k(h) = \langle g^k, h\rangle + \frac{1}{2}\langle(H^k - \tilde{\mu}I)h, h\rangle + \frac{1}{2}\tilde{\mu}r^2, \qquad (41)$$
where $\tilde{\mu} = \min\{0, \tilde{\lambda} - \tilde{\epsilon}\}$. One can check that problem (41) approximates (38) well.
Corollary E.1. Let $q_*^k$ and $\tilde{q}_*^k$ be the minimum function values of (38) and (41), respectively. Assume $\lambda_{min} \leq \tilde{\lambda} = \tilde{u}^T H^k\tilde{u} \leq \lambda_{min} + \tilde{\epsilon}$. Then
$$|q_*^k - \tilde{q}_*^k| \leq \tilde{\epsilon}r^2. \qquad (42)$$
We note that the above convex reformulation approach divides an indefinite QCQP into two subproblems: (i) computation of an approximate eigenpair $(\tilde{\lambda}, \tilde{u})$; (ii) solving the convex problem (41). As we shall see, by exploiting the finite-sum structure of the Hessian $H^k$, these two subproblems can be solved efficiently. We treat them in the following two subsections, respectively.
E.2 FINDING THE SMALLEST EIGENVECTOR
To find a unit vector satisfying requirement (40), we resort to the AppxPCA method (Allen-Zhu & Li, 2016), which first finds an approximate eigenvalue $\lambda = \lambda_{min} - \tilde{\epsilon}$ via binary search and then applies the Power method to the positive definite matrix $(H^k - \lambda I)^{-1}$ for a logarithmic number of iterations. Computing $(H^k - \lambda I)^{-1}v$ for any vector $v$ is equivalent to solving the $\tilde{\epsilon}$-strongly convex problem (Allen-Zhu & Li, 2018)
$$\min_u \phi^k(u) := \frac{1}{2}u^T(H^k - \lambda I)u - \langle v, u\rangle. \qquad (43)$$
We note that $H^k = \frac{1}{|S|}\sum_{i\in S}\nabla^2 f_i(x^k)$. Specifically, in STRfree, either $|S| = n$ (i.e., $H^k$ is the full Hessian) or $|S| = \tilde{O}(L_1^2/(L_2\epsilon))$ by Lemma 4.1. Therefore, $\phi^k(\cdot)$ can be expressed as a sum of non-convex functions
$$\phi^k(u) = \frac{1}{|S|}\sum_{i\in S}\phi_i^k(u) = \frac{1}{|S|}\sum_{i\in S}\Big(\frac{1}{2}u^T(\nabla^2 f_i(x^k) - \lambda I)u - \langle v, u\rangle\Big). \qquad (44)$$
Observing that each $\phi_i^k$ is non-convex and has a $(4L_2)$-Lipschitz gradient, we can use KatyushaXS (Allen-Zhu, 2018a) to solve problem (43) in $\tilde{O}(|S| + |S|^{3/4}\sqrt{L_2/\tilde{\epsilon}})$ stochastic Hessian-vector product (i.e., $\nabla^2 f_i(x^k)u$) evaluations. The following result, taken from (Agarwal et al., 2017, Section G.3), gives the overall computational complexity of AppxPCA.
Algorithm 9 Fast QCQP Solver
Input: $H^k$, $g^k$, $r$, $\tilde{\epsilon}$, $\tilde{\epsilon}_1$
1: Use AppxPCA to find $(\tilde{\lambda}, \tilde{u})$ satisfying (40), in which the matrix inverse is solved by KatyushaXS;
2: Use KatyushaXW to solve (41) up to accuracy $\tilde{\epsilon}_1$, i.e., find a vector $\tilde{h}$ such that $\tilde{q}^k(\tilde{h}) - \tilde{q}_*^k \leq \tilde{\epsilon}_1$ with high probability;
3: Return $\tilde{h} + (\sqrt{\langle\tilde{h}, \tilde{u}\rangle^2 - \|\tilde{u}\|^2(\|\tilde{h}\|^2 - r^2)} - \langle\tilde{h}, \tilde{u}\rangle)\tilde{u}/\|\tilde{u}\|^2$.

Algorithm 10 STRfree+
1: In the same setting as MetaAlgorithm 7,
2: construct gradient estimator $g^k$ by Estimator 4;
3: construct Hessian estimator $H^k$ by
4:   Option I: $H^k := \nabla^2 F(x^k)$;
5:   Option II: Draw $s$ samples indexed by $\mathcal{H}$ and let $H^k := \nabla^2 f(x^k;\mathcal{H})$;
6: use Algorithm 9 to solve QCQP subproblems.

Lemma E.2. Let $H^k = \frac{1}{|S|}\sum_{i\in S}\nabla^2 f_i(x^k) \in \mathbb{R}^{d\times d}$, where $\|\nabla^2 f_i(x^k)\| \leq L_2$. With probability at least $1-p$, AppxPCA produces a unit vector $u$ satisfying $u^T H^k u \leq \lambda_{min} + \tilde{\epsilon}$. The total stochastic Hessian-vector product oracle complexity is $\tilde{O}(|S| + |S|^{3/4}\sqrt{L_2/\tilde{\epsilon}})$.
E.3 SOLVING THE CONVEX QCQP
In what follows, we show that the convex problem (41) can be solved efficiently. We first observe that problem (41) has a finite-sum structure and can be rewritten as an unconstrained problem of the form
$$\min_{h\in\mathbb{R}^d}\frac{1}{|S|}\sum_{i\in S}\tilde{q}_i^k(h) + \Psi(h) = \frac{1}{|S|}\sum_{i\in S}\Big(\langle g^k, h\rangle + \frac{1}{2}\langle(\nabla^2 f_i(x^k) - \tilde{\mu}I)h, h\rangle + \frac{1}{2}\tilde{\mu}r^2\Big) + \Psi(h), \qquad (45)$$
where $\Psi(h) = 0$ if $\|h\| \leq r$ and $\Psi(h) = +\infty$ otherwise. We note that each $\tilde{q}_i^k(h)$ in (45) has a $(4L_2)$-Lipschitz continuous gradient, since $\|\nabla^2 f_i(x^k)\| \leq L_2$ and $\tilde{\epsilon} \leq 2L_2$. Therefore, we can use KatyushaXW (Allen-Zhu, 2018a) to solve (45). By (Allen-Zhu, 2018a, Theorem 4.6), KatyushaXW finds a point $h$ such that $\mathbb{E}[\tilde{q}^k(h) - \tilde{q}_*^k] \leq \tilde{\epsilon}_1$ using $\tilde{O}(|S| + |S|^{3/4}\sqrt{L_2}\cdot r/\sqrt{\tilde{\epsilon}_1})$ stochastic Hessian-vector products, where $\tilde{\epsilon}_1$ is the target accuracy to be determined later.
E.4 PUTTING IT ALL TOGETHER
The complete procedure of our fast QCQP solver is summarized in Algorithm 9. Combining all the above results and setting $r = \sqrt{\epsilon/L_2}$, $\tilde{\epsilon} = \sqrt{\epsilon L_2}/2$, and $\tilde{\epsilon}_1 = \tilde{\epsilon}r^2$, one can find an approximate solution to QCQP (8) satisfying requirement (24) in $\tilde{O}(|S| + |S|^{3/4}L_2^{0.25}/\epsilon^{0.25})$ stochastic Hessian-vector product evaluations. By replacing the Lanczos method with this solver in STRfree, we derive a new Hessian-free method called STRfree+, which is summarized in Algorithm 10. The following theorem establishes the overall runtime complexity of STRfree+ for finding an $\epsilon$-SOSP.
Theorem E.1. Consider Algorithm 10 for solving problem (1). Let $\zeta = 1/3$, $r = \sqrt{\epsilon/L_2}$, $K = 4\sqrt{L_2}\Delta/\epsilon^{1.5}$, $T = 32\log(2/\delta)$, $c_1 = 600$, $c_2 = 500$, and $s = \frac{32L_1^2}{\epsilon L_2}\log(4d/\delta)$. The hyper-parameters in Estimator 4 are set to the same values as those in Lemma 4.2. Besides, let $\tilde{\epsilon} = \sqrt{\epsilon L_2}/2$ and $\tilde{\epsilon}_1 = \tilde{\epsilon}r^2$ in Algorithm 9. To find an $O(\epsilon)$-SOSP w.p. at least $1-\delta$, the runtime complexity is $\tilde{O}(d\min\{n/\epsilon^{1.5} + n^{0.75}/\epsilon^{1.75}, 1/\epsilon^{2.5} + \sqrt{n}/\epsilon^2\}\log(1/\delta))$.
Proof. The proof directly follows from that in Sec. D.
We compare the runtime complexity of STRfree and STRfree+ with existing Hessian-free methods in Table 2. One can see that STRfree strictly outperforms Hessian-free Cubic. Besides, STRfree outperforms Fast-Cubic if $n \geq \Omega(1/\epsilon^{4/3})$, which is a mild condition for large-scale problems in the moderate accuracy case. STRfree+ strictly outperforms both Hessian-free Cubic and Fast-Cubic. We note that the runtime analyses in (Tripuraneni et al., 2018; Zhou & Gu, 2019) rely on an additional assumption which states that for all $x$, with probability 1,
$$\|\nabla f_i(x) - \nabla F(x)\| \leq \sigma. \qquad (46)$$
Under this additional assumption, one can use the same argument as in Sections B.2 and B.3 to prove that STRfree achieves a runtime complexity of $\tilde{O}(d\min\{n/\epsilon^{1.75}, n^{0.5}/\epsilon^2 + 1/\epsilon^{2.75}, 1/\epsilon^3\})$.¹ Similarly, the runtime complexity of STRfree+ would be $\tilde{O}(d\min\{n/\epsilon^{1.5} + n^{0.75}/\epsilon^{1.75}, 1/\epsilon^{2.5} + \sqrt{n}/\epsilon^2, 1/\epsilon^3\})$. In this sense, both STRfree and STRfree+ outperform Stochastic Cubic and SRVRCfree.
F ADDITIONAL EXPERIMENTAL RESULTS
F.1 MORE EXPERIMENTAL DETAILS
Descriptions of Testing Datasets. We briefly introduce the seven testing datasets used in the manuscript. Six of them are provided by the LibSVM website², including a9a, ijcnn, codrna, phishing, w8a and epsilon. The detailed information is summarized in Table 3. We can observe that these datasets differ from each other in feature dimension, number of training samples, etc.
¹To obtain this complexity, one needs to replace the full gradient in line 2 of Estimator 4 (and line 6 of MetaAlgorithm 7) with a mini-batch stochastic gradient when $n \geq \Omega(1/\epsilon^2)$.
²https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Experimental Settings. In the manuscript, following SVRC (Zhou et al., 2018c) and Lite-SVRC (Zhou et al., 2018b), we select hyper-parameters from a grid, namely $s_1$ from $\{0.2n, 0.6n, n\}$, $s_2$ from $\{0.01n, 0.1n, 0.2n\}$, and $p_1$ and $p_2$ from $\{0.01n^{0.5}, 0.05n^{0.5}, 0.1n^{0.5}\}$. For the Hessian estimation at the beginning of each $p_2$ iterations, we use the full Hessian. Similarly, for the gradient estimation at the beginning of each $p_1$ iterations, we adopt the full gradient.
Memory Analysis. SVRC, Lite-SVRC, and our method need to store the previous and current gradients and Hessians, so their memory complexity is $2(d^2 + d)$. TR, CR (ARC) and SCR only need the current gradient and Hessian and thus have complexity $d^2 + d$. The memory costs are therefore of the same order, but our method is much faster than TR, CR and SCR, as validated by both theory and experiments.
F.2 MORE EXPERIMENTS
Here we give more experimental results on the gradient norm versus the algorithm running time and the Hessian sample complexity. Due to the space limit, the manuscript only provides the gradient-norm results on the codrna dataset; here we provide results on the a9a and ijcnn datasets in Figure 4. One can observe that, on both the logistic regression with non-convex regularizer and the nonlinear least squares problems, the proposed algorithm always shows sharper convergence behavior in terms of both running time and Hessian sample complexity. These observations are consistent with the results in Figure 2 in the manuscript. All these results demonstrate the high efficiency of our proposed algorithm and confirm our theoretical implications. | 1. What is the focus of the paper regarding optimization algorithms?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works?
3. How does the reviewer assess the presentation and clarity of the paper's content?
4. Are there any typos or unclear statements in the paper that need to be addressed?
5. How does the reviewer evaluate the novelty and significance of the paper's contribution? | Review
This paper applies the SPIDER algorithm (Fang et al., 2018), originally developed for variance reduction in first-order stochastic optimization, to a second-order optimization algorithm, i.e., the trust region method.
Due to the new gradient and Hessian estimators in Fang et al., 2018, the proposed stochastic trust region algorithms in this paper achieve $O(\min\{1/\epsilon^2,\sqrt{n}/\epsilon^{1.5}\})$ stochastic second-order oracle (SSO) complexity. This result improves the SSO of stochastic variance-reduced cubic (SVRC) by a factor of $n^{1/6}$. This paper makes a moderate contribution through the new algorithms and improved complexity results. However, the idea of variance reduction is not novel, and the result is not surprising since the improvement comes purely from SPIDER (Fang et al., 2018); thus this work is somewhat incremental. The paper is in general fairly written, without many typos or unclear statements, but some places are verbose and unnecessarily complicated. Moreover, the comparison between this paper and existing work is not clear and straightforward. Lastly, the presentation of the current paper is unnecessarily verbose and long.
The comparison of this paper with a similar paper by Zhou & Gu (2019) is not convincing. In the remark after Corollary 4.1 and in Section 6.1, the authors mentioned that the SRVRC algorithm proposed by Zhou & Gu (2019) achieves similar complexity as this paper. But the authors did not present the complexity of SRVRC in Table 1. This is not appropriate since it is very similar and related to this paper. The authors should compare with existing work in a more clear and fair way.
In Remark 3.1 and the discussion after that, the authors argued that the exact step size control is crucial to the sample efficiency of stochastic trust region algorithms. However, the cubic regularization based algorithm in Zhou & Gu (2019) can also achieve the same second order oracle complexity. Therefore, I don’t think the arguments in the two paragraphs after Remark 3.1 are the key reason for the improvement. Instead, the spider estimator that greatly reduces the variance is the key point leading to the improved sample efficiency.
I find the presentation of this paper is very verbose and unnecessarily long (10 pages, 8 algorithm boxes and 8 theorems/lemmas in the main paper). Many places can be simplified or combined in order to increase the readability of the main theorems. Some intermediate results may also be moved to the appendix if necessary.
Algorithm 2 and 3 are almost identical since the gradient and Hessian estimators ($g^k, H^k$) are represented in the same form. I don’t see the point of repeating the algorithms twice. In the Estimator 3, the two options can be simply combined by setting $s_2’=\min\{1/\epsilon, n\}$ since according to Lemma 4.1 $s_2’=1/\epsilon$ in option II.
The paper talks about $\epsilon$-SOSP, $\tilde{O}(\epsilon)$-SOSP, $12\epsilon$-SOSP and so on in many places. It would be better to be consistent and use the same notation.
In the text after Corollary 4.1, there are some typos in the complexities where a $\min$ operator is missing. Moreover, the complexity of Zhou & Gu (2019) presented there seems incorrect. I quickly checked their paper and found in their table that the SFO complexity of Zhou & Gu (2019) is $\tilde{O}(\min\{n/\epsilon^{3/2},n^{1/2}/\epsilon^2,1/\epsilon^3\})$, which is in fact smaller than the complexity of STR1 in this paper. Therefore, I think the comparison in this paragraph is not correct.
ICLR | Title
Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting
Abstract
Time series data is ubiquitous in research as well as in a wide variety of industrial applications. Effectively analyzing the available historical data and providing insights into the far future allows us to make effective decisions. Recent research has witnessed the superior performance of transformer-based architectures, especially in the regime of far horizon time series forecasting. However, the current state of the art sparse Transformer architectures fail to couple downand upsampling procedures to produce outputs in a similar resolution as the input. We propose the Yformer model, based on a novel Y-shaped encoder-decoder architecture that (1) uses direct connection from the downscaled encoder layer to the corresponding upsampled decoder layer in a U-Net inspired architecture, (2) Combines the downscaling/upsampling with sparse attention to capture long-range effects, and (3) stabilizes the encoder-decoder stacks with the addition of an auxiliary reconstruction loss. Extensive experiments have been conducted with relevant baselines on four benchmark datasets, demonstrating an average improvement of 19.82, 18.41 percentage MSE and 13.62, 11.85 percentage MAE in comparison to the current state of the art for the univariate and the multivariate settings respectively.
1 INTRODUCTION
In the most simple case, time series forecasting deals with a scalar time-varying signal and aims to predict or forecast its values in the near future; for example, countless applications in finance, healthcare, production automatization, etc. (Carta et al., 2021; Cao et al., 2018; Sagheer & Kotb, 2019) can benefit from an accurate forecasting solution. Often not just a single scalar signal is of interest, but multiple at once, and further time-varying signals are available and even known for the future. For example, suppose one aims to forecast the energy consumption of a house, it likely depends on the social time that one seeks to forecast for (such as the next hour or day), and also on features of these time points (such as weekday, daylight, etc.), which are known already for the future. This is also the case in model predictive control (Camacho & Alba, 2013), where one is interested to forecast the expected value realized by some planned action, then this action is also known at the time of forecast. More generally, time series forecasting, nowadays deals with quadruples (x, y, x′, y′) of known past predictors x, known past targets y, known future predictors x′ and sought future targets y′. (Figure 3 in appendix section A provides a simple illustration)
Time series problems can often be addressed by methods developed initially for images, treating them as 1-dimensional images. Especially for time-series classification many typical time series encoder architectures have been adapted from models for images (Wang et al., 2017; Zou et al., 2019). Time series forecasting then is closely related to image outpainting (Van Hoorick, 2019), the task to predict how an image likely extends to the left, right, top or bottom, as well as to the more well-known task of image segmentation, where for each input pixel, an output pixel has to be predicted, whose channels encode pixel-wise classes such as vehicle, road, pedestrian say for road scenes. Time series forecasting combines aspects from both problem settings: information about targets from shifted positions (e.g., the past targets y as in image outpainting) and information about other channels from the same positions (e.g., the future predictors x′ as in image segmentation). One of the most successful, principled architectures for the image segmentation task are U-Nets introduced in Ronneberger et al. (2015), an architecture that successively downsamples / coarsens
its inputs and then upsamples / refines the latent representation with deconvolutions also using the latent representations of the same detail level, tightly coupling down- and upsampling procedures and thus yielding latent features on the same resolution as the inputs.
Following their great success in Natural Language Processing (NLP) applications, attention-based, especially transformer-based, architectures (Vaswani et al., 2017) that model pairwise interactions between sequence elements have recently been adapted for time series forecasting. One of the significant challenges is that the length of the time series is often one or two orders of magnitude larger than that of (sentence-level) NLP problems. Plenty of approaches aim to mitigate the quadratic complexity $O(T^2)$ in the sequence/time series length $T$ to at most $O(T\log T)$. For example, the Informer architecture (Zhou et al., 2020), arguably one of the most accurate forecasting models researched so far, adapts the transformer with a sparse attention mechanism and a successive downsampling/coarsening of the past time series. As in the original transformer, only the coarsest representation is fed into the decoder. Possibly to remedy the loss in resolution caused by this procedure, the Informer feeds its input a second time into the decoder network, this time without any coarsening.
While forecasting problems share many commonalities with image segmentation problems, transformer-based architectures like the Informer do not involve coupled down- and upscaling procedures to yield predictions on the same resolution as the inputs. Thus, we propose a novel Y-shaped architecture called Yformer that
1. Couples downscaling/upscaling to leverage both, coarse and fine-grained features for time series forecasting,
2. Combines the coupled scaling mechanism with sparse attention modules to capture longrange effects on all scale levels, and
3. Stabilizes encoder and decoder stacks by reconstructing the recent past.
2 RELATED WORK
Deep Learning Based Time Series Forecasting: While Convolutional Neural Network (CNN) and Recurrent Neural network (RNN) based architectures (Salinas et al., 2020; Rangapuram et al., 2018) outperform traditional methods like ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018), the addition of attention layers (Vaswani et al., 2017) to model time series forecasting has proven to be very beneficial across different problem settings (Fan et al., 2019; Qin et al., 2017; Lai et al., 2018). Attention allows direct pair-wise interaction with eccentric events (like holidays) and can model temporal dynamics inherently unlike RNN’s and CNN’s that fail to capture long-range dependencies directly. Recent work like Reformer (Kitaev et al., 2020), Linformer (Wang et al., 2020) and Informer (Zhou et al., 2020) have focused on reducing the quadratic complexity of modeling pair-wise interactions to O(T log T ) with the introduction of restricted attention layers. Consequently, they can predict for longer forecasting horizons but are hindered by their capability of aggregating features and maintaining the resolution required for far horizon forecasting.
U-Net: The Yformer model is inspired by the famous U-Net architecture introduced in Ronneberger et al. (2015), originating from the field of medical image segmentation. The U-Net architecture is capable of compressing information by aggregating over the inputs and of up-sampling embeddings from their compressed latent features to the same resolution as that of the inputs. Current transformer architectures like the Informer (Zhou et al., 2020) do not utilize up-sampling techniques even though the network produces intermediate multi-resolution feature maps. Our work aims to capitalize on these multi-resolution feature maps and use the U-Net shape effectively for the task of time series forecasting. Previous works like Stoller et al. (2019) and Perslev et al. (2019) have successfully applied the U-Net architecture to the tasks of sequence modeling and time series segmentation, illustrating superior results in the respective tasks. These works motivate the use of a U-Net-inspired architecture for time series forecasting, as current methods fail to couple a sparse attention mechanism with the U-Net shaped architecture. An additional related works section is decoupled from the main text and is presented in appendix section B.
3 PROBLEM FORMULATION
By a time series $x$ with $M$ channels, we mean a finite sequence of vectors in $\mathbb{R}^M$; denote their space by $\mathbb{R}^{*\times M} := \bigcup_{T\in\mathbb{N}}\mathbb{R}^{T\times M}$ and their length by $|x| := T$ (for $x \in \mathbb{R}^{T\times M}$, $M \in \mathbb{N}$). We write $(x, y) \in \mathbb{R}^{*\times(M+O)}$ to denote two time series of the same length with $M$ and $O$ channels, respectively.
We model a time series forecasting instance as a quadruple $(x, y, x', y') \in \mathbb{R}^{*\times(M+O)}\times\mathbb{R}^{*\times(M+O)}$, where $x, y$ denote the past predictors and targets until a reference time point $T$, and $x', y'$ denote the future predictors and targets from the reference point $T$ over the next $\tau$ time steps. Here, $\tau = |x'|$ is called the forecast horizon. For a Time Series Forecasting Problem, given (i) a sample $D := \{(x_1, y_1, x_1', y_1'), \ldots, (x_N, y_N, x_N', y_N')\}$ from an unknown distribution $p$ of time series forecasting instances and (ii) a function $\ell: \mathbb{R}^{*\times(O+O)}\to\mathbb{R}$ called loss, we attempt to find a function $\hat{y}: \mathbb{R}^{*\times(M+O)}\times\mathbb{R}^{*\times M}\to\mathbb{R}^{*\times O}$ (with $|\hat{y}(x, y, x')| = |x'|$) with minimal expected loss
$$\mathbb{E}_{(x,y,x',y')\sim p}\ \ell(y', \hat{y}(x, y, x')). \qquad (1)$$
The loss $\ell$ is usually the mean absolute error (MAE) or mean squared error (MSE) averaged over future time points:
$$\ell_{mae}(y', \hat{y}) := \frac{1}{|y'|}\sum_{t=1}^{|y'|}\frac{1}{O}\|y_t' - \hat{y}_t\|_1, \qquad \ell_{mse}(y', \hat{y}) := \frac{1}{|y'|}\sum_{t=1}^{|y'|}\frac{1}{O}\|y_t' - \hat{y}_t\|_2^2. \qquad (2)$$
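As a minimal sketch, both losses reduce to simple averages over the forecast horizon and the $O$ target channels (arrays of shape (tau, O)):

import numpy as np

def l_mae(y_true, y_pred):
    """Mean absolute error averaged over time points and target channels."""
    return np.mean(np.abs(y_true - y_pred))

def l_mse(y_true, y_pred):
    """Mean squared error averaged over time points and target channels."""
    return np.mean((y_true - y_pred) ** 2)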
Furthermore, if there is only one target channel and no predictor channels (O = 1,M = 0), the time series forecasting problem is called univariate, otherwise multivariate.
4 BACKGROUND
Our work incorporates restricted attention based Transformer in a U-Net inspired architecture. For this reason, we base our work on the current state of the art sparse attention model Informer, introduced in Zhou et al. (2020). We provide a brief overview of the sparse attention mechanism (ProbSparse) and the encoder block (Contracting ProbSparse Self-Attention Blocks) used in the Informer model for completeness.
ProbSparse Attention: The ProbSparse attention mechanism restricts the canonical attention (Vaswani et al., 2017) by selecting a subset $u$ of dominant queries having the largest variance across all the keys. Consequently, the query matrix $Q \in \mathbb{R}^{L_Q\times d}$ in the canonical attention is replaced by a sparse query matrix $\overline{Q} \in \mathbb{R}^{L_Q\times d}$ consisting of only the $u$ dominant queries. ProbSparse attention can hence be defined as
$$A_{ProbSparse}(Q, K, V) = \mathrm{Softmax}\Big(\frac{\overline{Q}K^T}{\sqrt{d}}\Big)V, \qquad (3)$$
where $d$ denotes the input dimension to the attention module. For more details on the ProbSparse attention mechanism, we refer the reader to Zhou et al. (2020).
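A simplified PyTorch sketch of the idea follows; the max-minus-mean score over the full logits is only a proxy for the sampled sparsity measurement of Zhou et al. (2020), and letting inactive queries fall back to the mean of V is likewise an illustrative simplification.

import torch

def probsparse_attention(Q, K, V, u):
    """Attend with only the u most dominant queries; Q:(L_Q,d), K,V:(L_K,d)."""
    d = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d**0.5            # (L_Q, L_K) logits
    measure = scores.max(dim=-1).values - scores.mean(dim=-1)
    top = measure.topk(u).indices                        # dominant query indices
    out = V.mean(dim=0, keepdim=True).expand(Q.shape[0], -1).clone()
    out[top] = torch.softmax(scores[top], dim=-1) @ V    # full attention for top-u
    return out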
Contracting ProbSparse Self-Attention Blocks: The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long input history $(x, y)$ in a pyramid structure motivated by the image domain (Lin et al., 2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation $h^i$ from the $i$-th block and projects it into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally a MaxPool operation reduces the latent dimension, effectively distilling out redundant information at each block. We refer the reader to Algorithm 2 in appendix section C, where these operations are presented in an algorithmic structure for a comprehensive overview.
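A compact PyTorch sketch of such a contracting block, with standard multi-head attention standing in for the ProbSparse (or masked) variant; the layer sizes and kernel widths here are illustrative assumptions, not the reference implementation.

import torch.nn as nn

class ContractingSelfAttentionBlock(nn.Module):
    """Self-attention, two Conv1d layers, MaxPool halving the time resolution."""
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv1 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
        self.act = nn.ELU()
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h):                        # h: (batch, time, d_model)
        h, _ = self.attn(h, h, h)
        h = h.transpose(1, 2)                    # (batch, d_model, time) for Conv1d
        h = self.act(self.conv2(self.act(self.conv1(h))))
        h = self.pool(h).transpose(1, 2)         # time dimension roughly halved
        return self.norm(h)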
5 METHODOLOGY
The Yformer model is a Y-shaped (Figure 1b) symmetric encoder-decoder architecture that is specifically designed to take advantage of the multi-resolution embeddings generated by the Contracting ProbSparse Self-Attention Blocks. The fundamental design consideration is the adoption of U-Net-inspired connections to extract encoder features at multiple resolutions and provide direct connections to the corresponding symmetric decoder blocks (a simple illustration is provided in Figure 4, appendix section A). Furthermore, the addition of a reconstruction loss helps the model learn generalized embeddings that better approximate the data generating distribution.
The Y-Past Encoder of the Yformer is designed using a similar encoder structure as that of the Informer. The Y-Past Encoder embeds the past sequence $(x, y)$ into a scalar projection along with the addition of positional and temporal embeddings. Multiple Contracting ProbSparse Self-Attention Blocks are used to generate encoder embeddings at various resolutions. The Informer model uses only the final low-dimensional embedding as the input to the decoder (Figure 1a), whereas the Yformer retains the embeddings at multiple resolutions to be passed on to the decoder. This allows the Yformer to use high-dimensional lower-level embeddings effectively.
The Y-Future Encoder of the Yformer mitigates the redundant reprocessing of parts of the past sequence $(x, y)$ used as tokens $(x_{token}, y_{token})$ in the Informer architecture. The Informer model uses only the coarsest representation from the encoder embedding, leading to a loss in resolution and forcing the Informer to pass part of the past sequence as tokens $(x_{token}, y_{token})$ to the decoder (Figure 1a). The Yformer separates the future predictors from the past sequence $(x, y)$ by passing the future predictors $(x')$ through a separate encoder and utilizing the multi-resolution embeddings, dismissing the need for tokens entirely. Unlike in the Y-Past Encoder, the attention blocks in the Y-Future Encoder are based on a masked canonical self-attention mechanism (Vaswani et al., 2017). Masking the attention ensures that there is no information leak from future time steps to the past. Moreover, a masked canonical self-attention mechanism helps reduce the complexity, as half of the query-key interactions are restricted by design. Thus, the Y-Future Encoder is designed by stacking
multiple Contracting ProbSparse Self-Attention Blocks where the ProbSparse attention is replaced by the Masked Attention. We name these blocks Contracting Masked Self-Attention Blocks (Algorithm 3 appendix section C).
The Yformer processes the past inputs and the future predictors separately within its encoders. However, considering the time steps, the future predictors are a continuation of the past time steps. For this reason, the Yformer model concatenates (represented by the symbol ++) the past encoder embedding and the future encoder embedding along the time dimension after each encoder block, preserving the continuity between the past input time steps and the future time steps. Let $i$ represent the index of an encoder block; then $e_{i+1}^{past}$ and $e_{i+1}^{fut}$ represent the outputs from the past encoder and the future encoder, respectively. The final concatenated encoder embedding $e_{i+1}$ is calculated as
$$e_{i+1}^{past} = \text{ContractingProbSparseSelfAttentionBlock}(e_i^{past}),$$
$$e_{i+1}^{fut} = \text{ContractingMaskedSelfAttentionBlock}(e_i^{fut}),$$
$$e_{i+1} = e_{i+1}^{past} \;\text{++}\; e_{i+1}^{fut}. \qquad (4)$$
The encoder embeddings, represented by $E = [e_0, \ldots, e_I]$ (where $I$ is the number of encoder layers), contain the combination of past and future embeddings at multiple resolutions.
The Y-Decoder of the Yformer consists of two parts. The first part takes as input the final concatenated low-dimensional embedding $e_I$ and performs a multi-head canonical self-attention mechanism. Here, the past encoder embedding $e_I^{past}$ is allowed to attend to itself as well as to the future encoder embedding $e_I^{fut}$ in an unrestricted fashion. The encoder embedding $e_I$ is the low-dimensional distilled embedding, and skipping query-key interactions within these low-dimensional embeddings might deny the model useful pair-wise interactions. Therefore, it is by design that this is the only part of the Yformer model that uses canonical self-attention, in contrast to the Informer, which uses canonical attention within its repeating decoder block, as shown in Figure 1a. Since the canonical self-attention layer is separated from the repeating attention blocks within the decoder, the Yformer complexity from this full attention module does not increase with the number of decoder blocks. The second part of the Y-Decoder is inspired by the U-Net architecture. Consequently, the decoder is structured as a symmetric expanding path mirroring the contracting encoder. We realize this architecture by introducing upsampling into the ProbSparse attention mechanism using the Expanding ProbSparse Cross-Attention Block.
The Expanding ProbSparse Cross-Attention Block within the Yformer decoder performs two tasks: (1) upsampling the compressed encoder embedding e_I, and (2) performing restricted cross-attention between the expanding decoder embedding d_{I-i} and the corresponding encoder embedding e_i (see Figure 4, appendix section A). We accomplish both tasks with the Expanding ProbSparse Cross-Attention Block illustrated in Algorithm 1.
Algorithm 1 Expanding ProbSparse Cross-Attention Block
Input: d_{I-i}, e_i
Output: d_{I-i+1}
  d_{I-i+1} ← ProbSparseCrossAttn(d_{I-i}, e_i)
  d_{I-i+1} ← Conv1d(d_{I-i+1})
  d_{I-i+1} ← Conv1d(d_{I-i+1})
  d_{I-i+1} ← LayerNorm(d_{I-i+1})
  d_{I-i+1} ← ELU(ConvTranspose1d(d_{I-i+1}))
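The following is a hedged PyTorch sketch of Algorithm 1. Standard multi-head cross-attention stands in for ProbSparseCrossAttn (which requires the Informer's sparse implementation), and the kernel sizes, head count, and stride-2 upsampling are assumptions; the deconvolution here doubles the sequence length, mirroring how the contracting MaxPool halves it.

import torch
import torch.nn as nn

class ExpandingCrossAttentionBlock(nn.Module):
    """Sketch of Algorithm 1; not the authors' implementation."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv1 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(d_model)
        self.up = nn.ConvTranspose1d(d_model, d_model, kernel_size=2, stride=2)
        self.act = nn.ELU()

    def forward(self, d_prev: torch.Tensor, e_i: torch.Tensor) -> torch.Tensor:
        # d_prev: (B, L_dec, D) decoder queries; e_i: (B, L_enc, D) keys/values.
        h, _ = self.cross_attn(d_prev, e_i, e_i)       # restricted in the paper
        h = self.conv2(self.conv1(h.transpose(1, 2)))  # Conv1d expects (B, D, L)
        h = self.norm(h.transpose(1, 2))               # LayerNorm over channels
        h = self.up(h.transpose(1, 2))                 # upsample: L -> 2L
        return self.act(h).transpose(1, 2)             # back to (B, 2L, D)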
The Expanding ProbSparse Cross-Attention Blocks within the Yformer decoder use a ProbSparseCrossAttn to construct direct connections between the lower levels of the encoder and the corresponding symmetric higher levels of the decoder. Direct connections from the encoder to the decoder are an essential component of the majority of models in the image domain. For example, ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) have demonstrated that direct connections between previous feature maps strengthen feature propagation, reduce parameters, mitigate vanishing gradients, and encourage feature reuse. However, current transformer-based architectures like the Informer fail to utilize such direct connections. The ProbSparseCrossAttn takes as input the decoder embedding from the previous layer, d_{I-i}, as queries and the corresponding encoder embedding e_i as keys. The Yformer uses the ProbSparse restricted attention so that the model remains scalable as the number of decoder blocks increases.
We utilize ConvTranspose1d, popularly known as deconvolution, to incrementally increase the embedding space. The U-Net architecture builds its symmetric expanding path from such deconvolution layers. This enables the model not only to aggregate over the input but also to upscale the latent dimensions, improving the overall expressivity of the architecture. The decoder of the Yformer follows a similar strategy by employing deconvolution to expand the embedding space of the encoded output. We describe the different operators used in appendix section C.
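A short shape check illustrates the down/up symmetry; the sizes below are illustrative, not the paper's settings. A stride-2 MaxPool1d halves the length on the contracting path, and a stride-2 ConvTranspose1d restores it on the expanding path.

import torch
import torch.nn as nn

x = torch.randn(4, 512, 48)  # (batch, channels, length)
down = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
up = nn.ConvTranspose1d(512, 512, kernel_size=2, stride=2)

print(down(x).shape)      # torch.Size([4, 512, 24]): contracting step halves L
print(up(down(x)).shape)  # torch.Size([4, 512, 48]): deconvolution restores L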
A fully connected layer (LinearLayer) predicts the future time steps ŷ^{fut} from the final decoder layer d_I and additionally reconstructs the past input targets ŷ^{past}.
$$[\hat{y}^{past}, \hat{y}^{fut}] = \text{LinearLayer}(d_I) \tag{5}$$
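A minimal sketch of Eq. (5), assuming the final decoder output covers past and future positions along the time axis and that the split point is the past length; all names and shapes are illustrative.

import torch
import torch.nn as nn

B, L_past, L_fut, d_model, n_targets = 4, 96, 48, 512, 1
d_final = torch.randn(B, L_past + L_fut, d_model)  # final decoder output d_I

head = nn.Linear(d_model, n_targets)  # one shared fully connected layer
y_hat = head(d_final)                 # (B, L_past + L_fut, n_targets)
y_past_hat, y_fut_hat = y_hat[:, :L_past], y_hat[:, L_past:]  # Eq. (5) split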
The addition of the reconstruction loss to the Yformer as an auxiliary loss serves two significant purposes. Firstly, the reconstruction loss acts as a data-dependent regularization term that reduces overfitting by learning embeddings that are more general (Ghasedi Dizaji et al., 2017; Jarrett & van der Schaar, 2020). Secondly, the reconstruction loss helps in producing future outputs with a distribution similar to that of the inputs (Bank et al., 2020). For far horizon forecasting, we are interested in learning a future-output distribution. However, the future-output distribution and the past-input distribution arise from the same data generating process. Therefore, an auxiliary reconstruction loss directs the gradients toward a better approximation of the data generating process. The Yformer model is trained on the combined loss ℓ,
$$\ell = \alpha\,\ell_{mse}(y, \hat{y}^{past}) + (1-\alpha)\,\ell_{mse}(y', \hat{y}^{fut}) \tag{6}$$
where the first term learns the past targets y and the second term learns the future targets y′. We use the reconstruction factor α to vary the relative importance of reconstruction and future prediction, and we tune it as a hyperparameter.
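In code, the combined objective of Eq. (6) is a weighted sum of two MSE terms; the sketch below uses dummy tensors for targets and predictions, with α fixed to the value the ablation later finds predominant.

import torch
import torch.nn.functional as F

alpha = 0.7                                             # reconstruction factor
y = torch.randn(4, 96, 1)                               # past targets
y_prime = torch.randn(4, 48, 1)                         # future targets
y_past_hat = torch.randn(4, 96, 1, requires_grad=True)  # dummy model outputs
y_fut_hat = torch.randn(4, 48, 1, requires_grad=True)

# Eq. (6): weighted sum of reconstruction MSE and forecasting MSE.
loss = alpha * F.mse_loss(y_past_hat, y) + (1 - alpha) * F.mse_loss(y_fut_hat, y_prime)
loss.backward()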
6 EXPERIMENTS
6.1 DATASETS
To evaluate our proposed Yformer architecture, we use two real-world public datasets and one public benchmark, and we compare our experimental results with the results published for the Informer.
ETT (Electricity Transformer Temperature1): This real-world dataset for electric power deployment, introduced by Zhou et al. (2020), combines short-term periodical patterns, long-term periodical patterns, long-term trends, and irregular patterns. The data consists of load and temperature readings from two transformers and is split into two hourly subsets, ETTh1 and ETTh2. The ETTm1 dataset is generated by splitting the ETTh1 dataset into 15-minute intervals. The dataset has six features and 70,080 data points in total. For easy comparison, we kept the train/val/test splits consistent with the published results in Zhou et al. (2020), where the available 20 months of data are split as 12/4/4.
ECL (Electricity Consuming Load2): This electricity dataset records the electricity consumption of 370 clients from 2011 to 2014 in 15-minute periods, in kilowatts (kW). We split the data into 15/3/4 months for train, validation, and test, respectively, as in Zhou et al. (2020).
6.2 EXPERIMENTAL SETUP
Baseline: Our main baseline is the Informer architecture by Zhou et al. (2020). The results presented in the paper were reported to have a data scaling issue3, and the authors have updated
1 Available under https://github.com/zhouhaoyi/ETDataset.
2 Available under https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
3 https://github.com/zhouhaoyi/Informer2020/issues/41
their results in the official GitHub repository4. Therefore, we compare against these updated results. Recently, the Query Selector (Klimek et al., 2021) utilized the Informer framework for far horizon forecasting; however, it reports results from the Informer paper before the bug fix, and hence we avoid comparison with this work. As a second baseline, we also compare against the second-best performing model in Zhou et al. (2020), which is the Informer using the canonical attention module; it is represented as Informer* in the tables. Furthermore, we also compare against the DeepAR (Salinas et al., 2020), LogTrans (Li et al., 2019), and LSTnet (Lai et al., 2018) architectures, as they outperformed the Informer baseline for certain forecasting horizons. For a quick analysis, we present the percent improvement achieved by the Yformer over the current best results.
For easy comparison, we choose two commonly used metrics for time series forecasting to evaluate the Yformer architecture: the MAE and MSE in Equation 2. We report the MAE results in the paper and provide the MSE results in appendix section D. We performed our experiments on GeForce RTX 2080 Ti GPU nodes with 32 GB RAM and report results as an average of three runs.
6.3 RESULTS AND ANALYSIS
This section compares our results with those reported for the Informer baseline, in both uni- and multivariate settings, across the multiple datasets and horizons. The best-performing and second-best models (lowest MAE) are highlighted in bold and red, respectively.
Univariate: The proposed Yformer model outperforms the current state of the art Informer baseline in 19 out of the 20 available tasks across different datasets and horizons, by an average of 13.62% in MAE. Table 1 illustrates that the superiority of the Yformer is not limited to far horizons but holds even for shorter horizons and in general across datasets. Considering the individual datasets, the Yformer surpasses the current state of the art by 8, 6.8, 21.9, and 18.1% in MAE for the ETTh1, ETTh2, ETTm1, and ECL datasets, respectively, as seen in Table 1. We also report MSE scores in appendix section D, which show an improvement of 16.7, 12.6, 34.8, and 15.2% for the ETTh1, ETTh2, ETTm1, and ECL datasets, respectively. We observe that the MAE for the model is greater at horizon 48 than at horizon 168 for the ETTh1 dataset. This may be a case where the hyperparameters reused from the Informer paper are far from optimal for
4 Commit 702fb9bbc69847ecb84a8bb205693089efb41c6
the Yformer. The other results show the consistent behavior of increasing error with increasing horizon length τ. A similar anomaly is also observed in the Informer baseline on the ETTh2 dataset (Table 2), where the loss is 1.515 for horizon 168 but only 1.340 for horizon 336.
Figure 2 disentangles the improvement due to the Yformer architecture from that due to the reconstruction loss, relative to the baseline Informer. We notice that the Yformer architecture without the reconstruction loss already outperforms the Informer architecture across different datasets and horizons. Additionally, the optimal value for the loss-weighting hyperparameter α is always greater than zero, as shown in Table 3, confirming the assumption that the addition of the auxiliary reconstruction loss term helps the model generalize well to the future distribution.
Multivariate: We observe a similar trend in the multivariate setting. Here the Yformer model outperforms the current state of the art method in all of the 20 tasks across the four datasets, by a margin of 11.85% in MAE. The superiority of the proposed approach is especially clear for longer horizons. Across the different datasets, the Yformer improves the current state of the art results by 9, 13.5, 13.1, and 11.7% in MAE and 12, 26.3, 13.9, and 17.1% in MSE for the ETTh1, ETTh2, ETTm1, and ECL datasets, respectively (table in appendix section D). We attribute the improvement in performance to the superior architecture and to the ability to approximate the data distribution for the multiple targets due to the reconstruction loss.
7 ABLATION STUDY
We performed additional experiments on the ETTh2 and ETTm1 datasets to study the impact of the Yformer model architecture and the effect of the reconstruction loss separately.
7.1 Y-FORMER ARCHITECTURE
In this section we attempt to understand (1) how much of an improvement is brought about by the Y-shaped model architecture, and (2) how much impact the reconstruction loss has. From Figure 2, it is clear that the Yformer architecture performs better than, or comparably to, the Informer throughout the entire horizon range (without the reconstruction loss, i.e., α = 0). Moreover, for the larger horizons, the Yformer architecture (with α = 0) has a clear advantage over the Informer baseline. We attribute this improvement in performance to the direct U-Net-inspired connections within the Yformer architecture. Using feature maps at multiple resolutions offers a clear advantage by mitigating vanishing gradients and encouraging feature reuse.
The reconstruction loss has a significant impact in reducing the loss for the ETTm1 multivariate case (bottom-right graph in Figure 2). In general, it helps to improve the results for the Yformer architecture, as the green curve in Figure 2 is almost always below the blue (Informer) and orange (Yformer with α = 0) curves. The significance of the loss depends on the dataset, but in general the auxiliary loss improves the overall performance.
7.2 RECONSTRUCTION FACTOR
How impactful is the reconstruction factor α from the proposed loss in Eq. 6? We analyzed the optimal value for α across different forecasting horizons and summarize it in Table 3. Interestingly, α = 0.7 is the predominant optimal setting, implying that a high weight for the reconstruction loss helps the Yformer achieve a lower loss on the future targets. Additionally, we observe that α is on average larger for short forecasting horizons, signifying that the input reconstruction loss is also important for short horizon forecasts.
8 CONCLUSION
Time series forecasting is an important business and research problem with broad impact in today's world. This paper proposes a novel Y-shaped architecture, specifically designed for the far horizon time series forecasting problem. The study shows the importance of direct connections from the multi-resolution encoder to the decoder and of a reconstruction loss for the task of time series forecasting. The Yformer couples the U-Net architecture from the image segmentation domain with a sparse transformer model to achieve state of the art results in 39 out of the 40 tasks across various settings, as presented in Tables 1 and 2.
9 REPRODUCIBILITY STATEMENT
All the experimental datasets used to evaluate the Yformer architecture are publicly available, and we provide the source code for the proposed architecture with the supplementary materials for reproducibility. The hyperparameters needed to reproduce the reported results on the different datasets are presented in appendix section E.
A APPENDIX : ADDITIONAL FIGURES
B APPENDIX : ADDITIONAL RELATED WORKS
Time-series Forecasting: Time series forecasting has been a well-established research topic with steadily growing applications in real-world environments. Traditionally, ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018) have been used for time series forecasting. However, limited scalability and the inability to model non-linear patterns restrict their use.
Reconstruction Loss: The usage of a reconstruction loss originated in the domain of variational autoencoders (Kingma & Welling, 2013) to reconstruct the inputs during the generative process. Forecasting a long horizon in a single forward pass can be considered as generating a distribution given the past time steps as the input distribution. A reconstruction loss would hence be beneficial, as the forecast distribution is similar to the past input distribution. Moreover, a recent study by Le et al. (2018) has shown that the addition of a reconstruction term to any loss function generally provides uniform stability and bounds on the generalization error, leading to a more robust model overall with no negative effect on performance.
Our work combines ideas from the U-Net in image segmentation with a sparse attention network for modeling time series, while also utilizing the additional guidance of a reconstruction loss.
C APPENDIX : DEFINITION
C.1 OPERATORS
ProbSparseAttn: Attention module that uses the ProbSparse method introduced in Zhou et al. (2020). Here, $\bar{Q} \in \mathbb{R}^{L_Q \times d}$ denotes the sparse query matrix containing only the $u$ dominant queries.
$$\mathcal{A}_{\text{ProbSparse}}(Q, K, V) = \text{Softmax}\left(\frac{\bar{Q}K^{T}}{\sqrt{d}}\right)V \tag{7}$$
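For intuition, the following simplified sketch implements the max-mean query selection behind Eq. (7). Unlike the actual Informer, which estimates the measurement from sampled keys to reach O(L log L), this version forms the full score matrix for clarity and therefore remains O(L^2); the sampling constant c is an assumption.

import math
import torch

def probsparse_attention(Q, K, V, c: float = 5.0):
    B, L_Q, d = Q.shape
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)      # (B, L_Q, L_K)
    # Max-mean sparsity measurement of each query's score distribution.
    M = scores.max(dim=-1).values - scores.mean(dim=-1)  # (B, L_Q)
    u = min(L_Q, max(1, int(c * math.log(L_Q))))         # number of dominant queries
    top = M.topk(u, dim=-1).indices                      # (B, u)
    # Non-dominant queries fall back to the mean of V, as in the Informer.
    out = V.mean(dim=1, keepdim=True).expand(B, L_Q, V.shape[-1]).clone()
    top_scores = scores.gather(1, top.unsqueeze(-1).expand(B, u, scores.shape[-1]))
    out.scatter_(1, top.unsqueeze(-1).expand(B, u, V.shape[-1]),
                 torch.softmax(top_scores, dim=-1) @ V)
    return out

Q = K = V = torch.randn(2, 96, 64)
print(probsparse_attention(Q, K, V).shape)               # torch.Size([2, 96, 64])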
MaskedAttn: Canonical self-attention with masking to prevent positions from attending to subsequent positions in the future (Vaswani et al., 2017).
Conv1d: Given N batches of 1D arrays of length L with C channels/dimensions, a convolution operation produces the output:
$$\text{out}(N_i, C_{out_j}) = \text{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \text{weight}(C_{out_j}, k) \star \text{input}(N_i, k) \tag{8}$$
For further reference, please see the PyTorch Conv1d documentation.
LayerNorm: Layer Normalization, introduced in Ba et al. (2016), normalizes the inputs across the channels/dimensions. LayerNorm is the default normalization in common transformer architectures (Vaswani et al., 2017). Here, γ and β are learnable affine transformations:
$$\text{out}(N, *) = \frac{\text{input}(N, *) - \mathrm{E}[\text{input}(N, *)]}{\sqrt{\mathrm{Var}[\text{input}(N, *)] + \epsilon}} * \gamma + \beta \tag{9}$$
MaxPool: Given N batches of 1D arrays of length L with C channels/dimensions, a MaxPool operation produces the output:
$$\text{out}(N_i, C_j, k) = \max_{m=0,\dots,\text{kernel\_size}-1} \text{input}(N_i, C_j, \text{stride} \times k + m) \tag{10}$$
For further reference, please see the PyTorch MaxPool1d documentation.
ELU: Given an input x, the ELU applies the element-wise non-linear activation function:
$$\text{ELU}(x) = \begin{cases} x, & \text{if } x > 0 \\ \alpha\,(\exp(x) - 1), & \text{if } x \le 0 \end{cases} \tag{11}$$
ConvTranspose1d: Also known as deconvolution or fractionally strided convolution; it applies convolution to a padded input to produce upsampled outputs (see the PyTorch ConvTranspose1d documentation).
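The shape behavior of these operators can be verified with a few lines of PyTorch; all sizes below are illustrative.

import torch
import torch.nn as nn

x = torch.randn(8, 16, 32)  # (N batches, C channels, L length)

conv = nn.Conv1d(16, 16, kernel_size=3, padding=1)       # Eq. (8)
pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)  # Eq. (10)
up = nn.ConvTranspose1d(16, 16, kernel_size=2, stride=2)
elu = nn.ELU()                                           # Eq. (11)
norm = nn.LayerNorm(16)                                  # Eq. (9), over channels

h = elu(conv(x))                             # (8, 16, 32)
h = pool(h)                                  # (8, 16, 16): contracting step
h = up(h)                                    # (8, 16, 32): expanding step
h = norm(h.transpose(1, 2)).transpose(1, 2)  # LayerNorm expects channels last
print(h.shape)                               # torch.Size([8, 16, 32])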
C.2 CONTRACTING PROBSPARSE SELF-ATTENTION BLOCKS
The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence (x, y) in a pyramid structure similar to Lin et al. (2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation h_i from the i-th block and projects the hidden representation into query, key, and value for self-attention. This is followed by multiple layers of convolution (Conv1d); finally, after applying non-linearity with an ELU activation function, a MaxPool operation is performed to reduce the latent dimension at each block. The LayerNorm operation regularizes the model using Layer Normalization (Ba et al., 2016). Algorithm 2 shows the operations performed within a Contracting ProbSparse Self-Attention Block, which takes an input h_i and produces the hidden representation h_{i+1} for the (i+1)-th block.
Algorithm 2 Contracting ProbSparse Self-Attention Block
Input: h_i
Output: h_{i+1}
  h_{i+1} ← ProbSparseAttn(h_i, h_i)
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← LayerNorm(h_{i+1})
  h_{i+1} ← MaxPool(ELU(Conv1d(h_{i+1})))
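A hedged PyTorch sketch of Algorithm 2 follows; standard self-attention stands in for ProbSparseAttn, and the kernel sizes, head count, and pooling settings are assumptions rather than the authors' exact configuration.

import torch
import torch.nn as nn

class ContractingSelfAttentionBlock(nn.Module):
    """Sketch of Algorithm 2; not the authors' implementation."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv1 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(d_model)
        self.conv3 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
        self.act = nn.ELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        h, _ = self.attn(h, h, h)                      # self-attention on (B, L, D)
        h = self.conv2(self.conv1(h.transpose(1, 2)))  # Conv1d expects (B, D, L)
        h = self.norm(h.transpose(1, 2))               # LayerNorm over channels
        h = self.pool(self.act(self.conv3(h.transpose(1, 2))))  # distilling step
        return h.transpose(1, 2)                       # (B, L/2, D)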
C.3 CONTRACTING MASKED SELF-ATTENTION BLOCKS
The Y-Future Encoder uses multiple Contracting Masked Self-Attention Blocks, which replace the ProbSparseAttn in the Contracting ProbSparse Self-Attention Blocks with masked attention. Masking the attention in the Y-Future Encoder architecturally avoids any future information leak. In our experiments, adding restricted attention like ProbSparse on top of an already masked attention resulted in a performance loss. This could be the result of query-key interactions being dropped when ProbSparse is applied to an already masked attention. Algorithm 3 shows the operations performed within a Contracting Masked Self-Attention Block, which takes an input h_i and produces the hidden representation h_{i+1} for the (i+1)-th block.
Algorithm 3 Contracting Masked Self-Attention Block
Input: h_i
Output: h_{i+1}
  h_{i+1} ← MaskedAttn(h_i, h_i)
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← LayerNorm(h_{i+1})
  h_{i+1} ← MaxPool(ELU(Conv1d(h_{i+1})))
D APPENDIX : MSE RESULTS
E APPENDIX : HYPERPARAMETERS
E.1 HYPERPARAMETER SEARCH SPACE
For a fair comparison, we retain design choices from the Informer baseline, such as the history input length (T) for a particular forecast length (τ), so that any performance improvement can exclusively be attributed to the architecture of the Yformer model and not to an increased history input length. We performed a grid search over learning rates {0.001, 0.0001}, α values {0, 0.3, 0.5, 0.7, 1}, and numbers of encoder and decoder blocks I = {2, 3, 4}, while keeping all other hyperparameters the same as the Informer. Furthermore, the Adam optimizer was used for all experiments, and we employed an early stopping criterion with a patience of three epochs. To counteract overfitting, we tried dropout with varying ratios but, interestingly, found the effect on the results to be minimal. Therefore, we adopt weight decay with factors {0, 0.02, 0.05} for additional regularization. We select the optimal hyperparameters based on the lowest validation loss and will publish the code upon acceptance.
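The reported search can be expressed as a plain grid; train_and_validate below is a placeholder stub, not the authors' code.

from itertools import product

def train_and_validate(lr, alpha, n_blocks, weight_decay):
    """Placeholder: train a Yformer with these settings, return validation loss."""
    return 0.0  # stub for illustration only

grid = product([0.001, 0.0001],        # learning rate
               [0, 0.3, 0.5, 0.7, 1],  # reconstruction factor alpha
               [2, 3, 4],              # number of encoder/decoder blocks I
               [0, 0.02, 0.05])        # weight decay
best = min(grid, key=lambda cfg: train_and_validate(*cfg))  # lowest val loss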
E.2 OPTIMAL HYPERPARAMETERS
1. What is the focus and contribution of the paper on long sequence time series forecasting?
2. What are the strengths of the proposed method, particularly in terms of employing skip connections?
3. What are the weaknesses of the paper regarding its organization, technical novelty, and experimentation?
4. Do you have any concerns about the effectiveness of the skip connections and the overall novelty of the approach?
5. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary Of The Paper
This paper presents the Yformer to perform long sequence time series forecasting. The key idea is to employ skip connections to improve the prediction resolution and to stabilize the encoder and decoder by reconstructing the recent past. The experiment results on two datasets showed the effectiveness of the proposed method.
Review
Strengths
Long-range time series forecasting is an interesting problem to investigate.
Adding skip connections between the encoder and decoder is technically sound.
The overall experiment results showed the effectiveness of the proposed method.
Weaknesses
The organization of this paper is not good; many technical details are not made clear in the main text.
The overall technical novelty is limited.
The effectiveness of the skip connections is not fully assessed.
Several details of the experiments are not provided.
The main problem of this paper is that the main text (especially the methodology section) is not self-contained. The reader has to rely on details in the appendix or other papers to fully understand the proposed technique.
Another concern is the novelty. Skip connections are common practice in U-Net, and the idea of stabilizing the encoder and decoder by reconstructing the recent past is also not new. Although it is a new application area for skip connections, the overall technical novelty is limited.
In addition, an ablation study of the model with and without the skip connections is not provided.
Several related works are not mentioned or compared.
[1] Rajat Sen, Hsiang-Fu Yu, and Inderjit S. Dhillon. "Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting." NeurIPS 2019.
[2] Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. "Modeling long- and short-term temporal patterns with deep neural networks." SIGIR 2018.
[3] Vincent Le Guen and Nicolas Thome. "Shape and time distortion loss for training deep time series forecasting models." NeurIPS 2019.
As for the experiments:
It is not clear whether the setting in Eq. (1) is consistent with the settings in Informer or Reformer.
It is also not clear how to set y’ in the experiments.
Only two datasets are used for evaluation, which may not be sufficient to show the generalization capability of the proposed technique.
Standard deviations of the prediction results are not provided. |
E.2 OPTIMAL HYPERPARAMETERS | 1. What is the focus and contribution of the paper on YFormer?
2. What are the strengths of the proposed approach, particularly in terms of its architecture and mathematical description?
3. What are the weaknesses of the paper, especially regarding the complexity of the proposed architecture and the lack of information about the variance of the proposed model?
4. Do you have any concerns or questions regarding the evaluation and comparison with other baseline models?
5. Is there anything else that the authors could improve or provide more information about in their paper? | Summary Of The Paper
Review | Summary Of The Paper
A Yformer model is proposed in this paper, based on a Y-shaped encoder-decoder architecture that (1) uses direct connections from the downscaled encoder layers to the corresponding upsampled decoder layers in a U-Net inspired architecture, (2) combines the downscaling/upsampling with sparse attention to capture long-range effects, and (3) stabilizes the encoder-decoder stacks with the addition of an auxiliary reconstruction loss. The proposed model is evaluated on the ETT and ECL datasets and showed superior performance against baseline models including LogTransformer, LSTnet, Informer and Informer*.
Review
Strengths:
The paper is well written and the proposed framework is easy to understand. In addition, I believe the mathematical description of the model is correct.
Extensive evaluation is conducted, and the proposed YFormer shows an average of over 10% improvement compared with state-of-the-art models.
A good ablation study is provided to justify the choice of the proposed architecture, and also the hyperparameter selection.
The authors of this paper chose their baselines very carefully. They mentioned why they compare with certain baseline models, identified issues in some of the baseline models, and also explained why models such as Query Selector are not used as baselines. I believe this thorough investigation and understanding of previous works is very important.
Weaknesses:
1. The mathematical description of the proposed architecture and task, although correct, is a bit overcomplicated. For example, section 3 describes a standard time-series forecasting problem with its corresponding notations. I would encourage the authors to review the notations needed in this section; I think some of them are not used afterwards.
2. Since the results provided are an average of three runs, it would be beneficial if the authors could provide the standard deviation of the results as well. It would be informative to have an estimate of the variance of the proposed model.
3. In the abstract, the authors claim the model is tested on four datasets; in section 6.1, it is said to be two real-world public datasets and one public benchmark. However (I could be missing it somewhere), it seems like only two datasets - ETT and ECL - are evaluated on.
ICLR | Title
Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting
Abstract
Time series data is ubiquitous in research as well as in a wide variety of industrial applications. Effectively analyzing the available historical data and providing insights into the far future allows us to make effective decisions. Recent research has witnessed the superior performance of transformer-based architectures, especially in the regime of far horizon time series forecasting. However, the current state of the art sparse Transformer architectures fail to couple down- and upsampling procedures to produce outputs in a similar resolution as the input. We propose the Yformer model, based on a novel Y-shaped encoder-decoder architecture that (1) uses direct connections from the downscaled encoder layers to the corresponding upsampled decoder layers in a U-Net inspired architecture, (2) combines the downscaling/upsampling with sparse attention to capture long-range effects, and (3) stabilizes the encoder-decoder stacks with the addition of an auxiliary reconstruction loss. Extensive experiments have been conducted with relevant baselines on four benchmark datasets, demonstrating average improvements of 19.82% and 18.41% in MSE and 13.62% and 11.85% in MAE over the current state of the art for the univariate and the multivariate settings, respectively.
1 INTRODUCTION
In the most simple case, time series forecasting deals with a scalar time-varying signal and aims to predict or forecast its values in the near future; countless applications in finance, healthcare, production automation, etc. (Carta et al., 2021; Cao et al., 2018; Sagheer & Kotb, 2019) can benefit from an accurate forecasting solution. Often not just a single scalar signal is of interest, but multiple at once, and further time-varying signals are available and even known for the future. For example, suppose one aims to forecast the energy consumption of a house: it likely depends on the social time that one seeks to forecast for (such as the next hour or day), and also on features of these time points (such as weekday, daylight, etc.), which are known already for the future. This is also the case in model predictive control (Camacho & Alba, 2013), where one is interested in forecasting the expected value realized by some planned action; this action is also known at the time of forecast. More generally, time series forecasting nowadays deals with quadruples (x, y, x′, y′) of known past predictors x, known past targets y, known future predictors x′ and sought future targets y′ (Figure 3 in appendix section A provides a simple illustration).
Time series problems can often be addressed by methods developed initially for images, treating them as 1-dimensional images. Especially for time-series classification, many typical time series encoder architectures have been adapted from models for images (Wang et al., 2017; Zou et al., 2019). Time series forecasting is then closely related to image outpainting (Van Hoorick, 2019), the task of predicting how an image likely extends to the left, right, top or bottom, as well as to the more well-known task of image segmentation, where for each input pixel an output pixel has to be predicted, whose channels encode pixel-wise classes such as vehicle, road, or pedestrian for, say, road scenes. Time series forecasting combines aspects from both problem settings: information about targets from shifted positions (e.g., the past targets y as in image outpainting) and information about other channels from the same positions (e.g., the future predictors x′ as in image segmentation). One of the most successful, principled architectures for the image segmentation task is the U-Net introduced in Ronneberger et al. (2015), an architecture that successively downsamples / coarsens
its inputs and then upsamples / refines the latent representation with deconvolutions also using the latent representations of the same detail level, tightly coupling down- and upsampling procedures and thus yielding latent features on the same resolution as the inputs.
Following the great success in Natural Language Processing (NLP) applications, attention-based, especially transformer-based, architectures (Vaswani et al., 2017) that model pairwise interactions between sequence elements have recently been adapted for time series forecasting. One of the significant challenges is that the length of the time series is often one or two orders of magnitude larger than in (sentence-level) NLP problems. Plenty of approaches aim to mitigate the quadratic complexity $O(T^2)$ in the sequence/time series length T to at most $O(T \log T)$. For example, the Informer architecture (Zhou et al., 2020), arguably one of the most accurate forecasting models researched so far, adapts the transformer by a sparse attention mechanism and a successive downsampling/coarsening of the past time series. As in the original transformer, only the coarsest representation is fed into the decoder. Possibly to remedy the loss in resolution by this procedure, the Informer feeds its input a second time into the decoder network, this time without any coarsening.
While forecasting problems share many commonalities with image segmentation problems, transformer-based architectures like the Informer do not involve coupled down- and upscaling procedures to yield predictions on the same resolution as the inputs. Thus, we propose a novel Y-shaped architecture called Yformer that
1. Couples downscaling/upscaling to leverage both coarse and fine-grained features for time series forecasting,
2. Combines the coupled scaling mechanism with sparse attention modules to capture long-range effects on all scale levels, and
3. Stabilizes encoder and decoder stacks by reconstructing the recent past.
2 RELATED WORK
Deep Learning Based Time Series Forecasting: While Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based architectures (Salinas et al., 2020; Rangapuram et al., 2018) outperform traditional methods like ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018), the addition of attention layers (Vaswani et al., 2017) to time series forecasting models has proven to be very beneficial across different problem settings (Fan et al., 2019; Qin et al., 2017; Lai et al., 2018). Attention allows direct pair-wise interaction with eccentric events (like holidays) and can model temporal dynamics inherently, unlike RNNs and CNNs, which fail to capture long-range dependencies directly. Recent works like Reformer (Kitaev et al., 2020), Linformer (Wang et al., 2020) and Informer (Zhou et al., 2020) have focused on reducing the quadratic complexity of modeling pair-wise interactions to $O(T \log T)$ with the introduction of restricted attention layers. Consequently, they can predict over longer forecasting horizons but are limited in their capability to aggregate features and maintain the resolution required for far horizon forecasting.
U-Net: The Yformer model is inspired by the famous U-Net architecture introduced in Ronneberger et al. (2015), originating from the field of medical image segmentation. The U-Net architecture is capable of compressing information by aggregating over the inputs and of up-sampling embeddings to the same resolution as the inputs from their compressed latent features. Current transformer architectures like the Informer (Zhou et al., 2020) do not utilize up-sampling techniques even though the network produces intermediate multi-resolution feature maps. Our work aims to capitalize on these multi-resolution feature maps and use the U-Net shape effectively for the task of time series forecasting. Previous works like Stoller et al. (2019) and Perslev et al. (2019) have successfully applied the U-Net architecture to the tasks of sequence modeling and time series segmentation, illustrating superior results in the respective tasks. These works motivate the use of a U-Net-inspired architecture for time series forecasting, as current methods fail to couple a sparse attention mechanism with the U-Net shaped architecture. An additional related works section is decoupled from the main text and presented in appendix section B.
3 PROBLEM FORMULATION
By a time series x with M channels, we mean a finite sequence of vectors in $\mathbb{R}^M$; denote their space by $\mathbb{R}^{*\times M} := \bigcup_{T\in\mathbb{N}} \mathbb{R}^{T\times M}$ and their length by $|x| := T$ (for $x \in \mathbb{R}^{T\times M}$, $M \in \mathbb{N}$). We write $(x, y) \in \mathbb{R}^{*\times(M+O)}$ to denote two time series of same length with M and O channels, respectively.
We model a time series forecasting instance as a quadruple $(x, y, x', y') \in \mathbb{R}^{*\times(M+O)} \times \mathbb{R}^{*\times(M+O)}$, where $x, y$ denote the past predictors and targets until a reference time point T and $x', y'$ denote the future predictors and targets from the reference point T over the next $\tau$ time steps. Here, $\tau = |x'|$ is called the forecast horizon. For a Time Series Forecasting Problem, given (i) a sample $D := \{(x_1, y_1, x'_1, y'_1), \ldots, (x_N, y_N, x'_N, y'_N)\}$ from an unknown distribution p of time series forecasting instances and (ii) a function $\ell : \mathbb{R}^{*\times(O+O)} \to \mathbb{R}$ called loss, we attempt to find a function $\hat{y} : \mathbb{R}^{*\times(M+O)} \times \mathbb{R}^{*\times M} \to \mathbb{R}^{*\times O}$ (with $|\hat{y}(x, y, x')| = |x'|$) with minimal expected loss

$\mathbb{E}_{(x,y,x',y')\sim p}\; \ell(y', \hat{y}(x, y, x'))$ (1)
The loss $\ell$ usually is the mean absolute error (MAE) or mean squared error (MSE) averaged over future time points:

$\ell_{\mathrm{mae}}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \lVert y'_t - \hat{y}_t \rVert_1, \qquad \ell_{\mathrm{mse}}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \lVert y'_t - \hat{y}_t \rVert_2^2$ (2)
Furthermore, if there is only one target channel and no predictor channels (O = 1,M = 0), the time series forecasting problem is called univariate, otherwise multivariate.
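For concreteness, a minimal PyTorch sketch of the two losses in Eq. (2), assuming tensors of shape (|y'|, O):

```python
import torch

def mae_loss(y_true: torch.Tensor, y_pred: torch.Tensor) -> torch.Tensor:
    # y_true, y_pred: (|y'|, O). L1 norm over channels, then average over t and O.
    return ((y_true - y_pred).abs().sum(dim=1) / y_true.size(1)).mean()

def mse_loss(y_true: torch.Tensor, y_pred: torch.Tensor) -> torch.Tensor:
    # Squared L2 norm per time step, then the same averaging as above.
    return ((y_true - y_pred).pow(2).sum(dim=1) / y_true.size(1)).mean()

y, y_hat = torch.randn(24, 3), torch.randn(24, 3)
print(mae_loss(y, y_hat), mse_loss(y, y_hat))
```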
4 BACKGROUND
Our work incorporates restricted attention based Transformer in a U-Net inspired architecture. For this reason, we base our work on the current state of the art sparse attention model Informer, introduced in Zhou et al. (2020). We provide a brief overview of the sparse attention mechanism (ProbSparse) and the encoder block (Contracting ProbSparse Self-Attention Blocks) used in the Informer model for completeness.
ProbSparse Attention: The ProbSparse attention mechanism restricts the canonical attention (Vaswani et al., 2017) by selecting a subset u of dominant queries having the largest variance across all the keys. Consequently, the query matrix $Q \in \mathbb{R}^{L_Q \times d}$ in the canonical attention is replaced by a sparse query matrix $\overline{Q} \in \mathbb{R}^{L_Q \times d}$ consisting of only the u dominant queries. ProbSparse attention can hence be defined as:
$A_{\mathrm{ProbSparse}}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{\overline{Q} K^{T}}{\sqrt{d}}\right) V$ (3)
where d denotes the input dimension to the attention module. For more details on the ProbSparse attention mechanism, we refer the reader to Zhou et al. (2020).
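A simplified single-head sketch of the idea, using a max-minus-mean score as the query dominance measure and skipping the Informer's key-sampling scheme; lazy queries fall back to the mean of V (one common choice):

```python
import torch

def sparse_attention(Q, K, V, u):
    # Q: (L_Q, d), K: (L_K, d), V: (L_K, d); keep only the top-u queries.
    d = Q.size(-1)
    scores = Q @ K.T / d ** 0.5                         # (L_Q, L_K)
    dominance = scores.max(dim=1).values - scores.mean(dim=1)
    top = dominance.topk(u).indices                     # u dominant queries
    out = V.mean(dim=0).expand(Q.size(0), -1).clone()   # lazy queries: mean of V
    out[top] = torch.softmax(scores[top], dim=-1) @ V   # dominant queries attend
    return out

Q, K, V = torch.randn(48, 64), torch.randn(48, 64), torch.randn(48, 64)
print(sparse_attention(Q, K, V, u=8).shape)             # torch.Size([48, 64])
```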
Contracting ProbSparse Self-Attention Blocks: The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence (x, y) in a pyramid structure motivated from the image domain (Lin et al., 2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation hi from the ith block and projects the hidden representation into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally the MaxPool operation reduces the latent dimension by effectively distilling out redundant information at each block. We refer the reader to Algorithm 2 in the appendix section C where these operations are presented in an algorithmic structure for a comprehensive overview.
5 METHODOLOGY
The Yformer model is a Y-shaped (Figure 1b) symmetric encoder-decoder architecture that is specifically designed to take advantage of the multi-resolution embeddings generated by the Contracting ProbSparse Self-Attention Blocks. The fundamental design consideration is the adoption of U-Net-inspired connections to extract encoder features at multiple resolutions and provide direct connections to the corresponding symmetric decoder blocks (a simple illustration is provided in Figure 4, appendix section A). Furthermore, the addition of a reconstruction loss helps the model learn generalized embeddings that better approximate the data generating distribution.
The Y-Past Encoder of the Yformer is designed using a similar encoder structure as that of the Informer. The Y-Past Encoder embeds the past sequence (x, y) into a scalar projection along with the addition of positional and temporal embeddings. Multiple Contracting ProbSparse Self-Attention Blocks are used to generate encoder embeddings at various resolutions. The Informer model uses the final low-dimensional embedding as the input to the decoder (Figure 1a), whereas the Yformer retains the embeddings at multiple resolutions to be passed on to the decoder. This allows the Yformer to use the high-dimensional lower-level embeddings effectively.
The Y-Future Encoder of the Yformer mitigates the issue of the redundant reprocessing of parts of the past sequence (x, y) used as tokens $(x_{token}, y_{token})$ in the Informer architecture. The Informer model uses only the coarsest representation from the encoder embedding, leading to a loss in resolution and forcing the Informer to pass part of the past sequence as tokens $(x_{token}, y_{token})$ to the decoder (Figure 1a). The Yformer separates the future predictors from the past sequence (x, y) by passing the future predictors (x′) through a separate encoder and utilizing the multi-resolution embeddings to dismiss the need for tokens entirely. Unlike in the Y-Past Encoder, the attention blocks in the Y-Future Encoder are based on the masked canonical self-attention mechanism (Vaswani et al., 2017). Masking the attention ensures that there is no information leak from the future time steps to the past. Moreover, a masked canonical self-attention mechanism helps reduce the complexity, as half of the query-key interactions are restricted by design. Thus, the Y-Future Encoder is designed by stacking
multiple Contracting ProbSparse Self-Attention Blocks where the ProbSparse attention is replaced by the Masked Attention. We name these blocks Contracting Masked Self-Attention Blocks (Algorithm 3 appendix section C).
The Yformer processes the past inputs and the future predictors separately within its encoders. However, considering the time steps, the future predictors are a continuation of the past time steps. For this reason, the Yformer model concatenates (represented by the symbol ++) the past encoder embedding and the future encoder embedding along the time dimension after each encoder block, preserving the continuity between the past input time steps and the future time steps. Let i represent the index of an encoder block; then $e^{past}_{i+1}$ and $e^{fut}_{i+1}$ represent the outputs from the past encoder and the future encoder, respectively. The final concatenated encoder embedding $(e_{i+1})$ is calculated as,
$e^{past}_{i+1} = \mathrm{ContractingProbSparseSelfAttentionBlock}(e^{past}_i)$
$e^{fut}_{i+1} = \mathrm{ContractingMaskedSelfAttentionBlock}(e^{fut}_i)$
$e_{i+1} = e^{past}_{i+1} \mathbin{+\!+} e^{fut}_{i+1}$ (4)
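In code, this concatenation is simply along the time axis; a toy sketch with hypothetical sizes:

```python
import torch

# Suppose after block i the past encoder yields 24 time steps and the future
# encoder 12, each with 64 channels (illustrative numbers, not the paper's).
e_past = torch.randn(8, 24, 64)             # (batch, time, d_model)
e_fut = torch.randn(8, 12, 64)
e_cat = torch.cat([e_past, e_fut], dim=1)   # past steps first, then future
print(e_cat.shape)                          # torch.Size([8, 36, 64])
```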
The encoder embeddings, represented by $E = [e_0, \ldots, e_I]$ (where I is the number of encoder layers), contain the combination of past and future embeddings at multiple resolutions.
The Y-Decoder of the Yformer consists of two parts. The first part takes as input the final concatenated low-dimensional embedding $(e_I)$ and performs a multi-head canonical self-attention mechanism. Here, the past encoder embedding $(e^{past}_I)$ is allowed to attend to itself as well as to the future encoder embedding $(e^{fut}_I)$ in an unrestricted fashion. The encoder embedding $(e_I)$ is the low-dimensional distilled embedding, and skipping query-key interactions within these low-dimensional embeddings might deny the model useful pair-wise interactions. Therefore, it is by design that this is the only part of the Yformer model that uses canonical self-attention, in comparison to the Informer, which uses canonical attention within its repeating decoder block, as shown in Figure 1a. Since the canonical self-attention layer is separated from the repeating attention blocks within the decoder, the Yformer complexity from this full attention module does not increase with an increase in the number of decoder blocks. The U-Net architecture inspires the second part of the Y-Decoder. Consequently, the decoder is structured as a symmetric expanding path identical to the contracting encoder. We realize this architecture by introducing upsampling on the ProbSparse attention mechanism using the Expanding ProbSparse Cross-Attention Block.
The Expanding ProbSparse Cross-Attention Block within the Yformer decoder performs two tasks: (1) upsample the compressed encoder embedding $e_I$ and (2) perform restricted cross attention between the expanding decoder embedding $d_{I-i}$ and the corresponding encoder embedding $e_i$ (represented in Figure 4, appendix section A). We accomplish both tasks by introducing an Expanding ProbSparse Cross-Attention Block, as illustrated in Algorithm 1.
Algorithm 1 Expanding ProbSparse Cross-Attention Block
Input: d_{I-i}, e_i    Output: d_{I-i+1}
  d_{I-i+1} ← ProbSparseCrossAttn(d_{I-i}, e_i)
  d_{I-i+1} ← Conv1d(d_{I-i+1})
  d_{I-i+1} ← Conv1d(d_{I-i+1})
  d_{I-i+1} ← LayerNorm(d_{I-i+1})
  d_{I-i+1} ← ELU(ConvTranspose1d(d_{I-i+1}))
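A PyTorch sketch of this block, again with full cross-attention standing in for ProbSparseCrossAttn and illustrative sizes; note how ConvTranspose1d doubles the temporal length:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpandingBlock(nn.Module):
    """Sketch of Algorithm 1; full cross-attention stands in for ProbSparseCrossAttn."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv1 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(d_model)
        self.up = nn.ConvTranspose1d(d_model, d_model, kernel_size=4, stride=2, padding=1)

    def forward(self, d_prev, e_skip):  # decoder queries, encoder keys/values
        h, _ = self.cross(d_prev, e_skip, e_skip)
        h = self.conv2(self.conv1(h.transpose(1, 2))).transpose(1, 2)
        h = self.norm(h)
        return F.elu(self.up(h.transpose(1, 2))).transpose(1, 2)  # length doubled

block = ExpandingBlock()
d_prev, e_skip = torch.randn(8, 12, 64), torch.randn(8, 24, 64)
print(block(d_prev, e_skip).shape)  # torch.Size([8, 24, 64])
```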
The Expanding ProbSparse Cross-Attention Blocks within the Yformer decoder use a ProbSparseCrossAttn to construct direct connections between the lower levels of the encoder and the corresponding symmetric higher levels of the decoder. Direct connections from the encoder to the decoder are an essential component of the majority of models within the image domain. For example, ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) have demonstrated that direct connections between previous feature maps strengthen feature propagation, reduce parameters, mitigate vanishing gradients and encourage feature reuse. However, current transformer-based architectures like the Informer fail to utilize such direct connections. The ProbSparseCrossAttn
takes in as input the decoder embedding from the previous layer $d_{I-i}$ as queries and the corresponding encoder embedding $e_i$ as keys. The Yformer uses the ProbSparse restricted attention so that the model is scalable with an increase in the number of decoder blocks.
We utilize ConvTranspose1d, popularly known as deconvolution, for incrementally increasing the embedding space. The famous U-Net architecture uses a symmetric expanding path built from such deconvolution layers. This property enables the model not only to aggregate over the input but also to upscale the latent dimensions, improving the overall expressivity of the architecture. The decoder of the Yformer follows a similar strategy by employing deconvolution to expand the embedding space of the encoded output. We describe the different operators used in appendix section C.
A fully connected layer (LinearLayer) predicts the future time steps $\hat{y}^{fut}$ from the final decoder layer $(d_I)$ and additionally reconstructs the past input targets $\hat{y}^{past}$:

$[\hat{y}^{past}, \hat{y}^{fut}] = \mathrm{LinearLayer}(d_I)$ (5)
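One plausible reading of Eq. (5) as a sketch, with a shared linear head whose output is split along the time axis into reconstruction and forecast (sizes are illustrative):

```python
import torch
import torch.nn as nn

d_model, T, tau, O = 64, 96, 24, 1            # illustrative sizes
head = nn.Linear(d_model, O)
d_final = torch.randn(8, T + tau, d_model)    # final decoder output over past + future
y_hat = head(d_final)                         # (8, T + tau, O)
y_past_hat, y_fut_hat = y_hat[:, :T], y_hat[:, T:]
print(y_past_hat.shape, y_fut_hat.shape)      # (8, 96, 1) and (8, 24, 1)
```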
The addition of the reconstruction loss to the Yformer as an auxiliary loss serves two significant purposes. Firstly, the reconstruction loss acts as a data-dependent regularization term that reduces overfitting by learning embeddings that are more general (Ghasedi Dizaji et al., 2017; Jarrett & van der Schaar, 2020). Secondly, the reconstruction loss helps in producing future outputs in a similar distribution as the inputs (Bank et al., 2020). For far horizon forecasting, we are interested in learning a future-output distribution. However, the future-output distribution and the past-input distribution arise from the same data generating process. Therefore, having an auxiliary reconstruction loss directs the gradients towards a better approximation of the data generating process. The Yformer model is trained on the combined loss $\ell$,
$\ell = \alpha\, \ell_{\mathrm{mse}}(y, \hat{y}^{past}) + (1-\alpha)\, \ell_{\mathrm{mse}}(y', \hat{y}^{fut})$ (6)
where the first term tries to learn the past targets y and the second term learns the future targets y′. We use the reconstruction factor (α) to vary the importance of reconstruction and future prediction and tune this as a hyperparameter.
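A minimal sketch of Eq. (6) in PyTorch:

```python
import torch
import torch.nn.functional as F

def combined_loss(y_past, y_fut, y_past_hat, y_fut_hat, alpha=0.7):
    # alpha weights the auxiliary reconstruction of the past targets (Eq. 6);
    # alpha = 0.7 is the predominant optimal setting reported in Table 3.
    return alpha * F.mse_loss(y_past_hat, y_past) + (1 - alpha) * F.mse_loss(y_fut_hat, y_fut)

y_past, y_fut = torch.randn(96, 1), torch.randn(24, 1)
print(combined_loss(y_past, y_fut, torch.randn(96, 1), torch.randn(24, 1)))
```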
6 EXPERIMENTS
6.1 DATASETS
To evaluate our proposed Yformer architecture, we use two real-world public datasets and one public benchmark, and compare the experimental results with the published results of the Informer.
ETT (Electricity Transformer Temperature1): This real-world dataset for the electric power deployment introduced by Zhou et al. 2020 combines short-term periodical patterns, long-term periodical patterns, long-term trends, and irregular patterns. The data consists of load and temperature readings from two transformers and is split into two hourly subsets ETTh1 and ETTh2. The ETTm1 dataset is generated by splitting ETTh1 dataset into 15-minute intervals. The dataset has six features and 70,080 data points in total. For easy comparison, we kept the splits for train/val/test consistent with the published results in Zhou et al. 2020, where the available 20 months of data is split as 12/4/4.
ECL (Electricity Consuming Load2): This electricity dataset represents the electricity consumption from 2011 to 2014 of 370 clients recorded in 15-minutes periods in Kilowatt (kW). We split the data into 15/3/4 months for train, validation, and test respectively as in Zhou et al. 2020.
6.2 EXPERIMENTAL SETUP
Baseline: Our main baseline is the Informer architecture by Zhou et al. 2020. The results presented in the paper were reported to have a data scaling issue^3, and the authors have updated
^1 Available under https://github.com/zhouhaoyi/ETDataset.
^2 Available under https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
^3 https://github.com/zhouhaoyi/Informer2020/issues/41
their results in the official GitHub repository^4. Therefore, we compare against these updated results. Recently, the Query Selector (Klimek et al., 2021) utilized the Informer framework for far horizon forecasting; however, they report results from the Informer paper before the bug fix, and hence we avoid comparison with this work. As a second baseline, we also compare against the second-best performing model in Zhou et al. 2020, which is the Informer variant that uses the canonical attention module. It is represented as Informer* in the tables. Furthermore, we also compare against the DeepAR (Salinas et al., 2020), LogTrans (Li et al., 2019), and LSTnet (Lai et al., 2018) architectures, as they outperformed the Informer baseline for certain forecasting horizons. For a quick analysis, we present the percent improvement achieved by the Yformer over the current best results.
For easy comparison, we choose two commonly used metrics for time series forecasting to evaluate the Yformer architecture, the MAE and MSE in Equation 2. We report the results for the MAE in the paper and provide the MSE results in the appendix section D. We performed our experiments on GeForce RTX 2080 Ti GPU nodes with 32 GB ram and provide results as an average of three runs.
6.3 RESULTS AND ANALYSIS
This section compares our results with the results reported in the Informer baseline both in uni- and multivariate settings for the multiple datasets and horizons. The best-performing and the second-best models (lowest MAE) are highlighted in bold and red, respectively.
Univariate: The proposed Yformer model is able to outperform the current state of the art Informer baseline in 19 out of the 20 available tasks across different datasets and horizons, by an average of 13.62% MAE. Table 1 illustrates that the superiority of the Yformer is not limited to far horizons but holds even for the shorter horizons and in general across datasets. Considering the individual datasets, the Yformer surpasses the current state of the art by 8, 6.8, 21.9, and 18.1% MAE for the ETTh1, ETTh2, ETTm1, and ECL datasets respectively, as seen in Table 1. We also report MSE scores in the supplementary appendix section D, which illustrate an improvement of 16.7, 12.6, 34.8 and 15.2% for the ETTh1, ETTh2, ETTm1, and ECL datasets respectively. We observe that the MAE for the model is greater at horizon 48 than at horizon 168 for the ETTh1 dataset. This may be a case where the reused hyperparameters from the Informer paper are far from optimal for
^4 Commit 702fb9bbc69847ecb84a8bb205693089efb41c6
the Yformer. The other results show consistent behavior of increasing error with increasing horizon length τ . Additionally, this behavior is also observed in the Informer baseline for ETTh2 dataset (Table 2), where the loss is 1.340 for horizon 336 and 1.515 for a horizon of 168.
Figure 2 separates the improvement due to the Yformer architecture from that due to the reconstruction loss, relative to the baseline Informer. We notice that the Yformer architecture without the reconstruction loss is already able to outperform the Informer architecture across different datasets and horizons. Additionally, the optimal value for the loss weighting hyperparameter α is always greater than zero, as shown in Table 3, confirming the assumption that the addition of the auxiliary reconstruction loss term helps the model generalize well to the future distribution.
Multivariate: We observe a similar trend in the multivariate setting. Here the Yformer model outperforms the current state of the art method in all of the 20 tasks across the four datasets by a margin of 11.85% MAE. There is a clear superiority of the proposed approach for longer horizons across the different datasets. Across the different datasets, the Yformer improves the current state of the art results by 9, 13.5, 13.1, and 11.7% MAE and 12, 26.3, 13.9, and 17.1% MSE for the ETTh1, ETTh2, ETTm1, and ECL datasets respectively (table in the supplementary appendix section D). We attribute the improvement in performance to the superior architecture and the ability to approximate the data distribution for the multiple targets due to the reconstruction loss.
7 ABLATION STUDY
We performed additional experiments on the ETTh2 and ETTm1 datasets to study the impact of the Yformer model architecture and the effect of the reconstruction loss separately.
7.1 Y-FORMER ARCHITECTURE
In this section we attempt to understand (1) How much of an improvement is brought about by the Y-shaped model architecture? and (2) How much impact did the reconstruction loss have? From Figure 2, it is clear that the Yformer architecture performs better or is comparable to the Informer throughout the entire horizon range (without the reconstruction loss α = 0). Moreover, we can notice that for the larger horizons, the Yformer architecture (with α = 0) seems to have a clear
advantage over the Informer baseline. We attribute this improvement in performance to the direct U-Net-inspired connections within the Yformer architecture. Using feature maps at multiple resolutions offers a clear advantage by eliminating vanishing gradients and encouraging feature reuse.
The reconstruction loss seems to have a significant impact in reducing the loss for the ETTm1 multivariate case (bottom right graph of Figure 2). In general, it helps to improve the results for the Yformer architecture, as the green curve in Figure 2 is almost always below the blue (Informer) and orange (Yformer with α = 0) curves. The significance of the loss depends on the dataset but, in general, the auxiliary loss helps to improve the overall performance.
7.2 RECONSTRUCTION FACTOR
How impactful is the reconstruction factor α from the proposed loss in Eq. 6? We analyzed the optimal value for α across different forecasting horizons and summarize them in Table 3. Interestingly, α = 0.7 seems to be the predominant optimal setting, implying that a high weight for the reconstruction loss helps the Yformer achieve a lower loss on the future targets. Additionally, we can observe a trend that α is on average larger for short forecasting horizons, signifying that the input reconstruction loss is also important for short horizon forecasts.
8 CONCLUSION
Time series forecasting is an important business and research problem that has a broad impact in today's world. This paper proposes a novel Y-shaped architecture, specifically designed for the far horizon time series forecasting problem. The study shows the importance of direct connections from the multi-resolution encoder to the decoder and of a reconstruction loss for the task of time series forecasting. The Yformer couples the U-Net architecture from the image segmentation domain with a sparse transformer model to achieve state of the art results in 39 out of the 40 tasks across various settings, as presented in Tables 1 and 2.
9 REPRODUCIBILITY STATEMENT
All the experimental datasets used to evaluate the Yformer architecture are publicly available, and we provide the source code for the proposed architecture with the supplementary materials for reproducibility. The hyperparameters needed to reproduce the reported results on the different datasets are presented in appendix section E.
A APPENDIX : ADDITIONAL FIGURES
B APPENDIX : ADDITIONAL RELATED WORKS
Time-series Forecasting: Time series forecasting has been a well-established research topic with steadily growing applications in real-world environments. Traditionally, ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018) have been used for time series forecasting. However, limited scalability and the inability to model non-linear patterns restrict their use for time series forecasting.
Reconstruction Loss: The usage of a reconstruction loss originated in the domain of variational autoencoders (Kingma & Welling, 2013), where it serves to reconstruct the inputs during the generative process. Forecasting for a long horizon in a single forward pass can be considered as generating a distribution given the past time steps as the input distribution. A reconstruction loss would hence be beneficial as
the forecast distribution is similar to the past input distribution. Moreover, a recent study by Le et al. 2018 has shown that the addition of the reconstruction term to any loss function generally provides uniform stability and bounds on the generalization error, therefore leading to a more robust model overall with no negative effect on the performance.
Our work tries to combine the different ideas of U-Net from image segmentation with a sparse attention network for modeling time series that also utilizes the additional guidance from reconstruction loss.
C APPENDIX : DEFINITION
C.1 OPERATORS
ProbSparseAttn: Attention module that uses the ProbSparse method introduced in Zhou et al. (2020). The matrix $\overline{Q} \in \mathbb{R}^{L_Q \times d}$ denotes the sparse query matrix with the u dominant queries.
$A_{\mathrm{ProbSparse}}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{\overline{Q} K^{T}}{\sqrt{d}}\right) V$ (7)
MaskedAttn: Canonical self-attention with masking to prevent positions from attending to subsequent positions in the future (Vaswani et al., 2017).
Conv1d: Given N batches of 1D arrays of length L and C channels/dimensions, a convolution operation produces an output:

$\mathrm{out}(N_i, C_{out_j}) = \mathrm{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \mathrm{weight}(C_{out_j}, k) \star \mathrm{input}(N_i, k)$ (8)
For further reference please visit pytorch Conv1D page
LayerNorm: Layer Normalization, introduced in Ba et al. (2016), normalizes the inputs across channels/dimensions. LayerNorm is the default normalization in common transformer architectures (Vaswani et al., 2017). Here, γ and β are learnable affine transformations:

$\mathrm{out}(N, *) = \frac{\mathrm{input}(N, *) - \mathrm{E}[\mathrm{input}(N, *)]}{\sqrt{\mathrm{Var}[\mathrm{input}(N, *)] + \epsilon}} * \gamma + \beta$ (9)
MaxPool: Given N batches of 1D arrays of length L and C channels/dimensions, a MaxPool operation produces an output:

$\mathrm{out}(N_i, C_j, k) = \max_{m=0,\ldots,\mathrm{kernel\_size}-1} \mathrm{input}(N_i, C_j, \mathrm{stride} \times k + m)$ (10)
For further reference please visit pytorch MaxPool1D page
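A quick shape check of the three operators above (Eqs. 8-10), with illustrative sizes:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32)                    # (batch N, channels C, length L)

conv = nn.Conv1d(in_channels=16, out_channels=16, kernel_size=3, padding=1)
print(conv(x).shape)                          # torch.Size([8, 16, 32])   (Eq. 8)

norm = nn.LayerNorm(16)                       # normalizes over the channel dim
print(norm(x.transpose(1, 2)).shape)          # torch.Size([8, 32, 16])   (Eq. 9)

pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)
print(pool(x).shape)                          # torch.Size([8, 16, 16])   (Eq. 10)
```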
ELU: Given an input x, the ELU applies an element-wise nonlinear activation function as shown:

$\mathrm{ELU}(x) = \begin{cases} x, & \text{if } x > 0 \\ \alpha \,(\exp(x) - 1), & \text{if } x \leq 0 \end{cases}$ (11)
ConvTranspose1d: Also known as deconvolution or fractionally strided convolution, uses convolution on padded input to produce upsampled outputs (see pytorch ConvTranspose1d page).
C.2 CONTRACTING PROBSPARSE SELF-ATTENTION BLOCKS
The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence (x, y) in a pyramid structure similar to Lin et al. (2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation $h_i$ from the $i$th block and projects it into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally a MaxPool operation reduces the latent dimension at each block, after non-linearity is applied with an ELU() activation function. The LayerNorm operation regularizes the model using Layer Normalization (Ba et al., 2016). Algorithm 2 shows the operations performed within a Contracting ProbSparse Self-Attention Block, which takes an input $h_i$ and produces the hidden representation $h_{i+1}$ for the $(i+1)$th block.
Algorithm 2 Contracting ProbSparse Self-Attention Block
Input: h_i    Output: h_{i+1}
  h_{i+1} ← ProbSparseAttn(h_i, h_i)
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← LayerNorm(h_{i+1})
  h_{i+1} ← MaxPool(ELU(Conv1d(h_{i+1})))
C.3 CONTRACTING MASKED SELF-ATTENTION BLOCKS
The Y-Future Encoder uses multiple Contracting Masked Self-Attention Blocks, which replace the ProbSparseAttn in the Contracting ProbSparse Self-Attention Blocks with a masked attention. Masking the attention in the Y-Future Encoder architecturally avoids any future information leak. In our experiments, adding restricted attention like the ProbSparse on top of an already masked attention resulted in a performance loss. This could be the result of query-key interactions going missing when ProbSparse is applied on an already masked attention. Algorithm 3 shows the operations performed within a Contracting Masked Self-Attention Block, which takes an input $h_i$ and produces the hidden representation $h_{i+1}$ for the $(i+1)$th block.
Algorithm 3 Contracting Masked Self-Attention Block
Input: h_i    Output: h_{i+1}
  h_{i+1} ← MaskedAttn(h_i, h_i)
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← LayerNorm(h_{i+1})
  h_{i+1} ← MaxPool(ELU(Conv1d(h_{i+1})))
D APPENDIX : MSE RESULTS
E APPENDIX : HYPERPARAMETERS
E.1 HYPERPARAMETER SEARCH SPACE
For a fair comparison, we retain the design choices from the Informer baseline, such as the history input length (T) for a particular forecast length (τ), so that any performance improvement can be attributed exclusively to the architecture of the Yformer model and not to an increased history input length. We performed a grid search over learning rates {0.001, 0.0001}, α-values {0, 0.3, 0.5, 0.7, 1}, and numbers of encoder and decoder blocks I = {2, 3, 4}, while keeping all other hyperparameters the same as the Informer. Furthermore, the Adam optimizer was used for all experiments, and we employed an early stopping criterion with a patience of three epochs. To counteract overfitting, we tried dropout with varying ratios but, interestingly, found its effect on the results to be minimal. Therefore, we adopt weight decay with factors {0, 0.02, 0.05} for additional regularization. We select the optimal hyperparameters based on the lowest validation loss and will publish the code upon acceptance.
E.2 OPTIMAL HYPERPARAMETERS | 1. What is the main contribution of the paper regarding long-sequence temporal forecasting?
2. What are the strengths and weaknesses of the proposed Yformer architecture?
3. How does the reviewer assess the related works and their comparisons with the proposed model?
4. What are the limitations regarding the architectural details and their impact on the evaluation of the architecture?
5. How does the reviewer evaluate the benchmarks used in the paper, particularly for time series datasets?
6. Are there any typos or missing references in the review that need correction? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a new Transformer-based architecture for long-sequence temporal forecasting (LSTF) utilising ProbSparse attention mechanisms to efficiently capture long-term dependencies with L log(L) complexity.
The Yformer builds on the Informer architecture with 3 key innovations:
1. Using distinct encoders to capture historical and known future information separately. This improves representation learning for time series data, while still maintaining computational efficiency with ProbSparse attention.
2. Using a common decoder to process encoder representations jointly. This also contains an upsampling step inspired by U-Net, although the benefits of upsampling are not explicitly evaluated.
3. Including an auxiliary reconstruction loss which uses the reconstruction error of past targets to regularise training.
Review
Strengths
Overall, the proposed architecture is intuitively compelling – echoing innovations observed in multi-horizon forecasting architectures (see related works comment below), while improving computational complexity using ProbSparse attention and downsampling. The strong improvements over the Informer baseline in numerous experiments also convincingly demonstrate the benefits of the proposed model for the LSTF problem.
Weaknesses
However, there are several key limitations that need to be addressed before the paper can be recommended for acceptance:
1. Architectural details – While the network diagrams and descriptive text do a good job in providing a high-level overview, the lack of details makes it difficult to evaluate the architecture in depth. For instance, a couple of questions come to mind:
* Do all historical input features need to be known in the future? The problem formulation is confusing here, as x and x' are both in R^{* x M}, which seems to imply an identical length T and number of features M – despite the text mentioning x' is from T to T+tau.
* How are dimensions modified in each layer of the network? As the downsampling/upsampling parallels to U-Net appear to be a key part of the model, details on how this is performed are important.
* What are the keys, queries and values used for each attention layer (ProbSparseAttn, MaskedAttn, ProbSparseCrossAttn), and how is ProbSparseCrossAttn implemented concretely?
* What is the length of the Conv1d filters in the various blocks, and are they purely linear transformations? Do dimensions change between each transformation?
* Is masked self-attention essential in the Y-Future encoder, and is there any reason why ProbSparse is not preferred? Does this affect computational efficiency, given that forecasting horizons appear to be larger than history lengths in many experiments from Appendix E.2?
2. Related works – While the authors do a good job of citing models for LSTF, the paper lacks references to modern neural forecasting architectures, many of which are attention-based and show improvements over LogTrans [2, 3] and DeepAR [1-3]. While computationally more inefficient, they also contain similar modifications to those proposed by the Yformer. For instance, [2, 3] use distinct encoding mechanisms for historical inputs, future inputs, and static variables – all of which are fed into a common attention-based decoder. In addition, [1] also trains the network using past targets as a regulariser (backcast). Comparisons to these models would help to further motivate the Yformer architecture as well.
3. Benchmarks – Given the focus on LSTF, comparison to simpler architectures that allow for extended receptive fields, e.g. dilated convolutions with WaveNet, would be useful. This is particularly important for time series datasets, which can be prone to overfitting with complex models – as shown by the short-horizon outperformance of DeepAR on the ECL dataset in the Informer paper.
Typos:
1. DeepAR is also mentioned as a benchmark, although results are not included in the paper.
References:
1. Oreshkin et al. N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting. ICLR 2020.
2. Lim et al. Temporal Fusion Transformers for Interpretable Multi-Horizon Time Series Forecasting. International Journal of Forecasting, Volume 3, Issue 4, 2021.
3. Eisenach et al. MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention. arXiv 2020.
ICLR | Title
Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting
Abstract
Time series data is ubiquitous in research as well as in a wide variety of industrial applications. Effectively analyzing the available historical data and providing insights into the far future allows us to make effective decisions. Recent research has witnessed the superior performance of transformer-based architectures, especially in the regime of far horizon time series forecasting. However, the current state of the art sparse Transformer architectures fail to couple down- and upsampling procedures to produce outputs in a similar resolution as the input. We propose the Yformer model, based on a novel Y-shaped encoder-decoder architecture that (1) uses direct connections from the downscaled encoder layers to the corresponding upsampled decoder layers in a U-Net inspired architecture, (2) combines the downscaling/upsampling with sparse attention to capture long-range effects, and (3) stabilizes the encoder-decoder stacks with the addition of an auxiliary reconstruction loss. Extensive experiments have been conducted with relevant baselines on four benchmark datasets, demonstrating average improvements of 19.82% and 18.41% in MSE and 13.62% and 11.85% in MAE over the current state of the art for the univariate and the multivariate settings, respectively.
1 INTRODUCTION
In the most simple case, time series forecasting deals with a scalar time-varying signal and aims to predict or forecast its values in the near future; countless applications in finance, healthcare, production automation, etc. (Carta et al., 2021; Cao et al., 2018; Sagheer & Kotb, 2019) can benefit from an accurate forecasting solution. Often not just a single scalar signal is of interest, but multiple at once, and further time-varying signals are available and even known for the future. For example, suppose one aims to forecast the energy consumption of a house: it likely depends on the social time that one seeks to forecast for (such as the next hour or day), and also on features of these time points (such as weekday, daylight, etc.), which are known already for the future. This is also the case in model predictive control (Camacho & Alba, 2013), where one is interested in forecasting the expected value realized by some planned action; this action is also known at the time of forecast. More generally, time series forecasting nowadays deals with quadruples (x, y, x′, y′) of known past predictors x, known past targets y, known future predictors x′ and sought future targets y′ (Figure 3 in appendix section A provides a simple illustration).
Time series problems can often be addressed by methods developed initially for images, treating them as 1-dimensional images. Especially for time-series classification, many typical time series encoder architectures have been adapted from models for images (Wang et al., 2017; Zou et al., 2019). Time series forecasting is then closely related to image outpainting (Van Hoorick, 2019), the task of predicting how an image likely extends to the left, right, top or bottom, as well as to the more well-known task of image segmentation, where for each input pixel an output pixel has to be predicted, whose channels encode pixel-wise classes such as vehicle, road, or pedestrian for, say, road scenes. Time series forecasting combines aspects from both problem settings: information about targets from shifted positions (e.g., the past targets y as in image outpainting) and information about other channels from the same positions (e.g., the future predictors x′ as in image segmentation). One of the most successful, principled architectures for the image segmentation task is the U-Net introduced in Ronneberger et al. (2015), an architecture that successively downsamples / coarsens
its inputs and then upsamples / refines the latent representation with deconvolutions also using the latent representations of the same detail level, tightly coupling down- and upsampling procedures and thus yielding latent features on the same resolution as the inputs.
Following the great success in Natural Language Processing (NLP) applications, attention-based, especially transformer-based, architectures (Vaswani et al., 2017) that model pairwise interactions between sequence elements have recently been adapted for time series forecasting. One of the significant challenges is that the length of the time series is often one or two orders of magnitude larger than in (sentence-level) NLP problems. Plenty of approaches aim to mitigate the quadratic complexity $O(T^2)$ in the sequence/time series length T to at most $O(T \log T)$. For example, the Informer architecture (Zhou et al., 2020), arguably one of the most accurate forecasting models researched so far, adapts the transformer by a sparse attention mechanism and a successive downsampling/coarsening of the past time series. As in the original transformer, only the coarsest representation is fed into the decoder. Possibly to remedy the loss in resolution by this procedure, the Informer feeds its input a second time into the decoder network, this time without any coarsening.
While forecasting problems share many commonalities with image segmentation problems, transformer-based architectures like the Informer do not involve coupled down- and upscaling procedures to yield predictions on the same resolution as the inputs. Thus, we propose a novel Y-shaped architecture called Yformer that
1. Couples downscaling/upscaling to leverage both coarse and fine-grained features for time series forecasting,
2. Combines the coupled scaling mechanism with sparse attention modules to capture long-range effects on all scale levels, and
3. Stabilizes encoder and decoder stacks by reconstructing the recent past.
2 RELATED WORK
Deep Learning Based Time Series Forecasting: While Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based architectures (Salinas et al., 2020; Rangapuram et al., 2018) outperform traditional methods like ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018), the addition of attention layers (Vaswani et al., 2017) to time series forecasting models has proven to be very beneficial across different problem settings (Fan et al., 2019; Qin et al., 2017; Lai et al., 2018). Attention allows direct pair-wise interaction with eccentric events (like holidays) and can model temporal dynamics inherently, unlike RNNs and CNNs, which fail to capture long-range dependencies directly. Recent works like Reformer (Kitaev et al., 2020), Linformer (Wang et al., 2020) and Informer (Zhou et al., 2020) have focused on reducing the quadratic complexity of modeling pair-wise interactions to $O(T \log T)$ with the introduction of restricted attention layers. Consequently, they can predict over longer forecasting horizons but are limited in their capability to aggregate features and maintain the resolution required for far horizon forecasting.
U-Net: The Yformer model is inspired by the famous U-Net architecture introduced in Ronneberger et al. (2015), originating from the field of medical image segmentation. The U-Net architecture is capable of compressing information by aggregating over the inputs and of up-sampling embeddings to the same resolution as the inputs from their compressed latent features. Current transformer architectures like the Informer (Zhou et al., 2020) do not utilize up-sampling techniques even though the network produces intermediate multi-resolution feature maps. Our work aims to capitalize on these multi-resolution feature maps and use the U-Net shape effectively for the task of time series forecasting. Previous works like Stoller et al. (2019) and Perslev et al. (2019) have successfully applied the U-Net architecture to the tasks of sequence modeling and time series segmentation, illustrating superior results in the respective tasks. These works motivate the use of a U-Net-inspired architecture for time series forecasting, as current methods fail to couple a sparse attention mechanism with the U-Net shaped architecture. An additional related works section is decoupled from the main text and presented in appendix section B.
3 PROBLEM FORMULATION
By a time series x with M channels, we mean a finite sequence of vectors in $\mathbb{R}^M$; denote their space by $\mathbb{R}^{*\times M} := \bigcup_{T\in\mathbb{N}} \mathbb{R}^{T\times M}$ and their length by $|x| := T$ (for $x \in \mathbb{R}^{T\times M}$, $M \in \mathbb{N}$). We write $(x, y) \in \mathbb{R}^{*\times(M+O)}$ to denote two time series of same length with M and O channels, respectively.
We model a time series forecasting instance as a quadruple $(x, y, x', y') \in \mathbb{R}^{*\times(M+O)} \times \mathbb{R}^{*\times(M+O)}$, where $x, y$ denote the past predictors and targets until a reference time point T and $x', y'$ denote the future predictors and targets from the reference point T over the next $\tau$ time steps. Here, $\tau = |x'|$ is called the forecast horizon. For a Time Series Forecasting Problem, given (i) a sample $D := \{(x_1, y_1, x'_1, y'_1), \ldots, (x_N, y_N, x'_N, y'_N)\}$ from an unknown distribution p of time series forecasting instances and (ii) a function $\ell : \mathbb{R}^{*\times(O+O)} \to \mathbb{R}$ called loss, we attempt to find a function $\hat{y} : \mathbb{R}^{*\times(M+O)} \times \mathbb{R}^{*\times M} \to \mathbb{R}^{*\times O}$ (with $|\hat{y}(x, y, x')| = |x'|$) with minimal expected loss

$\mathbb{E}_{(x,y,x',y')\sim p}\; \ell(y', \hat{y}(x, y, x'))$ (1)
The loss $\ell$ usually is the mean absolute error (MAE) or mean squared error (MSE) averaged over future time points:

$\ell_{\mathrm{mae}}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \lVert y'_t - \hat{y}_t \rVert_1, \qquad \ell_{\mathrm{mse}}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \lVert y'_t - \hat{y}_t \rVert_2^2$ (2)
Furthermore, if there is only one target channel and no predictor channels (O = 1,M = 0), the time series forecasting problem is called univariate, otherwise multivariate.
4 BACKGROUND
Our work incorporates restricted attention based Transformer in a U-Net inspired architecture. For this reason, we base our work on the current state of the art sparse attention model Informer, introduced in Zhou et al. (2020). We provide a brief overview of the sparse attention mechanism (ProbSparse) and the encoder block (Contracting ProbSparse Self-Attention Blocks) used in the Informer model for completeness.
ProbSparse Attention: The ProbSparse attention mechanism restricts the canonical attention (Vaswani et al., 2017) by selecting a subset u of dominant queries having the largest variance across all the keys. Consequently, the query matrix $Q \in \mathbb{R}^{L_Q \times d}$ in the canonical attention is replaced by a sparse query matrix $\overline{Q} \in \mathbb{R}^{L_Q \times d}$ consisting of only the u dominant queries. ProbSparse attention can hence be defined as:
$A_{\mathrm{ProbSparse}}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{\overline{Q} K^{T}}{\sqrt{d}}\right) V$ (3)
where d denotes the input dimension to the attention module. For more details on the ProbSparse attention mechanism, we refer the reader to Zhou et al. (2020).
Contracting ProbSparse Self-Attention Blocks: The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence (x, y) in a pyramid structure motivated from the image domain (Lin et al., 2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation hi from the ith block and projects the hidden representation into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally the MaxPool operation reduces the latent dimension by effectively distilling out redundant information at each block. We refer the reader to Algorithm 2 in the appendix section C where these operations are presented in an algorithmic structure for a comprehensive overview.
5 METHODOLOGY
The Yformer model is a Y-shaped (Figure 1b) symmetric encoder-decoder architecture that is specifically designed to take advantage of the multi-resolution embeddings generated by the Contracting ProbSparse Self-Attention Blocks. The fundamental design consideration is the adoption of U-Net-inspired connections to extract encoder features at multiple resolutions and provide direct connections to the corresponding symmetric decoder blocks (a simple illustration is provided in Figure 4, appendix section A). Furthermore, the addition of a reconstruction loss helps the model learn generalized embeddings that better approximate the data generating distribution.
The Y-Past Encoder of the Yformer is designed using a similar encoder structure as that of the Informer. The Y-Past Encoder embeds the past sequence (x, y) into a scalar projection along with the addition of positional and temporal embeddings. Multiple Contracting ProbSparse Self-Attention Blocks are used to generate encoder embeddings at various resolutions. The Informer model uses the final low-dimensional embedding as the input to the decoder (Figure 1a), whereas the Yformer retains the embeddings at multiple resolutions to be passed on to the decoder. This allows the Yformer to use the high-dimensional lower-level embeddings effectively.
The Y-Future Encoder of the Yformer mitigates the issue of the redundant reprocessing of parts of the past sequence (x, y) used as tokens $(x_{token}, y_{token})$ in the Informer architecture. The Informer model uses only the coarsest representation from the encoder embedding, leading to a loss in resolution and forcing the Informer to pass part of the past sequence as tokens $(x_{token}, y_{token})$ to the decoder (Figure 1a). The Yformer separates the future predictors from the past sequence (x, y) by passing the future predictors (x′) through a separate encoder and utilizing the multi-resolution embeddings to dismiss the need for tokens entirely. Unlike in the Y-Past Encoder, the attention blocks in the Y-Future Encoder are based on the masked canonical self-attention mechanism (Vaswani et al., 2017). Masking the attention ensures that there is no information leak from the future time steps to the past. Moreover, a masked canonical self-attention mechanism helps reduce the complexity, as half of the query-key interactions are restricted by design. Thus, the Y-Future Encoder is designed by stacking
multiple Contracting ProbSparse Self-Attention Blocks in which the ProbSparse attention is replaced by masked attention. We name these blocks Contracting Masked Self-Attention Blocks (Algorithm 3, appendix section C).
The Yformer processes the past inputs and the future predictors separately within its encoders. However, considering the time steps, the future predictors are a continuation of the past time steps. For this reason, the Yformer model concatenates (represented by the symbol ++) the past encoder embedding and the future encoder embedding along the time dimension after each encoder block, preserving the continuity between the past input time steps and the future time steps. Let i represent the index of an encoder block; then e^{past}_{i+1} and e^{fut}_{i+1} represent the outputs of the past encoder and the future encoder respectively. The final concatenated encoder embedding e_{i+1} is calculated as,
$$
\begin{aligned}
e^{past}_{i+1} &= \mathrm{ContractingProbSparseSelfAttentionBlock}(e^{past}_{i}) \\
e^{fut}_{i+1} &= \mathrm{ContractingMaskedSelfAttentionBlock}(e^{fut}_{i}) \\
e_{i+1} &= e^{past}_{i+1} \mathbin{+\!\!+}\, e^{fut}_{i+1}
\end{aligned} \qquad (4)
$$
The encoder embeddings represented by E = [e0, . . . , eI ] (where I is the number of encoder layers) contain the combination of past and future embeddings at multiple resolutions.
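As a rough sketch (ours, under the assumption that both encoders are built from contracting blocks like the one sketched above), the multi-resolution encoding of Eq. (4) can be written as:

```python
import torch

def encode(e_past, e_fut, past_blocks, fut_blocks):
    """Hypothetical sketch of Eq. (4); past_blocks / fut_blocks are assumed
    lists of contracting self-attention blocks."""
    E = [torch.cat([e_past, e_fut], dim=1)]          # e_0, time-axis concat
    for pb, fb in zip(past_blocks, fut_blocks):
        e_past, e_fut = pb(e_past), fb(e_fut)
        E.append(torch.cat([e_past, e_fut], dim=1))  # e_{i+1} = e_past ++ e_fut
    return E                                         # [e_0, ..., e_I]
```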
The Y-Decoder of the Yformer consists of two parts. The first part takes as input the final concatenated low-dimensional embedding e_I and performs a multi-head canonical self-attention mechanism. Here, the past encoder embedding e^{past}_I is allowed to attend to itself as well as to the future encoder embedding e^{fut}_I in an unrestricted fashion. The encoder embedding e_I is the low-dimensional distilled embedding, and skipping query-key interactions within these low-dimensional embeddings might deny the model useful pair-wise interactions. Therefore, it is by design that this is the only part of the Yformer model that uses canonical self-attention, in comparison to the Informer, which uses canonical attention within its repeating decoder block, as shown in Figure 1a. Since the canonical self-attention layer is separated from the repeating attention blocks within the decoder, the Yformer complexity arising from this full attention module does not increase with an increase in the number of decoder blocks. The U-Net architecture inspires the second part of the Y-Decoder. Consequently, the decoder is structured as a symmetric expanding path identical to the contracting encoder. We realize this architecture by introducing upsampling on the ProbSparse attention mechanism using the Expanding ProbSparse Cross-Attention Block.
The Expanding ProbSparse Cross-Attention Block within the Yformer decoder performs two tasks: (1) upsample the compressed encoder embedding e_I and (2) perform restricted cross attention between the expanding decoder embedding d_{I−i} and the corresponding encoder embedding e_i (represented in Figure 4, appendix section A). We accomplish both tasks by introducing an Expanding ProbSparse Cross-Attention Block as illustrated in Algorithm 1.
Algorithm 1 Expanding ProbSparse Cross-Attention Block
Input: d_{I−i}, e_i
Output: d_{I−i+1}
  d_{I−i+1} ← ProbSparseCrossAttn(d_{I−i}, e_i)
  d_{I−i+1} ← Conv1d(d_{I−i+1})
  d_{I−i+1} ← Conv1d(d_{I−i+1})
  d_{I−i+1} ← LayerNorm(d_{I−i+1})
  d_{I−i+1} ← ELU(ConvTranspose1d(d_{I−i+1}))
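A corresponding PyTorch sketch of this block (our illustration; the cross-attention placeholder and kernel sizes are assumptions) could look as follows, with ConvTranspose1d doubling the time dimension:

```python
import torch.nn as nn

class ExpandingBlock(nn.Module):
    """Hypothetical sketch of Algorithm 1; nn.MultiheadAttention stands in
    for the ProbSparse cross-attention."""
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv1 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(d_model)
        self.deconv = nn.ConvTranspose1d(d_model, d_model, kernel_size=4,
                                         stride=2, padding=1)  # doubles L
        self.act = nn.ELU()

    def forward(self, d, e):                     # d: decoder emb, e: encoder emb
        d, _ = self.cross_attn(d, e, e)          # decoder queries, encoder keys
        d = self.conv2(self.conv1(d.transpose(1, 2)))
        d = self.norm(d.transpose(1, 2))
        d = self.act(self.deconv(d.transpose(1, 2)))  # upsample time axis
        return d.transpose(1, 2)
```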
The Expanding ProbSparse Cross-Attention Blocks within the Yformer decoder use a ProbSparseCrossAttn to construct direct connections between the lower levels of the encoder and the corresponding symmetric higher levels of the decoder. Direct connections from the encoder to the decoder are an essential component of the majority of models within the image domain. For example, ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) have demonstrated that direct connections between previous feature maps strengthen feature propagation, reduce parameters, mitigate vanishing gradients and encourage feature reuse. However, current transformer-based architectures like the Informer fail to utilize such direct connections. The ProbSparseCrossAttn
takes as input the decoder embedding from the previous layer d_{I−i} as queries and the corresponding encoder embedding e_i as keys. The Yformer uses the ProbSparse restricted attention so that the model remains scalable with an increase in the number of decoder blocks.
We utilize ConvTranspose1d, popularly known as deconvolution, for incrementally increasing the embedding space. The U-Net architecture uses a symmetric expanding path built from such deconvolution layers. This property enables the model not only to aggregate over the input but also to upscale the latent dimensions, improving the overall expressivity of the architecture. The decoder of the Yformer follows a similar strategy by employing deconvolution to expand the embedding space of the encoded output. We describe the different operators used in appendix section C.
A fully connected layer (LinearLayer) predicts the future time steps ŷ^{fut} from the final decoder layer d_I and additionally reconstructs the past input targets ŷ^{past}.
$$[\hat{y}^{past}, \hat{y}^{fut}] = \mathrm{LinearLayer}(d_I) \qquad (5)$$
The addition of the reconstruction loss to the Yformer as an auxiliary loss serves two significant purposes. Firstly, the reconstruction loss acts as a data-dependent regularization term that reduces overfitting by learning embeddings that are more general (Ghasedi Dizaji et al., 2017; Jarrett & van der Schaar, 2020). Secondly, the reconstruction loss helps in producing future outputs in a similar distribution as the inputs (Bank et al., 2020). For far horizon forecasting, we are interested in learning a future-output distribution. However, the future-output distribution and the past-input distribution arise from the same data generating process. Therefore, having an auxiliary reconstruction loss directs the gradients towards a better approximation of the data generating process. The Yformer model is trained on the combined loss ℓ,
$$\ell = \alpha\, \ell_{mse}(y, \hat{y}^{past}) + (1 - \alpha)\, \ell_{mse}(y', \hat{y}^{fut}) \qquad (6)$$
where the first term tries to learn the past targets y and the second term learns the future targets y′. We use the reconstruction factor (α) to vary the importance of reconstruction and future prediction and tune this as a hyperparameter.
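A minimal sketch of the prediction head and combined loss of Eqs. (5) and (6) is given below (our illustration; splitting the head output along the time axis is an assumption):

```python
import torch.nn.functional as F

def yformer_loss(d_I, linear_layer, y, y_fut, alpha=0.7):
    """Hypothetical sketch of Eqs. (5)-(6); alpha is the reconstruction factor."""
    pred = linear_layer(d_I)                  # [y_hat_past, y_hat_fut]
    y_hat_past = pred[:, : y.size(1)]         # reconstruction of past targets
    y_hat_fut = pred[:, y.size(1):]           # forecast of future targets
    return alpha * F.mse_loss(y_hat_past, y) + \
           (1 - alpha) * F.mse_loss(y_hat_fut, y_fut)
```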
6 EXPERIMENTS
6.1 DATASETS
To evaluate our proposed Yformer architecture, we use two real-world public datasets and one public benchmark, and compare our experimental results with the results published for the Informer.
ETT (Electricity Transformer Temperature1): This real-world dataset for electric power deployment, introduced by Zhou et al. (2020), combines short-term periodical patterns, long-term periodical patterns, long-term trends, and irregular patterns. The data consists of load and temperature readings from two transformers and is split into two hourly subsets, ETTh1 and ETTh2. The ETTm1 dataset is generated by splitting the ETTh1 dataset into 15-minute intervals. The dataset has six features and 70,080 data points in total. For easy comparison, we kept the train/val/test splits consistent with the published results in Zhou et al. (2020), where the available 20 months of data are split as 12/4/4.
ECL (Electricity Consuming Load2): This electricity dataset represents the electricity consumption of 370 clients from 2011 to 2014, recorded in 15-minute periods in kilowatts (kW). We split the data into 15/3/4 months for train, validation, and test respectively, as in Zhou et al. (2020).
6.2 EXPERIMENTAL SETUP
Baseline: Our main baseline is the Informer architecture by Zhou et al. (2020). The results presented in that paper were reported to have a data scaling issue3, and the authors have updated
their results in the official GitHub repository4. Therefore, we compare against these updated results. Recently, the Query Selector (Klimek et al., 2021) utilized the Informer framework for far horizon forecasting; however, it reports results from the Informer paper before the bug fix, and hence we avoid comparison with this work. As a second baseline, we also compare against the second-best performing model in Zhou et al. (2020), the Informer variant that uses the canonical attention module, represented as Informer* in the tables. Furthermore, we also compare against the DeepAR (Salinas et al., 2020), LogTrans (Li et al., 2019), and LSTnet (Lai et al., 2018) architectures, as they outperformed the Informer baseline for certain forecasting horizons. For a quick analysis, we present the percentage improvement achieved by the Yformer over the current best results.

1 Available under https://github.com/zhouhaoyi/ETDataset.
2 Available under https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
3 https://github.com/zhouhaoyi/Informer2020/issues/41
For easy comparison, we choose two commonly used time series forecasting metrics to evaluate the Yformer architecture: the MAE and MSE in Equation 2. We report the MAE results in the paper and provide the MSE results in appendix section D. We performed our experiments on GeForce RTX 2080 Ti GPU nodes with 32 GB RAM and report results as an average of three runs.
6.3 RESULTS AND ANALYSIS
This section compares our results with the results reported in the Informer baseline both in uni- and multivariate settings for the multiple datasets and horizons. The best-performing and the second-best models (lowest MAE) are highlighted in bold and red, respectively.
Univariate: The proposed Yformer model is able to outperform the current state of the art Informer baseline in 19 out of the 20 available tasks across different datasets and horizons by an average of 13.62% in MAE. Table 1 illustrates that the superiority of the Yformer is not limited to far horizons but extends to shorter horizons and, in general, across datasets. Considering the individual datasets, the Yformer surpasses the current state of the art by 8, 6.8, 21.9, and 18.1% in MAE for the ETTh1, ETTh2, ETTm1, and ECL datasets respectively, as seen in Table 1. We also report MSE scores in appendix section D, which illustrate an improvement of 16.7, 12.6, 34.8 and 15.2% for the ETTh1, ETTh2, ETTm1, and ECL datasets respectively. We observe that the MAE for the model is greater at horizon 48 than at horizon 168 for the ETTh1 dataset. This may be a case where the hyperparameters reused from the Informer paper are far from optimal for
the Yformer. The other results show the consistent behavior of increasing error with increasing horizon length τ. Additionally, this behavior is also observed in the Informer baseline for the ETTh2 dataset (Table 2), where the loss is 1.340 for horizon 336 and 1.515 for a horizon of 168.

4 Commit 702fb9bbc69847ecb84a8bb205693089efb41c6
Figure 2 separates the improvement due to the Yformer architecture from that due to the reconstruction loss. We notice that the Yformer architecture without the reconstruction loss is able to outperform the Informer architecture across different datasets and horizons. Additionally, the optimal value for the loss weighting hyperparameter α is always greater than zero, as shown in Table 3, confirming the assumption that the addition of the auxiliary reconstruction loss term helps the model generalize well to the future distribution.
Multivariate: We observe a similar trend in the multivariate setting. Here the Yformer model outperforms the current state of the art method in all of the 20 tasks across the four datasets by a margin of 11.85% in MAE. There is a clear superiority of the proposed approach for longer horizons across the different datasets. Across the different datasets, the Yformer improves the current state of the art results by 9, 13.5, 13.1, and 11.7% in MAE and 12, 26.3, 13.9, and 17.1% in MSE for the ETTh1, ETTh2, ETTm1, and ECL datasets respectively (table in appendix section D). We attribute the improvement in performance to the superior architecture and the ability to approximate the data distribution for the multiple targets due to the reconstruction loss.
7 ABLATION STUDY
We performed additional experiments on the ETTh2 and ETTm1 datasets to study the impact of the Yformer model architecture and the effect of the reconstruction loss separately.
7.1 Y-FORMER ARCHITECTURE
In this section we attempt to understand (1) how much of an improvement is brought about by the Y-shaped model architecture, and (2) how much impact the reconstruction loss has. From Figure 2, it is clear that the Yformer architecture performs better than or comparably to the Informer throughout the entire horizon range (without the reconstruction loss, α = 0). Moreover, we notice that for the larger horizons, the Yformer architecture (with α = 0) has a clear advantage over the Informer baseline. We attribute this improvement in performance to the direct U-Net-inspired connections within the Yformer architecture. Using feature maps at multiple resolutions offers a clear advantage by eliminating vanishing gradients and encouraging feature reuse.
The reconstruction loss has a significant impact in reducing the loss for the ETTm1 multivariate case (bottom right graph in Figure 2). In general, it also helps to improve the results for the Yformer architecture, as the green curve in Figure 2 is almost always below the blue (Informer) and orange (Yformer with α = 0) curves. The significance of the loss depends on the dataset but, in general, the auxiliary loss helps to improve the overall performance.
7.2 RECONSTRUCTION FACTOR
How impactful is the reconstruction factor α from the proposed loss in Eq. 6? We analyzed the optimal value of α across different forecasting horizons and summarized the results in Table 3. Interestingly, α = 0.7 is the predominant optimal setting, implying that a high weight for the reconstruction loss helps the Yformer achieve a lower loss on the future targets. Additionally, we observe a trend that α is on average larger for short forecasting horizons, signifying that the input reconstruction loss is also important for short horizon forecasts.
8 CONCLUSION
Time series forecasting is an important business and research problem with broad impact in today's world. This paper proposes a novel Y-shaped architecture specifically designed for the far horizon time series forecasting problem. The study shows the importance of direct connections from the multi-resolution encoder to the decoder and of the reconstruction loss for the task of time series forecasting. The Yformer couples the U-Net architecture from the image segmentation domain with a sparse transformer model to achieve state of the art results in 39 out of the 40 tasks across various settings, as presented in Tables 1 and 2.
9 REPRODUCIBILITY STATEMENT
All the experimental datasets used to evaluate the Yformer architecture are publicly available, and we provide the source code for the proposed architecture with the supplementary materials for reproducibility. The hyperparameters needed to reproduce the reported results on the different datasets are presented in appendix section E.
A APPENDIX : ADDITIONAL FIGURES
B APPENDIX : ADDITIONAL RELATED WORKS
Time-series Forecasting: Time series forecasting has been a well-established research topic with steadily growing applications in real-world environments. Traditionally, ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018) have been used for time series forecasting. However, limited scalability and the inability to model non-linear patterns restrict their applicability.
Reconstruction Loss: The usage of a reconstruction loss originated in the domain of variational autoencoders (Kingma & Welling, 2013) to reconstruct the inputs during the generative process. Forecasting a long horizon in a single forward pass can be considered as generating a distribution given the past time steps as the input distribution. A reconstruction loss is hence beneficial, as
the forecast distribution is similar to the past input distribution. Moreover, a recent study by Le et al. (2018) has shown that the addition of the reconstruction term to any loss function generally provides uniform stability and bounds on the generalization error, therefore leading to a more robust model overall with no negative effect on performance.
Our work tries to combine the different ideas of U-Net from image segmentation with a sparse attention network for modeling time series that also utilizes the additional guidance from reconstruction loss.
C APPENDIX : DEFINITION
C.1 OPERATORS
ProbSparseAttn: Attention module that uses the ProbSparse method introduced in Zhou et al. (2020). The matrix Q̄ ∈ R^{L_Q×d} denotes the sparse query matrix with the u dominant queries.
$$A_{\mathrm{ProbSparse}}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{\bar{Q}K^{T}}{\sqrt{d}}\right)V \qquad (7)$$
MaskedAttn: Canonical self-attention with masking to prevent positions from attending to subsequent positions in the future (Vaswani et al., 2017).
Conv1d: Given N batches of 1D arrays of length L with C channels/dimensions, a convolution operation produces the output:
$$\mathrm{out}(N_i, C_{out_j}) = \mathrm{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \mathrm{weight}(C_{out_j}, k) \star \mathrm{input}(N_i, k) \qquad (8)$$
For further reference, please see the PyTorch Conv1d documentation.
LayerNorm: Layer Normalization introduced in Ba et al. (2016), normalizes the inputs across channels/dimensions. LayerNorm is the default normalization in common transformer architectures (Vaswani et al., 2017). Here, γ and β are learnable affine transformations.
$$\mathrm{out}(N, *) = \frac{\mathrm{input}(N, *) - \mathrm{E}[\mathrm{input}(N, *)]}{\sqrt{\mathrm{Var}[\mathrm{input}(N, *)] + \epsilon}} * \gamma + \beta \qquad (9)$$
MaxPool: Given N batches of 1D arrays of length L with C channels/dimensions, a MaxPool operation produces the output:
$$\mathrm{out}(N_i, C_j, k) = \max_{m=0,\ldots,\mathrm{kernel\_size}-1} \mathrm{input}(N_i, C_j, \mathrm{stride} \times k + m) \qquad (10)$$
For further reference, please see the PyTorch MaxPool1d documentation.
ELU: Given an input x, the ELU applies element-wise non linear activation function as shown.
$$\mathrm{ELU}(x) = \begin{cases} x, & \text{if } x > 0 \\ \alpha(\exp(x) - 1), & \text{if } x \leq 0 \end{cases} \qquad (11)$$
ConvTranspose1d: Also known as deconvolution or fractionally strided convolution, it applies convolution on a padded input to produce upsampled outputs (see the PyTorch ConvTranspose1d documentation).
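As a toy illustration of how these operators act on the time axis (our example, with arbitrary shapes), MaxPool halves the sequence length in the contracting path and ConvTranspose1d restores it in the expanding path:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 32)                           # (batch, channels, length)
down = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)(x)
up = nn.ConvTranspose1d(8, 8, kernel_size=4, stride=2, padding=1)(down)
print(x.shape, down.shape, up.shape)                # lengths 32 -> 16 -> 32
```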
C.2 CONTRACTING PROBSPARSE SELF-ATTENTION BLOCKS
The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence (x, y) in a pyramid structure similar to Lin et al. (2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation h_i from the ith block and projects the hidden representation into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally a MaxPool operation is performed to reduce the latent dimension at each block, after applying non-linearity with an ELU activation function. The LayerNorm operation regularizes the model using Layer Normalization (Ba et al., 2016). Algorithm 2 shows the operations performed within a Contracting ProbSparse Self-Attention Block, which takes an input h_i and produces the hidden representation h_{i+1} for the (i+1)th block.
Algorithm 2 Contracting ProbSparse Self-Attention Block
Input: h_i
Output: h_{i+1}
  h_{i+1} ← ProbSparseAttn(h_i, h_i)
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← LayerNorm(h_{i+1})
  h_{i+1} ← MaxPool(ELU(Conv1d(h_{i+1})))
C.3 CONTRACTING MASKED SELF-ATTENTION BLOCKS
The Y-Future Encoder uses multiple Contracting Masked Self-Attention Blocks, which replace the ProbSparseAttn in the Contracting ProbSparse Self-Attention Blocks with masked attention. Masking the attention in the Y-Future Encoder architecturally prevents any future information leak. In our experiments, adding restricted attention like the ProbSparse on top of already masked attention resulted in a performance loss. This could be the result of query-key interactions missed when the ProbSparse is applied to already masked attention. Algorithm 3 shows the operations performed within a Contracting Masked Self-Attention Block, which takes an input h_i and produces the hidden representation h_{i+1} for the (i+1)th block.
Algorithm 3 Contracting Masked Self-Attention Block
Input: h_i
Output: h_{i+1}
  h_{i+1} ← MaskedAttn(h_i, h_i)
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← Conv1d(h_{i+1})
  h_{i+1} ← LayerNorm(h_{i+1})
  h_{i+1} ← MaxPool(ELU(Conv1d(h_{i+1})))
D APPENDIX : MSE RESULTS
E APPENDIX : HYPERPARAMETERS
E.1 HYPERPARAMETER SEARCH SPACE
For a fair comparison, we retain the design choices from the Informer baseline like the history input length (T ) for a particular forecast length (τ ), so that any performance improvement can exclusively be attributed to the architecture of the Yformer model and not to an increased history input length. We performed a grid search for learning rates of {0.001, 0.0001}, α-values of {0, 0.3, 0.5, 0.7, 1}, number of encoder and decoder blocks I = {2, 3, 4} while keeping all the other hyperparameters the same as the Informer. Furthermore, Adam optimizer was used for all experiments, and we employed an early stopping criterion with a patience of three epochs. To counteract overfitting, we tried dropout with varying ratios but interestingly found the effect to be minimal in the results. Therefore, we adopt weight-decay for our experiments with factors {0, 0.02, 0.05} for additional regularization. We select the optimal hyperparameters based on the lowest validation loss and will publish the code upon acceptance.
E.2 OPTIMAL HYPERPARAMETERS

1. What is the focus and contribution of the paper on time-series forecasting?
2. What are the strengths and weaknesses of the proposed Yformer model compared to the Informer?
3. Do you have any concerns regarding the experimental setup and comparisons made in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can you provide additional information or experiments to support the effectiveness and efficiency of the Yformer model?
Summary Of The Paper
Recent works such as the Informer have used efficient attention mechanisms and shown significant performance improvements in the long sequence time-series forecasting problems. However, the authors argued that using only the coarsest past representations for the decoder could be a major limitation. In this paper, the authors proposed the Yformer model by combining the Informer and the U-Net architectures. They adopted direct connections from the multi-resolution encoder to decoder to leverage both coarse and fine-grained representations. The authors claimed the effectiveness of the proposed method through three benchmark datasets used in the Informer paper.
Review
While the proposed methods hold great promise, my biggest concern is that the experiments seem like unfair comparisons. The authors compare the performance of Yformer against the excerpted results from the Informer. However, in my view, the Yformer and Informer use different problem formulations. According to the authors’ problem formulation, the Yformer predicts the future targets y’ based on the three inputs: past predictors x, past targets y, and future predictors x’. On the other hand, the Informer does not rely on the future predictors x’. The claimed performance improvement by the Yformer could be due to additional information within the future predictors. Furthermore, I’m not sure whether the authors’ problem formulation is appropriate in the real-world setting. Future predictors such as power load features in the ETT dataset would not be the “known” variables for the prediction.
While the authors state in the abstract that they used four benchmark datasets, the manuscript only contains experiment results for three benchmark datasets. Compared to the Informer paper, it seems the results for the Weather dataset are missing. If there are no particular reasons for the exclusion, can you also provide results for the Weather dataset?
In my view, the authors must provide more detailed explanations for the proposed model to be self-contained. I think the current version is not easy to follow if readers were not already familiar with the Informer and the U-Net. In addition, the current version does not provide detailed information on dataset statistics (e.g. number of predictors and targets) and their pre-processing procedures.
Can you provide more in-depth experimental analyses showing (1) how the U-Net-shaped architecture helps long sequence time-series forecasting and (2) how it affects the computational and memory efficiency of the model?
ICLR
Title
Information Theoretic Model Predictive Q-Learning
Abstract
Model-free reinforcement learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply, and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, model predictive control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real robots in a systematic manner.
1 INTRODUCTION
Deep reinforcement learning algorithms have recently generated great interest due to their successful application to a range of difficult problems, including computer Go (Silver et al., 2016) and high-dimensional control tasks such as humanoid locomotion (Lillicrap et al., 2015; Schulman et al., 2015). While these methods are extremely general and can learn policies and value functions for complex tasks directly from data, they can also be sample inefficient, and partially-optimized solutions can be arbitrarily poor. These challenges severely restrict RL's applicability to real systems such as robots due to data collection challenges and safety concerns.
One straightforward way to mitigate these issues is to learn a policy or value function entirely in a high-fidelity simulator (Shah et al., 2017; Todorov et al., 2012) and then deploy the optimized policy on the real system. However, this approach can fail due to model bias, external disturbances, subtle differences in the real robot's hardware, and poorly modeled phenomena such as friction and contact dynamics. Sim-to-real transfer approaches based on domain randomization (Sadeghi & Levine, 2016; Tobin et al., 2017) and model ensembles (Kurutach et al., 2018; Shyam et al., 2019) aim to make the policy robust by training it to be invariant to varying dynamics. However, learning a globally consistent value function or policy is hard due to optimization issues such as local optima and covariate shift between the exploration policy used for learning the model and the actual control policy executed on the task (Ross & Bagnell, 2012).
Model predictive control (MPC) is a widely used method for generating feedback controllers that repeatedly re-optimize a finite horizon sequence of controls using an approximate dynamics model that predicts the effect of these controls on the system. The first control in the optimized sequence is executed on the real system and the optimization is performed again from the resulting next state. However, the performance of MPC can suffer due to approximate or simplified models and limited lookahead. Therefore the parameters of MPC, including the model and horizon H, should be carefully tuned to obtain good performance. While using a longer horizon is generally preferred, real-time requirements may limit the amount of lookahead, and a biased model can result in compounding model errors.
In this work, we present an approach to RL that leverages the complementary properties of model-free reinforcement learning and model-based optimal control. Our proposed method views MPC as a way to simultaneously approximate and optimize a local Q function via simulation, and Q-learning as a way to improve MPC using real-world data. We focus on the paradigm of entropy regularized reinforcement learning, where the aim is to learn a stochastic policy that minimizes the cost-to-go as well as the KL divergence with respect to a prior policy. This approach enables faster convergence by mitigating the over-commitment issue in the early stages of Q-learning, and better exploration (Fox et al., 2015). We discuss how this formulation of reinforcement learning has deep connections to information theoretic stochastic optimal control, where the objective is to find control inputs that minimize the cost while staying close to the passive dynamics of the system (Theodorou & Todorov, 2012). This helps both in injecting domain knowledge into the controller and in mitigating issues caused by over-optimizing the biased estimate of the current cost due to model error and the limited horizon of optimization. We explore this connection in depth and derive an infinite horizon information theoretic model predictive control algorithm based on Williams et al. (2017). We test our approach, called Model Predictive Q Learning (MPQ), on simulated continuous control tasks and compare it against information theoretic MPC and soft Q-learning (Haarnoja et al., 2017), demonstrating faster learning with fewer system interactions and better performance compared to MPC and soft Q-learning, even in the presence of sparse rewards. The learned Q function allows us to truncate the MPC planning horizon, which provides additional computational benefits. Finally, we also compare MPQ against domain randomization (DR) on sim-to-sim tasks. We conclude that DR approaches can be sensitive to the hand-designed distributions used for randomizing parameters, which causes the learned Q function to be biased and suboptimal under the true system's parameters, whereas learning from data generated on the true system is able to overcome biases and adapt to the real dynamics.
2 RELATED WORK
Model predictive control has a rich history in robotics, ranging from control of mobile robots such as quadrotors (Desaraju & Michael, 2016) and aggressive autonomous vehicles (Wagener et al., 2019; Williams et al., 2017) to generating complex behaviors for high-dimensional systems such as contact-rich manipulation (Fu et al., 2016; Kumar et al., 2014) and humanoid locomotion (Erez et al., 2013). The success of MPC can largely be attributed to online policy optimization which helps mitigate model bias. The information theoretic view of MPC aims to find a policy at every timestep that minimizes the cost over a finite horizon as well as the KL-divergence with respect to a prior policy usually specified by the system’s passive dynamics (Theodorou & Todorov, 2012; Williams et al., 2017). This helps maintain exploratory behavior and avoid over-commitment to the current estimate of the cost function, which is biased due to modeling errors and a finite horizon. Sampling-based MPC algorithms (Wagener et al., 2019; Williams et al., 2017) are also highly parallelizable enabling GPU implementations that aid with real-time control. However, efficient MPC implementations still require careful system identification and extensive amounts of manual tuning.
Deep RL methods are extremely general and can optimize neural network policies from raw sensory inputs with little knowledge of the system dynamics. Both value-based and policy-based approaches (Schulman et al., 2015) have demonstrated excellent performance on complex control problems. These approaches, however, fall short on several accounts when applied to a real robotic system. First, they have high sample complexity, potentially requiring millions of interactions with the environment. This can be very expensive on a real robot, not least because the initial performance of the policy can be arbitrarily bad. Using random exploration methods such as ε-greedy can further aggravate this problem. Second, a value function or policy learned entirely in simulation inherits the biases of the simulator. Even if a perfect simulation is available, learning a globally consistent value function or policy is an extremely hard task, as noted in (Silver et al., 2016; Zhong et al., 2013). This can be attributed to local optima when using neural network representations or the inherent biases in the Q-learning update rules (Fox et al., 2015; Van Hasselt et al., 2016). In fact, it can be difficult to explain why Q-learning algorithms work or fail (Schulman et al., 2017).
Domain randomization aims to make policies learned in simulation more robust by randomizing simulation parameters during training with the aim of making the policies invariant to potential parameter error (Peng et al., 2018; Sadeghi & Levine, 2016; Tobin et al., 2017). However, these policies are not adaptive to unmodeled effects, i.e., they take into account only aleatoric and not epistemic uncertainty. Also, such approaches are highly sensitive to the hand-designed distributions used for randomizing simulation parameters and can be highly suboptimal under the real system's parameters, for example, if a very large range of simulation parameters is used. Model-based approaches aim to use real data to improve the model of the system and then perform reinforcement learning or optimal control using the new model or an ensemble of models (Kurutach et al., 2018; Ross & Bagnell, 2012; Shyam et al., 2019). Although learning accurate models is a promising avenue, we argue that learning a globally consistent model is an extremely hard problem, and instead we should learn a policy that can rapidly adapt to experienced real-world dynamics.
The use of entropy regularization has been explored in RL and inverse RL for its better sample efficiency and exploration properties (Fox et al., 2015; Haarnoja et al., 2017; 2018; Schulman et al., 2017; Ziebart et al., 2008). This framework allows incorporating prior knowledge into the problem and learning multi-modal policies that can generalize across different tasks. Fox et al. (2015) analyze the theoretical properties of the update rule derived using mutual information minimization and show that this framework can overcome the over-estimation issue inherent in the vanilla Q-learning update. In the past, Todorov (2009) has shown that using the KL divergence can convert the optimal control problem into one that is linearly solvable.
Infinite horizon MPC aims to learn a terminal cost function that can add global information to the finite horizon optimization. Rosolia & Borrelli (2017) learn a terminal cost as a control Lyapunov function and a safety set for the terminal state. These quantities are calculated using all previously visited states, and they assume the presence of a controller that can deterministically drive any state to the goal. Tamar et al. (2017) learn a cost shaping to make a short horizon MPC mimic the actions produced by a long horizon MPC offline. However, since their approach is to mimic a longer horizon MPC, the performance of the learner is fundamentally limited by the performance of the longer horizon MPC. On the contrary, learning an optimal value function as the terminal cost can potentially lead to close-to-optimal performance.
Using local optimization is an effective way of improving an imperfect value function, as noted in the RL literature by Anthony et al. (2017); Lowrey et al. (2018); Silver et al. (2016; 2017); Sun et al. (2018). However, these approaches assume that a perfect model of the system is available. In order to make the policy work on the real system, we argue that it is essential to learn a value function from real data and utilize local optimization to stabilize learning.
3 PRELIMINARIES
3.1 REINFORCEMENT LEARNING WITH ENTROPY REGULARIZATION
A Markov Decision Process (MDP) is defined by the tuple (S, A, c, P, γ), where S is the state space, A is the action space, c is a one-step cost function, P is the space of transition functions and γ is a discount factor. Let P ∈ P be a particular transition function. A closed-loop policy π(a|s) is a distribution over actions given state. Given a policy π and a prior policy π̄, the KL divergence between them at a state is given by KL(π(a|s)||π̄(a|s)) = E_π[log(π(a|s)/π̄(a|s))]. Entropy-regularized RL (Fox et al., 2015) aims to optimize the following objective
$$\pi^{*} = \arg\min_{\pi}\; \mathbb{E}_{\pi, P}\!\left[\sum_{t=1}^{\infty} \gamma^{t-1}\left(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\right)\right] \quad \forall\, s_0 \in S \qquad (1)$$
where π_t and π̄_t are shorthand for π(a_t|s_t) and π̄(a_t|s_t) respectively, and λ is the temperature parameter that penalizes the deviation of π from π̄. For a policy π, we can define the soft value and action-value functions
$$V^{\pi}(s) = \mathbb{E}_{\pi, P}\!\left[\sum_{t=1}^{\infty} \gamma^{t-1}\left(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \| \bar{\pi}_t)\right) \,\Big|\, s_0 = s\right] \qquad Q^{\pi}(s, a) = c(s, a) + \gamma\, \mathbb{E}_{s' \sim P(s'|s,a)}\left[V^{\pi}(s')\right] \qquad (2)$$
Given a horizon of H timesteps, we can use the above definitions to write the value functions as
$$
\begin{aligned}
V^{\pi}(s) &= \mathbb{E}_{\pi, P}\!\left[\sum_{t=1}^{H-1} \gamma^{t-1}\left(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \| \bar{\pi}_t)\right) + \gamma^{H-1} V^{\pi}(s_H) \,\Big|\, s_1 = s\right] \\
Q^{\pi}(s, a) &= c(s, a) + \mathbb{E}_{\pi, P}\!\left[\sum_{t=2}^{H-1} \gamma^{t-1}\left(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \| \bar{\pi}_t)\right) + \gamma^{H-1}\left(\lambda\, \mathrm{KL}(\pi_H \| \bar{\pi}_H) + Q(s_H, a_H)\right) \Big|\, s_1 = s, a_1 = a\right]
\end{aligned} \qquad (3)
$$
It is straightforward to verify that V^π(s) = E_{a∼π}[λ log(π(a|s)/π̄(a|s)) + Q^π(s, a)]. The objective in Eq. (1) can equivalently be written as
$$\pi^{*} = \arg\min_{\pi}\; V^{\pi}(s) \quad \forall\, s \in S \qquad (4)$$
This optimization can be performed either by policy gradient methods that aim to find the optimal policy π ∈ Π via stochastic gradient descent (Schulman et al., 2017; Williams, 1992) or value based methods that try to iteratively approximate the value function of the optimal policy. In either case, the output of solving the above optimization is a global closed-loop control policy π∗(a|s).
3.2 INFORMATION THEORETIC MPC
Solving the above optimization can be prohibitively expensive and hard to accomplish online. In contrast to RL, MPC performs online optimization of a simple policy class with a truncated horizon. This process effectively creates a closed-loop controller. In order to do so, MPC algorithms such as MPPI (Williams et al., 2017) use an approximate dynamics model P̂ , which can be deterministic. This is the case when using a simulator such as MuJoCo (Todorov et al., 2012) as the dynamics model. At timestep t, starting from the current state st, an open loop sequence of actions A = (at, at+1, . . . at+H) is sampled from the control distribution denoted by π(A). The objective is to find an optimal sequence of actions to solve
$$A^{*} = \arg\min_{A}\; \mathbb{E}_{\pi(A)}\!\left[\sum_{l=t}^{t+H-1} \gamma^{l-t} c(s_l, a_l) + \lambda\, \mathrm{KL}(\pi_l \| \bar{\pi}_l) + \gamma^{H-1}\left(c_f(s_{t+H}, a_{t+H}) + \lambda\, \mathrm{KL}(\pi_{t+H} \| \bar{\pi}_{t+H})\right)\right] \qquad (5)$$
where c_f(s_{t+H}, a_{t+H}) is a terminal cost function and π̄(A) is the passive dynamics of the system, i.e., the distribution over actions produced when the control input is zero. The first action in the sequence is then executed on the system and the optimization is performed again from the resulting next state. The re-optimization and entropy regularization help in mitigating model bias and optimization inaccuracies by avoiding overcommitment to the current estimate of the cost. A shortcoming of the MPC procedure is the finite horizon. This is especially pronounced in tasks with sparse rewards, where a short horizon can make the agent myopic to future rewards. To mitigate this, an approach known as infinite horizon MPC sets the terminal cost c_f as a value function that adds global information to the problem.
In the next section, we build our approach by focusing on the MPPI algorithm and its relationship with entropy regularized reinforcement learning. Specifically, we use the definitions of the soft value functions from Eq. (2) to derive an optimal Boltzmann distribution over H-step actions that optimally solves the infinite horizon control problem. This helps us derive the MPPI update rule from Williams et al. (2017) for the infinite horizon case, which then leads to our Model Predictive Q Learning (MPQ) algorithm, which utilizes a predictive model for Q updates and stochastic optimal control as the policy. In the case where H = 1, the algorithm is equivalent to soft Q-learning (Haarnoja et al., 2018) or G-learning (Fox et al., 2015).
4 APPROACH
4.1 OPTIMAL H-STEP BOLTZMANN DISTRIBUTION
Let π(A) and π̄(A) be the joint control distribution and prior over the H-horizon open-loop actions. The distributions are assumed to be independent over timesteps, i.e., π(a_1 ... a_H) = ∏_{t=1}^{H} π_t, where
π_t is shorthand for π(a_t). Since P̂ is deterministic, the following equations hold
$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[\lambda \log(\pi/\bar{\pi}) + Q^{\pi}(s, a)\right] \qquad Q^{\pi}(s, a) = c(s, a) + \gamma V^{\pi}(s') \qquad (6)$$
For clarity, we consider γ = 1. Substituting from Eq. (3) for Qπ(s, a)
$$
\begin{aligned}
V^{\pi}(s) &= \mathbb{E}_{\pi_1 \ldots \pi_H}\!\left[\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \sum_{t=1}^{H} \log(\pi_t/\bar{\pi}_t) + Q^{\pi}(s_H, a_H)\right] \\
&= \mathbb{E}_{\pi_1 \ldots \pi_H}\!\left[\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \log \prod_{t=1}^{H} (\pi_t/\bar{\pi}_t) + Q^{\pi}(s_H, a_H)\right] \\
&= \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \log(\pi/\bar{\pi}) + Q^{\pi}(s_H, a_H)\right]
\end{aligned} \qquad (7)
$$
Consider the following distribution over the H-horizon
$$\pi = \frac{1}{\eta} \exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^{\pi}(s_H, a_H)\right)\right) \bar{\pi}(a_1 \ldots a_H) \qquad (8)$$
where η is a normalization constant given by
$$\eta = \mathbb{E}_{\bar{\pi}(a_1 \ldots a_H)}\!\left[\exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^{\pi}(s_H, a_H)\right)\right)\right] \qquad (9)$$
We show that this is the optimal control distribution as∇V π(s) = 0. Substituting Eq. (8) in (7)
$$
\begin{aligned}
V^{\pi}(s) &= \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{H-1} c(s_t, a_t) - \lambda \log \eta - \sum_{t=1}^{H-1} c(s_t, a_t) - Q^{\pi}(s_H, a_H) + Q^{\pi}(s_H, a_H)\right] \\
&= \mathbb{E}_{\pi}\left[-\lambda \log \eta\right]
\end{aligned}
$$
Since η is a constant, we have V π(s) = −λ log η. Hence for π in Eq. (8), the soft value function is a constant with gradient zero given by
$$V^{\pi^{*}}(s) = -\lambda \log \mathbb{E}_{\bar{\pi}(a_1 \ldots a_H)}\!\left[\exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^{\pi}(s_H, a_H)\right)\right)\right] \qquad (10)$$
which is often referred to in optimal control literature as the “free energy” of the system (Theodorou & Todorov, 2012; Williams et al., 2017). For H=1, Eq. (10) takes the form of the soft value function from Fox et al. (2015) and Haarnoja et al. (2018).
4.2 INFINITE HORIZON MPPI UPDATE RULE
Similar to Williams et al. (2017), we derive the MPPI update rule for online policy optimization. Since sampling actions from the optimal control distribution in Eq. (8) is intractable, we consider control policies π(A) ∈ Π which are easy to sample from. We then optimize for a vector of H control inputs U , such that the resulting action distribution minimizes the KL divergence with the optimal policy
$$U^{*} = \arg\min_{\pi(A)}\; \mathrm{KL}\left(\pi^{*}(A) \,\|\, \pi(A)\right) \qquad (11)$$
The objective can be expanded out as
$$
\begin{aligned}
\mathrm{KL}\left(\pi^{*}(A) \| \pi(A)\right) &= \int_{A} \pi^{*}(A) \log \frac{\pi^{*}(A)}{\pi(A)}\, dA = \int_{A} \pi^{*}(A) \log \frac{\pi^{*}(A)}{\bar{\pi}(A)} \frac{\bar{\pi}(A)}{\pi(A)}\, dA \\
&= \int_{A} \pi^{*}(A) \log \frac{\pi^{*}(A)}{\bar{\pi}(A)}\, dA - \int_{A} \pi^{*}(A) \log \frac{\pi(A)}{\bar{\pi}(A)}\, dA
\end{aligned} \qquad (12)
$$
Since the first term does not depend on the control input, we can remove it from the optimization
$$U^{*} = \arg\max_{\pi(A)} \int_{A} \pi^{*}(A) \log \frac{\pi(A)}{\bar{\pi}(A)}\, dA \qquad (13)$$
Consider Π to be independent multivariate Gaussians over the sequence of H controls with constant covariance Σ at each timestep. We can write the control distribution and prior as follows
$$\pi(A) = \frac{1}{Z} \prod_{t=1}^{H} \exp\!\left(-\frac{1}{2}(u_t - a_t)^{T} \Sigma^{-1} (u_t - a_t)\right) \qquad \bar{\pi}(A) = \frac{1}{Z} \prod_{t=1}^{H} \exp\!\left(-\frac{1}{2} a_t^{T} \Sigma^{-1} a_t\right) \qquad (14)$$
where u_t and a_t are the control inputs and actions at timestep t and Z is the normalization constant. Here the prior corresponds to the passive dynamics of the system (Theodorou & Todorov, 2012; Williams et al., 2017), although other choices of prior are possible. Substituting in Eq. (13) we get
$$U^{*} = \arg\max_{\pi(A)} \int \pi^{*}(A) \left(\sum_{t=1}^{H} -\frac{1}{2} u_t^{T} \Sigma^{-1} u_t + u_t^{T} \Sigma^{-1} a_t\right) dA \qquad (15)$$
The objective can be simplified to the following by integrating out the probability in the first term
$$\sum_{t=1}^{H} -\frac{1}{2} u_t^{T} \Sigma^{-1} u_t + u_t^{T} \int \pi^{*}(A)\, \Sigma^{-1} a_t\, dA \qquad (16)$$
Since this is a concave function with respect to every u_t, we can find the maximum by setting its gradient with respect to u_t to zero and solving for the optimal u_t^*
$$u_t^{*} = \int \pi^{*}(A)\, a_t\, dA = \int \pi(A)\, \frac{\pi^{*}(A)}{\bar{\pi}(A)} \frac{\bar{\pi}(A)}{\pi(A)}\, a_t\, dA = \mathbb{E}_{\pi(A)}\left[w(A)\, a_t\right] \qquad (17)$$
where the second equality comes from importance sampling to convert the optimal controls into an expectation over the control distribution instead of the optimal distribution, which is impossible to sample from. The importance weight w(A) can be written as follows (substituting π* from Eq. (8))
$$w(A) = \frac{1}{\eta} \exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^{\pi^{*}}(s_H, a_H)\right)\right) \frac{\bar{\pi}(A)}{\pi(A)} \qquad (18)$$
Making the change of variables u_t + ε_t = a_t for a noise sequence E = (ε_1 ... ε_H) sampled from independent Gaussians with zero mean and covariance Σ, we get
$$
\begin{aligned}
w(E) &= \frac{1}{\eta} \exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, u_t + \varepsilon_t) + \lambda \log \frac{\pi(U + E)}{\bar{\pi}(U + E)} + Q^{\pi^{*}}(s_H, u_H + \varepsilon_H)\right)\right) \\
&= \frac{1}{\eta} \exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, u_t + \varepsilon_t) + \lambda \sum_{t=1}^{H} \frac{1}{2} u_t^{T} \Sigma^{-1}(u_t + 2\varepsilon_t) + Q^{\pi^{*}}(s_H, u_H + \varepsilon_H)\right)\right)
\end{aligned} \qquad (19)
$$
Note that η is the optimal H-step free energy derived in Eq. (10) and can be estimated from N Monte-Carlo samples as
$$\eta = \sum_{n=1}^{N} \exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, u_t + \varepsilon_t^{n}) + \lambda \sum_{t=1}^{H} \frac{1}{2} u_t^{T} \Sigma^{-1}(u_t + 2\varepsilon_t^{n}) + Q^{\pi^{*}}(s_H, u_H + \varepsilon_H^{n})\right)\right) \qquad (20)$$
We can form the following iterative update rule where at every iteration i the sampled control sequence is updated according to
$$u_t^{i+1} = u_t^{i} + \alpha \sum_{n=1}^{N} w(E^{n})\, \varepsilon_t^{n} \qquad (21)$$
where α is a step-size parameter as proposed by Wagener et al. (2019). This gives us the infinite horizon MPPI update rule. For H = 1, this corresponds to soft Q-learning, where stochastic optimization is performed to solve for the optimal action online. We now develop a soft Q-learning algorithm that utilizes infinite horizon MPPI to generate actions as well as Q-targets.
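For illustration, a NumPy sketch of one such update is given below (ours, not the authors' implementation); it assumes a diagonal covariance with a single scalar sigma, a rollout helper that simulates the approximate model P̂ for H steps, and a learned q_fn standing in for Q^{π*}:

```python
import numpy as np

def mppi_update(U, s0, rollout, q_fn, sigma, lam, alpha, N):
    """Hypothetical sketch of Eqs. (19)-(21) for one MPPI iteration."""
    H, d = U.shape
    E = np.random.randn(N, H, d) * np.sqrt(sigma)       # noise sequences
    costs = np.zeros(N)
    for n in range(N):
        run_cost, s_H, a_H = rollout(s0, U + E[n])      # biased model rollout
        ctrl_cost = 0.5 * np.sum(U * (U + 2.0 * E[n])) / sigma
        costs[n] = run_cost + lam * ctrl_cost + q_fn(s_H, a_H)
    w = np.exp(-(costs - costs.min()) / lam)            # weights as in Eq. (19)
    w /= w.sum()                                        # eta estimated per Eq. (20)
    return U + alpha * np.einsum('n,nhd->hd', w, E)     # update rule, Eq. (21)
```

Subtracting the minimum cost before exponentiating is a standard numerical-stability trick; it cancels in the normalization and does not change the weights.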
4.3 THE INFORMATION THEORETIC MODEL PREDICTIVE Q-LEARNING ALGORITHM
Since we do not have access to Q^{π*}, we cannot estimate the importance weight in Eq. (19) exactly. Hence, we consider Q functions parameterized by θ, denoted by Q_θ(s, a). Similar to deep Q-learning algorithms, we maintain a replay buffer (Mnih et al., 2015) and update parameters by stochastic gradient descent on the loss L = (1/K) Σ_{i=1}^{K} (y_i − Q_θ(s_i, a_i))² for a batch of K experience tuples (s, a, c, s′) sampled from the buffer, where the targets y_i are given by
$$y = c(s, a) - \lambda \log \mathbb{E}_{\pi^{*}(a_1 \ldots a_H)}\!\left[\exp\!\left(\frac{-1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \log \frac{\pi^{*}}{\bar{\pi}} + Q_{\theta}(s_H, a_H)\right)\right) \Bigg|\, s_1 = s'\right] \qquad (22)$$
Since the value function updates are performed offline, we can utilize a large amount of computation (Tamar et al., 2017) to calculate π*(a_1 ... a_H). In our case it is obtained by performing the infinite horizon MPPI update in Eq. (21) for multiple iterations starting from state s′. This allows for directed exploration at a state, which leads to a better approximation of the free energy (which is akin to approaches such as Covariance Matrix Adaptation, except MPPI does not adapt the covariance). This especially helps in early stages of learning by providing better quality targets than a random Q function. Intuitively, this update rule leverages the biased dynamics model P̂ for H steps and a soft Q function at the end, learned from interactions with the real system.
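A sketch of this target computation (ours; sample_costs is an assumed helper that returns the N sampled H-step costs including the terminal Q_θ term) is:

```python
import numpy as np

def q_target(c, s_next, U0, mppi_update_fn, sample_costs, iters=3, lam=1.0):
    """Hypothetical sketch of Eq. (22) with gamma = 1 as in the derivation."""
    U = U0
    for _ in range(iters):
        U = mppi_update_fn(U, s_next)     # offline policy optimization at s'
    costs = sample_costs(s_next, U)       # N Monte-Carlo H-step costs + Q_theta
    free_energy = -lam * np.log(np.mean(np.exp(-costs / lam)))
    return c + free_energy                # soft Q-learning target y
```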
At every timestep t during online rollouts, a H-horizon sequence of actions is optimized using a single iteration of infinite horizon MPPI update rule in Eq. (21) and the first action is executed on the system. Online optimization with predictive models can lookahead to produce better actions than acting greedily with respect to the biased Q function and makes ad-hoc exploration strategies such as -greedy unnecessary. Using predictive models for generating value targets and online policy optimization helps accelerate convergence as we demonstrate in our experiments in the next section. Algorithm 1 shows the complete MPQ algorithm.
A closely related approach in the literature is Lowrey et al. (2018), which also uses online MPC and offline value function learning; however, they assume access to the true dynamics of the system and do not explore the connection between MPPI and entropy regularized RL, and hence do not use free energy targets, even though they use MPPI in their implementation.
Algorithm 1: MPQ
Input: Approximate model P̂, initial Q function parameters θ_1, experience buffer D
Parameters: Number of episodes N, length of episode T, planning horizon H, number of update episodes N_update, minibatch size K, number of minibatches M
1: for i = 1 ... N do
2:   for t = 1 ... T do
3:     (a_t, ..., a_{t+H}) ← infinite horizon MPPI (Eq. (21))
4:     Execute a_t on the real system to obtain c(s_t, a_t) and next state s_{t+1}
5:     D ← D ∪ (s_t, a_t, c, s_{t+1})
6:   if i % N_update == 0 then
7:     Sample M minibatches of size K from D
8:     Generate targets using Eq. (22) and update parameters to θ_{i+1}
9: return θ_N or best θ on validation
5 EXPERIMENTS
We perform experiments to test the efficacy of MPQ in overcoming the shortcomings of stochastic optimal control and model-free RL in terms of convergence rate, computational requirements and model bias. We also compare MPQ against domain randomization for learning policies that perform well on systems for which accurate models are not known.
5.1 EXPERIMENTAL SETUP
We test our approach on sim-to-sim continuous control tasks based on the MuJoCo simulator (Todorov et al., 2012) to study the properties of the algorithm in a controlled manner. The agent is not provided with the true dynamics parameters, but a uniform distribution over them with a biased mean and added noise. This serves as a reasonable approximation of model bias due to inaccurate measurements of physical quantities. Details of the tasks considered are as follows
1. PENDULUMSWINGUP: the agent tries to swing up and stabilize a pendulum by applying torque on the hinge. The agent is provided with a distribution over the mass and length of the pendulum. The state of the system is given by (Θ, Θ̇), where Θ is the angular displacement. The cost function penalizes deviation from the upright position and angular velocity. The initial state of the system is randomized after every episode, which is 10 seconds long.
2. BALLINCUPSPARSE: a sparse version of the ball-in-cup task inspired by Tassa et al. (2018). Given a cup and a spherical ball attached by a tendon, the goal is to swing and catch the ball. The agent can actuate motors on the two slide joints on the cup and is provided with a biased distribution over the mass of the ball, its moment of inertia and the stiffness of the tendon. A cost of 1 is incurred at every timestep and 0 if the ball is in the cup. The position of the ball is randomized after every episode, which is 4 seconds long. An episode is successful if the agent catches the ball in the cup.
3. FETCHPUSHBLOCK: proposed by Plappert et al. (2018), the agent position-controls the end-effector of a simulated Fetch robot to push a block to a goal location on the table. The cost is the distance between the center of mass of the block and the goal. We provide the agent a biased distribution over the mass, moment of inertia, friction coefficients and size of the object. An episode is considered successful if the agent gets the block within 5 cm of the goal in 4 seconds. The positions of both block and goal are randomized after every episode.
4. FRANKADRAWEROPEN: the agent velocity-controls a 7-DOF Franka Panda arm to open a drawer on a cabinet. A simple cost function based on the Euclidean distance and relative orientation of the end-effector with respect to the handle and the displacement of the slide joint on the drawer is used. The agent is provided a biased distribution over the damping and friction loss of the robot and drawer joints. Every episode is 4 seconds long, after which the agent's starting configuration is randomized. Success corresponds to opening the drawer within 1 cm of the target displacement.
The parameters we selected to randomize are reasonable in real-world scenarios, since estimating quantities like moment of inertia and friction coefficients is especially error-prone. All our experiments are performed on a desktop with 12 Intel Core i7-3930K @ 3.20GHz CPUs and 32 GB RAM, with only a few hours of CPU training. Q functions are parameterized with feed-forward neural networks that take as input an observation vector and action. Refer to appendix A.1 for a detailed explanation of the tasks.
5.2 BASELINES
We compare MPQ with a fixed horizon H against three baselines: MPPI using the same horizon as MPQ and no terminal value function, MPPI using a longer horizon, and SOFTQLEARNING. For SOFTQLEARNING we additionally use a target network to stabilize learning, whereas MPQ does not use target networks.
5.3 ANALYSIS OF OVERALL PERFORMANCE
5.3.1 COMPARISON OF MPQ WITH MPPI AND SOFT Q-LEARNING
We test the hypotheses that (1) using a soft Q function as the terminal cost can improve MPPI performance even with a shorter horizon, (2) learning from real data can mitigate the effects of model error by adapting to the true system dynamics, and (3) finite horizon optimization leads to faster convergence compared to soft Q-learning, especially in sparse reward tasks. Fig. 1 shows the training curves for MPQ versus soft Q-learning. Online optimization is able to improve upon the inaccuracies of the Q function and lead to faster convergence. In sparse reward tasks such as BALLINCUPSPARSE and the high-dimensional FETCHPUSHBLOCK and FRANKADRAWEROPEN, SOFTQLEARNING is unable to learn a consistent policy, whereas MPQ improves very rapidly. Additionally, an MPQ agent with a short horizon consistently outperforms MPPI with a much longer horizon in all tasks. This can be attributed to (1) larger model bias in longer horizon optimization, (2) the hardness of optimizing longer sequences, and (3) the global information encapsulated in the Q function. Using a shorter horizon has added computational benefits as well. Since the Q function is learned using data generated from the true system parameters, it is not affected by model bias. In FETCHPUSHBLOCK, the agent outperforms MPPI with H=64 within the first 30 episodes of training, which corresponds to roughly 2 minutes of experience in simulation with the true parameters. In contrast, SOFTQLEARNING barely ever moves the block, and MPPI with H=10 succeeds only when the arm is close to the initial position of the block. Similarly, in FRANKADRAWEROPEN, the agent with H=10 achieves a success rate more than 5 times that of MPPI with H=10 and outperforms MPPI with H=64 as well. The consistent results across all the different tasks demonstrate the robustness and scalability of MPQ.
5.3.2 BENEFIT OF MPQ OVER DOMAIN RANDOMIZATION
Domain randomization (DR) techniques aim to make the policy learned in simulation robust to modeling errors by randomizing the simulation parameters using manually chosen distributions. However, such policies can be far from optimal under the true system parameters, as the learned Q function is inherently biased, whereas a Q function learned using rollouts from the real system can overcome model bias. We validate this hypothesis on the BALLINCUPSPARSE task by taking a DR approach inspired by Peng et al. (2018). Simulated rollouts are generated by sampling different parameters at every timestep from a broad range of dynamics parameters in Table 1, whereas real system rollouts use the true parameters. The average success rate reported in Table 2 demonstrates that a Q function learned solely using DR is unable to generalize to the true system parameters, and MPQ achieves more than twice the success rate when learned on the real system.
6 DISCUSSION
In this work we have presented a theoretical connection between information theoretic MPC and soft Q learning approaches that naturally provides an algorithm to combine stochastic optimal control and model-free reinforcement learning. The theoretical insight not only ties together the different fields, but opens avenues to designing pragmatic RL algorithms that leverage the benefits of both. However, some important questions are yet to be answered. The optimal horizon for MPC is inextricably tied with the model error and optimization artifacts. Investigating this dependence in a principled manner is important for real-world applications. Another interesting avenue of research is characterizing the performance of a parameterized Q function and using it to adapt the horizon of MPC rollouts for smarter exploration.
A APPENDIX
A.1 FURTHER EXPERIMENTAL DETAILS
The learned Q function takes as input the current action and an observation vector per task:
1. PENDULUMSWINGUP: [cos(Θ), sin(Θ), Θ̇] (3 dim)
2. BALLINCUPSPARSE: [x_ball, x_target, ẋ_ball, ẋ_target, x_target − x_ball, cos(Θ), sin(Θ)] (12 dim), where Θ is the angle of the line joining the ball and the target.
3. FETCHPUSHBLOCK: [x_gripper, x_obj, x_obj − x_grip, gripper opening, rot_obj, ẋ_obj, ω_obj, gripper opening velocity, ẋ_gripper, d(gripper, obj), x_goal − x_obj, d(goal, obj), x_goal] (33 dim)
4. FRANKADRAWEROPEN: [xee, xh, xh − xee, ẋee, ẋh, quatee, quath, drawerdisp, d(ee, h), dquat(ee, h), dangee,h] (39 dim)
For all our experiments we parameterize the Q functions with feedforward neural networks with two hidden layers of 100 units each and tanh activations. We use Adam (Kingma & Ba, 2014) optimization with a learning rate of 0.001. For generating value function targets in Eq. (22), we use 3 iterations of MPPI optimization, except for FRANKADRAWEROPEN where we use 1. The MPPI parameters used are listed in Table 3. | 1. What is the main contribution of the paper, and how does it extend previous work in model-based RL?
2. How effective is the proposed algorithm compared to other methods in model-based RL, specifically MPPI and soft Q-learning?
3. Are there any concerns regarding the novelty of the paper's content, particularly in relation to previous works such as MPPI?
4. How convincing are the empirical experiments, and what further comparisons or evaluations would strengthen the paper's claims?
5. Are there any minor issues or questions regarding specific parts of the paper, such as the derivation from 15-16 or certain claims without justification? | Review | Review
In this paper, the authors propose an algorithm that introduces model-free reinforcement learning (RL) into model predictive control (MPC), a representative algorithm in model-based RL, to overcome the finite-horizon issue in existing MPC. The authors evaluate the algorithm on three environments and demonstrate improved performance compared to model predictive path integral (MPPI) control and soft Q-learning.
1, The major issue of this paper is its novelty. The proposed algorithm is a straightforward extension of MPPI [1], which adds a Q-function to the finite accumulated reward to predict future rewards out to an infinite horizon. The eventual algorithm ends up as a straightforward combination of MPPI and DQN. The algorithm derivation in Sec. 4 follows almost exactly as in [1] without appropriate reference.
2, The empirical comparison is quite weak. The algorithm is only tested on three environments, and only one (Pendulum) is from the MuJoCo benchmark. Without a more comprehensive comparison on other MuJoCo environments, the empirical evaluation is not convincing.
Minors:
1, The derivation from 15-16 is wrong. The first term should be negative.
There are many claims without justification. For example:
"Solving the above optimization can be prohibitively expensive and hard to accomplish online."
- This is not true. There is already plenty of work solving the entropy-regularized MDP online [2, 3, 4] and achieving good empirical performance.
"The re-optimization and entropy regularization helps in mitigating model-bias and
inaccuracies with optimization by avoiding overcommitment to the current estimate of the cost."
- There is no evidence that the entropy regularization will reduce the model-bias. Actually, as discussed in [4, 5], the entropy regularization will also incur some extra bias.
[1] Grady Williams, Nolan Wagener, Brian Goldfain, Paul Drews, James M Rehg, Byron Boots, and Evangelos A Theodorou. Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1714–1721. IEEE, 2017.
[2] Haarnoja, Tuomas, Aurick Zhou, Pieter Abbeel, and Sergey Levine. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." arXiv preprint arXiv:1801.01290 (2018).
[3] Nachum, O., Norouzi, M., Xu, K. and Schuurmans, D., 2017. Bridging the gap between value and policy based reinforcement learning. In Advances in Neural Information Processing Systems (pp. 2775-2785).
[4] Dai, Bo, Albert Shaw, Lihong Li, Lin Xiao, Niao He, Zhen Liu, Jianshu Chen, and Le Song. "SBEED: Convergent reinforcement learning with nonlinear function approximation." arXiv preprint arXiv:1712.10285 (2017).
[5] Geist, Matthieu, Bruno Scherrer, and Olivier Pietquin. "A Theory of Regularized Markov Decision Processes." arXiv preprint arXiv:1901.11275 (2019). |
ICLR | Title
Information Theoretic Model Predictive Q-Learning
Abstract
Model-free reinforcement learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply, and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real-world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, model predictive control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy-regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real robots in a systematic manner.
1 INTRODUCTION
Deep reinforcement learning algorithms have recently generated great interest due to their successful application to a range of difficult problems, including the game of Go (Silver et al., 2016) and high-dimensional control tasks such as humanoid locomotion (Lillicrap et al., 2015; Schulman et al., 2015). While these methods are extremely general and can learn policies and value functions for complex tasks directly from data, they can also be sample inefficient, and partially optimized solutions can be arbitrarily poor. These challenges severely restrict RL's applicability to real systems such as robots due to data collection challenges and safety concerns.
One straightforward way to mitigate these issues is to learn a policy or value function entirely in a high-fidelity simulator (Shah et al., 2017; Todorov et al., 2012) and then deploy the optimized policy on the real system. However, this approach can fail due to model bias, external disturbances, subtle differences in the real robot's hardware, and poorly modeled phenomena such as friction and contact dynamics. Sim-to-real transfer approaches based on domain randomization (Sadeghi & Levine, 2016; Tobin et al., 2017) and model ensembles (Kurutach et al., 2018; Shyam et al., 2019) aim to make the policy robust by training it to be invariant to varying dynamics. However, learning a globally consistent value function or policy is hard due to optimization issues such as local optima and covariate shift between the exploration policy used for learning the model and the actual control policy executed on the task (Ross & Bagnell, 2012).
Model predictive control (MPC) is a widely used method for generating feedback controllers that repeatedly re-optimize a finite horizon sequence of controls using an approximate dynamics model that predicts the effect of these controls on the system. The first control in the optimized sequence is executed on the real system and the optimization is performed again from the resulting next state. However, the performance of MPC can suffer due to approximate or simplified models and limited lookahead. Therefore the parameters of MPC, including the model and the horizon H, should be carefully tuned to obtain good performance. While using a longer horizon is generally preferred, real-time requirements may limit the amount of lookahead, and a biased model can result in compounding model errors.
In this work, we present an approach to RL that leverages the complementary properties of model-free reinforcement learning and model-based optimal control. Our proposed method views MPC as a way to simultaneously approximate and optimize a local Q function via simulation, and Q learning as a way to improve MPC using real-world data. We focus on the paradigm of entropy-regularized reinforcement learning, where the aim is to learn a stochastic policy that minimizes the cost-to-go as well as the KL divergence with respect to a prior policy. This approach enables faster convergence by mitigating the over-commitment issue in the early stages of Q-learning and better exploration (Fox et al., 2015). We discuss how this formulation of reinforcement learning has deep connections to information theoretic stochastic optimal control, where the objective is to find control inputs that minimize the cost while staying close to the passive dynamics of the system (Theodorou & Todorov, 2012). This helps both in injecting domain knowledge into the controller and in mitigating issues caused by over-optimizing the biased estimate of the current cost due to model error and the limited horizon of optimization. We explore this connection in depth and derive an infinite horizon information theoretic model predictive control algorithm based on Williams et al. (2017). We test our approach, called Model Predictive Q Learning (MPQ), on simulated continuous control tasks and compare it against information theoretic MPC and soft Q-learning (Haarnoja et al., 2017), where we demonstrate faster learning with fewer system interactions and better performance as compared to MPC and soft Q-learning, even in the presence of sparse rewards. The learned Q function allows us to truncate the MPC planning horizon, which provides additional computational benefits. Finally, we also compare MPQ against domain randomization (DR) on sim-to-sim tasks. We conclude that DR approaches can be sensitive to the hand-designed distributions used for randomizing parameters, which causes the learned Q function to be biased and suboptimal under the true system's parameters, whereas learning from data generated on the true system is able to overcome biases and adapt to the real dynamics.
2 RELATED WORK
Model predictive control has a rich history in robotics, ranging from control of mobile robots such as quadrotors (Desaraju & Michael, 2016) and aggressive autonomous vehicles (Wagener et al., 2019; Williams et al., 2017) to generating complex behaviors for high-dimensional systems such as contact-rich manipulation (Fu et al., 2016; Kumar et al., 2014) and humanoid locomotion (Erez et al., 2013). The success of MPC can largely be attributed to online policy optimization which helps mitigate model bias. The information theoretic view of MPC aims to find a policy at every timestep that minimizes the cost over a finite horizon as well as the KL-divergence with respect to a prior policy usually specified by the system’s passive dynamics (Theodorou & Todorov, 2012; Williams et al., 2017). This helps maintain exploratory behavior and avoid over-commitment to the current estimate of the cost function, which is biased due to modeling errors and a finite horizon. Sampling-based MPC algorithms (Wagener et al., 2019; Williams et al., 2017) are also highly parallelizable enabling GPU implementations that aid with real-time control. However, efficient MPC implementations still require careful system identification and extensive amounts of manual tuning.
Deep RL methods are extremely general and can optimize neural network policies from raw sensory inputs with little knowledge of the system dynamics. Both value-based and policy-based approaches (Schulman et al., 2015) have demonstrated excellent performance on complex control problems. These approaches, however, fall short on several accounts when applied to a real robotic system. First, they have high sample complexity, potentially requiring millions of interactions with the environment. This can be very expensive on a real robot, not least because the initial performance of the policy can be arbitrarily bad. Using random exploration methods such as ε-greedy can further aggravate this problem. Second, a value function or policy learned entirely in simulation inherits the biases of the simulator. Even if a perfect simulation is available, learning a globally consistent value function or policy is an extremely hard task, as noted in (Silver et al., 2016; Zhong et al., 2013). This can be attributed to local optima when using neural network representations or to the inherent biases in the Q learning update rules (Fox et al., 2015; Van Hasselt et al., 2016). In fact, it can be difficult to explain why Q-learning algorithms work or fail (Schulman et al., 2017).
Domain randomization aims to make policies learned in simulation more robust by randomizing simulation parameters during training, with the aim of making the policies invariant to potential parameter error (Peng et al., 2018; Sadeghi & Levine, 2016; Tobin et al., 2017). However, these policies are not adaptive to unmodeled effects, i.e., they take into account only aleatoric and not epistemic uncertainty. Also, such approaches are highly sensitive to the hand-designed distributions used for randomizing simulation parameters and can be highly suboptimal under the real system's parameters, for example, if a very large range of simulation parameters is used. Model-based approaches aim to use real data to improve the model of the system and then perform reinforcement learning or optimal control using the new model or an ensemble of models (Kurutach et al., 2018; Ross & Bagnell, 2012; Shyam et al., 2019). Although learning accurate models is a promising avenue, we argue that learning a globally consistent model is an extremely hard problem, and instead we should learn a policy that can rapidly adapt to experienced real-world dynamics.
The use of entropy regularization has been explored in RL and inverse RL for its better sample efficiency and exploration properties (Fox et al., 2015; Haarnoja et al., 2017; 2018; Schulman et al., 2017; Ziebart et al., 2008). This framework allows incorporating prior knowledge into the problem and learning multi-modal policies that can generalize across different tasks. Fox et al. (2015) analyze the theoretical properties of the update rule derived using mutual information minimization and show that this framework can overcome the over-estimation issue inherent in the vanilla Q-learning update. In the past, Todorov (2009) has shown that using the KL-divergence can convert the optimal control problem into one that is linearly solvable.
Infinite horizon MPC aims to learn a terminal cost function that can add global information to the finite horizon optimization. Rosolia & Borrelli (2017) learn a terminal cost as a control Lyapunov function and a safety set for the terminal state. These quantities are calculated using all previously visited states, and they assume the presence of a controller that can deterministically drive any state to the goal. Tamar et al. (2017) learn a cost shaping to make a short horizon MPC mimic the actions produced by a long horizon MPC offline. However, since their approach is to mimic a longer horizon MPC, the performance of the learner is fundamentally limited by the performance of the longer horizon MPC. In contrast, learning an optimal value function as the terminal cost can potentially lead to close to optimal performance.
Using local optimization is an effective way of improving an imperfect value function, as noted in the RL literature by Anthony et al. (2017); Lowrey et al. (2018); Silver et al. (2016; 2017); Sun et al. (2018). However, these approaches assume that a perfect model of the system is available. In order to make the policy work on the real system, we argue that it is essential to learn a value function from real data and utilize local optimization to stabilize learning.
3 PRELIMINARIES
3.1 REINFORCEMENT LEARNING WITH ENTROPY REGULARIZATION
A Markov Decision Process (MDP) is defined by the tuple (S, A, c, P, γ), where S is the state space, A is the action space, c is a one-step cost function, P is the space of transition functions, and γ is a discount factor. Let P ∈ P be a particular transition function. A closed-loop policy π(a|s) is a distribution over actions given state. Given a policy π and a prior policy π̄, the KL divergence between them at a state is given by KL(π(a|s) ‖ π̄(a|s)) = E_π[log(π(a|s)/π̄(a|s))]. Entropy-regularized RL (Fox et al., 2015) aims to optimize the following objective
$$\pi^* = \arg\min_{\pi} \mathbb{E}_{\pi, P}\Big[\sum_{t=1}^{\infty} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big)\Big] \quad \forall\, s_0 \in \mathcal{S} \qquad (1)$$
where π_t and π̄_t are shorthand for π(a_t|s_t) and π̄(a_t|s_t) respectively, and λ is the temperature parameter that penalizes deviation of π from π̄. For a policy π, we can define the soft value and action-value functions
$$V^\pi(s) = \mathbb{E}_{\pi, P}\Big[\sum_{t=1}^{\infty} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big) \,\Big|\, s_0 = s\Big] \qquad Q^\pi(s, a) = c(s, a) + \gamma\, \mathbb{E}_{s' \sim P(s'|s,a)}\big[V^\pi(s')\big] \qquad (2)$$
Given a horizon of H timesteps, we can use the above definitions to write the value functions as
$$V^\pi(s) = \mathbb{E}_{\pi, P}\Big[\sum_{t=1}^{H-1} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big) + \gamma^{H-1} V^\pi(s_H) \,\Big|\, s_1 = s\Big]$$
$$Q^\pi(s, a) = c(s, a) + \mathbb{E}_{\pi, P}\Big[\sum_{t=2}^{H-1} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big) + \gamma^{H-1}\big(\lambda\, \mathrm{KL}(\pi_H \,\|\, \bar{\pi}_H) + Q(s_H, a_H)\big) \,\Big|\, s_1 = s,\, a_1 = a\Big] \qquad (3)$$
It is straightforward to verify that V^π(s) = E_{a∼π}[λ log(π(a|s)/π̄(a|s)) + Q^π(s, a)]. The objective in Eq. (1) can equivalently be written as
$$\pi^* = \arg\min_{\pi} V^\pi(s) \quad \forall\, s \in \mathcal{S} \qquad (4)$$
This optimization can be performed either by policy gradient methods that aim to find the optimal policy π ∈ Π via stochastic gradient descent (Schulman et al., 2017; Williams, 1992) or value based methods that try to iteratively approximate the value function of the optimal policy. In either case, the output of solving the above optimization is a global closed-loop control policy π∗(a|s).
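To make the objective concrete, here is a minimal numerical sketch of the soft value relation V^π(s) = E_{a∼π}[λ log(π(a|s)/π̄(a|s)) + Q^π(s, a)] for a discrete action set; the Q-values and policies are illustrative placeholders, not numbers from the paper.

```python
import numpy as np

def soft_value(pi, pi_bar, q_values, lam):
    """V(s) = E_{a~pi}[ lam * log(pi/pi_bar) + Q(s, a) ] over discrete actions."""
    pi, pi_bar, q_values = (np.asarray(x, dtype=float)
                            for x in (pi, pi_bar, q_values))
    return float(np.sum(pi * (lam * np.log(pi / pi_bar) + q_values)))

# Illustrative numbers: 3 actions, uniform prior.
print(soft_value([0.7, 0.2, 0.1], [1/3, 1/3, 1/3], [1.0, 2.0, 4.0], lam=0.5))
```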
3.2 INFORMATION THEORETIC MPC
Solving the above optimization can be prohibitively expensive and hard to accomplish online. In contrast to RL, MPC performs online optimization of a simple policy class with a truncated horizon. This process effectively creates a closed-loop controller. In order to do so, MPC algorithms such as MPPI (Williams et al., 2017) use an approximate dynamics model P̂, which can be deterministic. This is the case when using a simulator such as MuJoCo (Todorov et al., 2012) as the dynamics model. At timestep t, starting from the current state s_t, an open-loop sequence of actions A = (a_t, a_{t+1}, …, a_{t+H}) is sampled from the control distribution denoted by π(A). The objective is to find an optimal sequence of actions that solves
$$A^* = \arg\min_{A}\, \mathbb{E}_{\pi(A)}\Big[\sum_{l=t}^{t+H-1} \gamma^{l-t} c(s_l, a_l) + \lambda\, \mathrm{KL}(\pi_l \,\|\, \bar{\pi}_l) + \gamma^{H-1}\big(c_f(s_{t+H}, a_{t+H}) + \lambda\, \mathrm{KL}(\pi_{t+H} \,\|\, \bar{\pi}_{t+H})\big)\Big] \qquad (5)$$
where c_f(s_{t+H}, a_{t+H}) is a terminal cost function and π̄(A) is the passive dynamics of the system, i.e., the distribution over actions produced when the control input is zero. The first action in the sequence is then executed on the system and the optimization is performed again from the resulting next state. The re-optimization and entropy regularization help in mitigating model bias and inaccuracies in optimization by avoiding overcommitment to the current estimate of the cost. A shortcoming of the MPC procedure is the finite horizon. This is especially pronounced in tasks with sparse rewards, where a short horizon can make the agent myopic to future rewards. To mitigate this, an approach known as infinite horizon MPC sets the terminal cost c_f to a value function that adds global information to the problem.
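The receding-horizon loop described above can be sketched in a few lines of Python; `optimize_sequence` is a placeholder for any H-step optimizer (e.g., the MPPI update derived in Section 4), and the environment interface is an assumption.

```python
def mpc_control_loop(env, optimize_sequence, horizon, episode_len):
    """Generic receding-horizon loop: optimize H actions, execute the first."""
    state = env.reset()
    for _ in range(episode_len):
        # Optimize (a_t, ..., a_{t+H-1}) from the current state using the
        # approximate model and the terminal cost c_f inside optimize_sequence.
        actions = optimize_sequence(state, horizon)
        state, cost = env.step(actions[0])   # execute only the first action
    return state
```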
In the next section, we build our approach by focusing on the MPPI algorithm and its relationship with entropy-regularized reinforcement learning. Specifically, we use the definitions of the soft value functions from Eq. (2) to derive an optimal Boltzmann distribution for H-step actions that optimally solves the infinite horizon control problem. This lets us derive the MPPI update rule from Williams et al. (2017) for the infinite horizon case, which then leads to our Model Predictive Q Learning (MPQ) algorithm, which utilizes a predictive model for Q updates and stochastic optimal control as the policy. In the case where H = 1, the algorithm is equivalent to soft Q learning (Haarnoja et al., 2018) or G-learning (Fox et al., 2015).
4 APPROACH
4.1 OPTIMAL H-STEP BOLTZMANN DISTRIBUTION
Let π(A) and π̄(A) be the joint control distribution and prior over H-horizon open-loop actions. The distributions are assumed to be independent over timesteps, i.e., π(a_1 … a_H) = ∏_{t=1}^{H} π_t, where π_t is shorthand for π(a_t). Since P̂ is deterministic, the following equations hold
$$V^\pi(s) = \mathbb{E}_{\pi}\big[\lambda \log(\pi/\bar{\pi}) + Q^\pi(s, a)\big] \qquad Q^\pi(s, a) = c(s, a) + \gamma V^\pi(s') \qquad (6)$$
For clarity, we consider γ = 1. Substituting from Eq. (3) for Q^π(s, a),
$$V^\pi(s) = \mathbb{E}_{\pi_1 \ldots \pi_H}\Big[\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \sum_{t=1}^{H} \log(\pi_t/\bar{\pi}_t) + Q^\pi(s_H, a_H)\Big]$$
$$= \mathbb{E}_{\pi_1 \ldots \pi_H}\Big[\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \log \prod_{t=1}^{H} (\pi_t/\bar{\pi}_t) + Q^\pi(s_H, a_H)\Big]$$
$$= \mathbb{E}_{\pi}\Big[\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \log(\pi/\bar{\pi}) + Q^\pi(s_H, a_H)\Big] \qquad (7)$$
Consider the following distribution over the H-horizon:
$$\pi = \frac{1}{\eta} \exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^\pi(s_H, a_H)\Big)\Big)\, \bar{\pi}(a_1 \ldots a_H) \qquad (8)$$
where η is a normalization constant given by
$$\eta = \mathbb{E}_{\bar{\pi}(a_1 \ldots a_H)}\Big[\exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^\pi(s_H, a_H)\Big)\Big)\Big] \qquad (9)$$
We show that this is the optimal control distribution since ∇V^π(s) = 0. Substituting Eq. (8) in (7),
$$V^\pi(s) = \mathbb{E}_{\pi}\Big[\sum_{t=1}^{H-1} c(s_t, a_t) - \lambda \log \eta - \sum_{t=1}^{H-1} c(s_t, a_t) - Q^\pi(s_H, a_H) + Q^\pi(s_H, a_H)\Big] = \mathbb{E}_{\pi}\big[-\lambda \log \eta\big]$$
Since η is a constant, we have V^π(s) = −λ log η. Hence for π in Eq. (8), the soft value function is a constant with gradient zero, given by
$$V^{\pi^*}(s) = -\lambda \log \mathbb{E}_{\bar{\pi}(a_1 \ldots a_H)}\Big[\exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^\pi(s_H, a_H)\Big)\Big)\Big] \qquad (10)$$
which is often referred to in optimal control literature as the “free energy” of the system (Theodorou & Todorov, 2012; Williams et al., 2017). For H=1, Eq. (10) takes the form of the soft value function from Fox et al. (2015) and Haarnoja et al. (2018).
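A minimal sketch of estimating the free energy in Eq. (10) by Monte Carlo sampling from the prior, assuming a deterministic approximate model `step`; all function names here are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def free_energy(s0, sample_prior_seq, step, cost, q_fn, H, lam, n_samples=64):
    """Monte Carlo estimate of Eq. (10): V*(s0) = -lam log E_prior[exp(-S/lam)]."""
    exps = []
    for _ in range(n_samples):
        actions = sample_prior_seq(H)       # a_1 .. a_H drawn from the prior
        s, path_cost = s0, 0.0
        for t in range(H - 1):
            path_cost += cost(s, actions[t])
            s = step(s, actions[t])         # deterministic approximate model
        exps.append(np.exp(-(path_cost + q_fn(s, actions[-1])) / lam))
    return -lam * np.log(np.mean(exps))
```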
4.2 INFINITE HORIZON MPPI UPDATE RULE
Similar to Williams et al. (2017), we derive the MPPI update rule for online policy optimization. Since sampling actions from the optimal control distribution in Eq. (8) is intractable, we consider control policies π(A) ∈ Π which are easy to sample from. We then optimize for a vector of H control inputs U , such that the resulting action distribution minimizes the KL divergence with the optimal policy
$$U^* = \arg\min_{\pi(A)} \mathrm{KL}\big(\pi^*(A) \,\|\, \pi(A)\big) \qquad (11)$$
The objective can be expanded out as
$$\mathrm{KL}\big(\pi^*(A) \,\|\, \pi(A)\big) = \int_A \pi^*(A) \log \frac{\pi^*(A)}{\pi(A)}\, dA = \int_A \pi^*(A) \log \Big(\frac{\pi^*(A)}{\bar{\pi}(A)} \cdot \frac{\bar{\pi}(A)}{\pi(A)}\Big)\, dA$$
$$= \int_A \pi^*(A) \log \frac{\pi^*(A)}{\bar{\pi}(A)}\, dA - \int_A \pi^*(A) \log \frac{\pi(A)}{\bar{\pi}(A)}\, dA \qquad (12)$$
Since the first term does not depend on the control input, we can remove it from the optimization
$$U^* = \arg\max_{\pi(A)} \int_A \pi^*(A) \log \frac{\pi(A)}{\bar{\pi}(A)}\, dA \qquad (13)$$
Consider Π to be independent multivariate Gaussians over the sequence of H controls, with constant covariance Σ at each timestep. We can write the control distribution and prior as follows
$$\pi(A) = \frac{1}{Z} \prod_{t=1}^{H} \exp\Big(-\frac{1}{2}(u_t - a_t)^T \Sigma^{-1} (u_t - a_t)\Big) \qquad \bar{\pi}(A) = \frac{1}{Z} \prod_{t=1}^{H} \exp\Big(-\frac{1}{2} a_t^T \Sigma^{-1} a_t\Big) \qquad (14)$$
where u_t and a_t are the control inputs and actions at timestep t, and Z is the normalization constant. Here the prior corresponds to the passive dynamics of the system (Theodorou & Todorov, 2012; Williams et al., 2017), although other choices of prior are possible. Substituting in Eq. (13) we get
$$U^* = \arg\max_{\pi(A)} \int \pi^*(A) \Big(\sum_{t=1}^{H} -\frac{1}{2} u_t^T \Sigma^{-1} u_t + u_t^T \Sigma^{-1} a_t\Big)\, dA \qquad (15)$$
The objective can be simplified to the following by integrating out the probability in the first term:
$$\sum_{t=1}^{H} -\frac{1}{2} u_t^T \Sigma^{-1} u_t + u_t^T \int \pi^*(A)\, \Sigma^{-1} a_t\, dA \qquad (16)$$
Since this is a concave function with respect to every u_t, we can find the maximum by setting the gradient with respect to u_t to zero and solving for the optimal u_t^*:
$$u_t^* = \int \pi^*(A)\, a_t\, dA = \int \pi(A)\, \frac{\pi^*(A)}{\bar{\pi}(A)}\, \frac{\bar{\pi}(A)}{\pi(A)}\, a_t\, dA = \mathbb{E}_{\pi(A)}\Big[\frac{\pi^*(A)}{\bar{\pi}(A)}\, \frac{\bar{\pi}(A)}{\pi(A)}\, a_t\Big] = \mathbb{E}_{\pi(A)}\big[w(A)\, a_t\big] \qquad (17)$$
where the second equality comes from importance sampling, which converts the optimal controls into an expectation over the control distribution instead of the optimal distribution, which is impossible to sample from. The importance weight w(A) can be written as follows (substituting π* from Eq. (8)):
$$w(A) = \frac{1}{\eta}\, \exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, a_t) + Q^{\pi^*}(s_H, a_H)\Big)\Big)\, \frac{\bar{\pi}(A)}{\pi(A)} \qquad (18)$$
Making the change of variables a_t = u_t + ε_t for a noise sequence E = (ε_1, …, ε_H) sampled from independent Gaussians with zero mean and covariance Σ, we get
$$w(E) = \frac{1}{\eta}\, \exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, u_t + \epsilon_t) + \lambda \sum_{t=1}^{H} \frac{1}{2} u_t^T \Sigma^{-1} (u_t + 2\epsilon_t) + Q^{\pi^*}(s_H, u_H + \epsilon_H)\Big)\Big) \qquad (19)$$
Note that η is the optimal H-step free energy derived in Eq. (10), and it can be estimated from N Monte Carlo samples as
$$\eta \approx \sum_{n=1}^{N} \exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, u_t + \epsilon_t^n) + \lambda \sum_{t=1}^{H} \frac{1}{2} u_t^T \Sigma^{-1} (u_t + 2\epsilon_t^n) + Q^{\pi^*}(s_H, u_H + \epsilon_H^n)\Big)\Big) \qquad (20)$$
We can form the following iterative update rule, where at every iteration i the sampled control sequence is updated according to
$$u_t^{i+1} = u_t^i + \alpha \sum_{n=1}^{N} w(E^n)\, \epsilon_t^n \qquad (21)$$
where α is a step-size parameter as proposed by Wagener et al. (2019). This gives us the infinite horizon MPPI update rule. For H = 1, this corresponds to soft Q-learning, where stochastic optimization is performed to solve for the optimal action online. We now develop a soft Q-learning algorithm that utilizes infinite horizon MPPI to generate actions as well as Q-targets.
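Putting Eqs. (19)-(21) together, one iteration of the infinite horizon MPPI update can be sketched as follows. This is a hedged illustration, not the authors' implementation; the dynamics `step`, running cost, and terminal Q function are assumed to be supplied.

```python
import numpy as np

def mppi_update(U, s0, step, cost, q_fn, Sigma_inv, lam, alpha, n_samples):
    """One infinite-horizon MPPI iteration (Eqs. (19)-(21)); U has shape (H, d)."""
    H, d = U.shape
    cov = np.linalg.inv(Sigma_inv)
    eps = np.random.multivariate_normal(np.zeros(d), cov, size=(n_samples, H))
    scores = np.zeros(n_samples)
    for n in range(n_samples):
        s, run_cost = s0, 0.0
        for t in range(H - 1):
            run_cost += cost(s, U[t] + eps[n, t])
            s = step(s, U[t] + eps[n, t])          # rollout of the biased model
        # Control-cost term lam * sum_t 0.5 u^T Sigma^{-1} (u + 2 eps), Eq. (19).
        ctrl = lam * 0.5 * sum(U[t] @ Sigma_inv @ (U[t] + 2 * eps[n, t])
                               for t in range(H))
        scores[n] = run_cost + ctrl + q_fn(s, U[-1] + eps[n, -1])
    scores -= scores.min()                          # numerical stability
    w = np.exp(-scores / lam)
    w /= w.sum()                                    # self-normalization by eta
    return U + alpha * np.einsum("n,nhd->hd", w, eps)   # Eq. (21)
```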
4.3 THE INFORMATION THEORETIC MODEL PREDICTIVE Q-LEARNING ALGORITHM
Since we do not have access to Q^{π*}, we cannot estimate the importance weight in Eq. (19) exactly. Hence, we consider Q functions parameterized by θ, denoted by Q_θ(s, a). Similar to deep Q-learning algorithms, we maintain a replay buffer (Mnih et al., 2015) and update parameters by stochastic gradient descent on the loss L = (1/K) Σ_{i=1}^{K} (y_i − Q_θ(s_i, a_i))² for a batch of K experience tuples (s, a, c, s′) sampled from the buffer, where the targets y_i are given by
$$y = c(s, a) - \lambda \log \mathbb{E}_{\pi^*(a_1 \ldots a_H)}\Big[\exp\Big(-\frac{1}{\lambda}\Big(\sum_{t=1}^{H-1} c(s_t, a_t) + \lambda \log \frac{\pi^*(a_1 \ldots a_H)}{\bar{\pi}(a_1 \ldots a_H)} + Q_\theta(s_H, a_H)\Big)\Big) \,\Big|\, s_1 = s'\Big] \qquad (22)$$
Since the value function updates are performed offline, we can utilize a large amount of computation (Tamar et al., 2017) to calculate π*(a_1 … a_H). In our case it is obtained by performing the infinite horizon MPPI update in Eq. (21) for multiple iterations starting from state s′. This allows for directed exploration at a state, which leads to a better approximation of the free energy (akin to approaches such as Covariance Matrix Adaptation, except that MPPI does not adapt the covariance). This especially helps in the early stages of learning by providing better quality targets than a random Q function. Intuitively, this update rule leverages the biased dynamics model P̂ for H steps and a soft Q function, learned from interactions with the real system, at the end.
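In code, target generation then amounts to a one-step backup onto the MPPI free-energy estimate at the next state; a minimal sketch with γ = 1 as in the derivation, where `mppi_free_energy` is an assumed helper wrapping Eqs. (19)-(20).

```python
def q_target(s, a, s_next, cost, mppi_free_energy, n_iters=3):
    """One-step backup onto the MPPI free-energy estimate (Eq. (22), gamma=1)."""
    # mppi_free_energy runs n_iters MPPI iterations from s_next and returns the
    # importance-weighted free energy under the optimized control distribution.
    return cost(s, a) + mppi_free_energy(s_next, n_iters)
```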
At every timestep t during online rollouts, an H-horizon sequence of actions is optimized using a single iteration of the infinite horizon MPPI update rule in Eq. (21), and the first action is executed on the system. Online optimization with predictive models can look ahead to produce better actions than acting greedily with respect to the biased Q function, and makes ad-hoc exploration strategies such as ε-greedy unnecessary. Using predictive models for generating value targets and online policy optimization helps accelerate convergence, as we demonstrate in our experiments in the next section. Algorithm 1 shows the complete MPQ algorithm.
A closely related approach in the literature is Lowrey et al. (2018), which also uses online MPC and offline value function learning; however, they assume access to the true dynamics of the system and do not explore the connection between MPPI and entropy-regularized RL, and hence do not use free energy targets, even though they use MPPI in their implementation.
Algorithm 1: MPQ
Input: Approximate model P̂, initial Q function parameters θ_1, experience buffer D
Parameters: Number of episodes N, length of episode T, planning horizon H, number of update episodes N_update, minibatch size K, number of minibatches M
1: for i = 1 … N do
2:     for t = 1 … T do
3:         (a_t, …, a_{t+H}) ← infinite horizon MPPI update (Eq. (21))
4:         Execute a_t on the real system to obtain c(s_t, a_t) and next state s_{t+1}
5:         D ← D ∪ {(s_t, a_t, c, s_{t+1})}
6:     if i mod N_update == 0 then
7:         Sample M minibatches of size K from D
8:         Generate targets using Eq. (22) and update parameters to θ_{i+1}
9: return θ_N or the best θ on validation
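For reference, Algorithm 1 can be sketched compactly in Python; the MPPI optimizer, environment interface, and Q-update routine are assumed to follow the sketches above, and the unbounded list buffer is an illustrative simplification of a replay buffer.

```python
import random

def train_mpq(env, mppi, update_q, n_episodes, T, n_update, K, M):
    """Sketch of Algorithm 1: act with infinite-horizon MPPI, fit Q offline."""
    buffer = []
    for episode in range(1, n_episodes + 1):
        s = env.reset()
        for _ in range(T):
            actions = mppi(s)                  # one MPPI iteration, Eq. (21)
            s_next, c = env.step(actions[0])   # execute first action for real
            buffer.append((s, actions[0], c, s_next))
            s = s_next
        if episode % n_update == 0:
            for _ in range(M):                 # M minibatches of size K
                batch = random.sample(buffer, min(K, len(buffer)))
                update_q(batch)                # Eq. (22) targets + SGD step
    return buffer
```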
5 EXPERIMENTS
We perform experiments to test the efficacy of MPQ in overcoming the shortcomings of stochastic optimal control and model-free RL in terms of convergence rate, computational requirements, and model bias. We also compare MPQ against domain randomization for learning policies that perform well on systems for which accurate models are not known.
5.1 EXPERIMENTAL SETUP
We test our approach on sim-to-sim continuous control tasks based on the MuJoCo simulator (Todorov et al., 2012) to study the properties of the algorithm in a controlled manner. The agent is not provided with the true dynamics parameters, but rather with a uniform distribution over them with a biased mean and added noise. This serves as a reasonable approximation of model bias due to inaccurate measurements of physical quantities. Details of the tasks considered are as follows:
1. PENDULUMSWINGUP: the agent tries to swing up and stabilize a pendulum by applying torque at the hinge. The agent is provided with a distribution over the mass and length of the pendulum. The state of the system is given by (Θ, Θ̇), where Θ is the angular displacement. The cost function penalizes deviation from the upright position and the angular velocity. The initial state of the system is randomized after every episode, which is 10 seconds long.
2. BALLINCUPSPARSE: a sparse version of the ball-in-cup task inspired by Tassa et al. (2018). Given a cup and a spherical ball attached by a tendon, the goal is to swing and catch the ball. The agent can actuate motors on the two slide joints of the cup and is provided with a biased distribution over the mass of the ball, its moment of inertia, and the stiffness of the tendon. A cost of 1 is incurred at every timestep and 0 if the ball is in the cup. The position of the ball is randomized after every episode, which is 4 seconds long. An episode is successful if the agent catches the ball in the cup.
3. FETCHPUSHBLOCK: proposed by Plappert et al. (2018), the agent position-controls the end-effector of a simulated Fetch robot to push a block to a goal location on a table. The cost is the distance between the center of mass of the block and the goal. We provide the agent with a biased distribution over the mass, moment of inertia, friction coefficients, and size of the object. An episode is considered successful if the agent gets the block within 5 cm of the goal in 4 seconds. The positions of both the block and the goal are randomized after every episode.
4. FRANKADRAWEROPEN: the agent velocity-controls a 7-DOF Franka Panda arm to open a drawer on a cabinet. A simple cost function based on the Euclidean distance and relative orientation of the end effector with respect to the handle, and the displacement of the slide joint on the drawer, is used. The agent is provided a biased distribution over the damping and friction loss of the robot and drawer joints. Every episode is 4 seconds long, after which the agent's starting configuration is randomized. Success corresponds to opening the drawer to within 1 cm of the target displacement.
The parameters we selected to randomize are reasonable in real-world scenarios, since estimating quantities like moment of inertia and friction coefficients is especially error prone. All our experiments are performed on a desktop with 12 Intel Core i7-3930K @ 3.20GHz CPUs and 32 GB RAM, with only a few hours of CPU training. Q-functions are parameterized with feed-forward neural networks that take as input an observation vector and action. Refer to A.1 for a detailed explanation of the tasks.
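As a concrete illustration of this parameterization (two hidden layers of 100 tanh units over the concatenated observation and action, trained with Adam at a learning rate of 0.001; see A.1), a hedged sketch follows. The choice of PyTorch is our assumption, as the paper does not name a framework.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Soft Q function: (observation, action) -> scalar cost-to-go estimate."""
    def __init__(self, obs_dim, act_dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

q_fn = QNetwork(obs_dim=12, act_dim=2)   # e.g., BallInCupSparse dimensions
optimizer = torch.optim.Adam(q_fn.parameters(), lr=1e-3)
```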
5.2 BASELINES
We compare MPQ with a fixed horizon H against three baselines: MPPI using the same horizon as MPQ and no terminal value function, MPPI using a longer horizon, and SOFTQLEARNING. For SOFTQLEARNING we additionally use a target network to stabilize learning, whereas MPQ does not use target networks.
5.3 ANALYSIS OF OVERALL PERFORMANCE
5.3.1 COMPARISON OF MPQ WITH MPPI AND SOFT Q LEARNING
We test the hypotheses that (1) using a soft Q function as the terminal cost can improve MPPI performance even with a shorter horizon, (2) learning from real data can mitigate the effects of model error by adapting to the true system dynamics, and (3) finite horizon optimization leads to faster convergence than soft Q Learning, especially in sparse reward tasks. Fig. 1 shows the training curves for MPQ versus soft Q learning. Online optimization is able to improve upon the inaccuracies of the Q function and lead to faster convergence. In sparse reward tasks such as BALLINCUPSPARSE and in the high-dimensional FETCHPUSHBLOCK and FRANKADRAWEROPEN, SOFTQLEARNING is unable to learn a consistent policy, whereas MPQ improves very rapidly. Additionally, an MPQ agent with a short horizon consistently outperforms MPPI with a much longer horizon in all tasks. This can be attributed to (1) larger model bias in longer horizon optimization, (2) the hardness of optimizing longer sequences, and (3) the global information encapsulated in the Q function. Using a shorter horizon has added computational benefits as well. Since the Q function is learned using data generated from the true system parameters, it is not affected by model bias. In FETCHPUSHBLOCK, the agent outperforms MPPI with H=64 within the first 30 episodes of training, which corresponds to roughly 2 minutes of experience in simulation with the true parameters. In contrast, SOFTQLEARNING barely ever moves the block, and MPPI with H=10 succeeds only when the arm is close to the initial position of the block. Similarly, in FRANKADRAWEROPEN, the agent with H=10 achieves a success rate more than 5 times that of MPPI with H=10 and outperforms MPPI with H=64 as well. The consistent results across all the different tasks demonstrate the robustness and scalability of MPQ.
5.3.2 BENEFIT OF MPQ OVER DOMAIN RANDOMIZATION
Domain randomization (DR) techniques aim to make a policy learned in simulation robust to modeling errors by randomizing the simulation parameters using manually chosen distributions. However, such policies can be far from optimal under the true system parameters, as the learned Q function is inherently biased. In contrast, a Q function learned using rollouts from the real system can overcome model bias. We validate this hypothesis on the BALLINCUPSPARSE task by taking a DR approach inspired by Peng et al. (2018). Simulated rollouts are generated by sampling different parameters at every timestep from the broad range of dynamics parameters in Table 1, whereas real system rollouts use the true parameters. The average success rate reported in Table 2 demonstrates that a Q function learned solely using DR is unable to generalize to the true system parameters, and that MPQ achieves more than twice the success rate when learned on the real system.
6 DISCUSSION
In this work we have presented a theoretical connection between information theoretic MPC and soft Q learning approaches that naturally provides an algorithm to combine stochastic optimal control and model-free reinforcement learning. The theoretical insight not only ties together the different fields, but opens avenues to designing pragmatic RL algorithms that leverage the benefits of both. However, some important questions are yet to be answered. The optimal horizon for MPC is inextricably tied with the model error and optimization artifacts. Investigating this dependence in a principled manner is important for real-world applications. Another interesting avenue of research is characterizing the performance of a parameterized Q function and using it to adapt the horizon of MPC rollouts for smarter exploration.
A APPENDIX
A.1 FURTHER EXPERIMENTAL DETAILS
The learned Q function takes as input the current action and an observation vector per task: 1. PENDULUMSWINGUP: [cos(Θ), sin(Θ), Θ̇] (3 dim)
2. BALLINCUPSPARSE: [xball, xtarget, ẋball, ẋtarget, xtarget − xball, cos(Θ), sin(Θ)] (12 dim), where Θ is the angle of the line joining the ball and the target.
3. FETCHPUSHBLOCK: [xgripper, xobj, xobj − xgripper, gripper opening, rotobj, ẋobj, ωobj, gripper opening vel, ẋgripper, d(gripper, obj), xgoal − xobj, d(goal, obj), xgoal] (33 dim)
4. FRANKADRAWEROPEN: [xee, xh, xh − xee, ẋee, ẋh, quatee, quath, drawerdisp, d(ee, h), dquat(ee, h), dangee,h] (39 dim)
For all our experiments we parameterize the Q functions with feedforward neural networks with two hidden layers of 100 units each and tanh activations. We use Adam (Kingma & Ba, 2014) optimization with a learning rate of 0.001. For generating value function targets in Eq. (22), we use 3 iterations of MPPI optimization, except for FRANKADRAWEROPEN where we use 1. The MPPI parameters used are listed in Table 3. | 1. What is the main contribution of the paper regarding policy search acceleration using imperfect dynamics?
2. What are the strengths and weaknesses of the proposed algorithm in terms of its performance and comparison with other methods?
3. Do you have any concerns or suggestions regarding the experimental setup and results?
4. How does the reviewer assess the significance and impact of the work in the field of reinforcement learning?
5. Are there any minor comments or suggestions regarding the writing style, citations, and grammar? | Review | Review
This paper studies how a known, imperfect dynamics can be used to accelerate policy search. The main contribution of this paper is a model-based control algorithm that uses n-step lookahead planning and estimates the value of the last state with the prediction from a learned, soft Q value. The n-step planning uses the imperfect dynamics model, which can be cheaply queried offline. A second contribution of the paper is an efficient, iterative procedure for optimizing the n-step actions w.r.t. the imperfect model. The proposed algorithm is evaluated on three continuous control tasks. The proposed algorithm outperforms the two baselines on one of the three tasks. Perhaps the most impressive aspect of the paper is that the experiments only require a few minutes of "real-world" interaction to solve the tasks.
I am leaning towards rejecting this paper. My main reservation is that the empirical results are not very compelling. In Figure 1, it seems like the proposed method (MPQ) only beats that MPPI baseline in BallInCupSparse. The paper seems remiss to not include comparisons to any recent MBRL algorithms (e.g., [Chua 2018, Kurutach 2018, Janner 2019]). Moreover, the tasks considered are all quite simple. Finally, for a paper claiming to "pave the way for deploying reinforcement learning algorithms on real-robots," it seems important to show results on real robots, or at least on a very close approximation thereof. Instead, since the experiments are done in simulation and the true model is known exactly, the paper constructs an approximate model that is a noisy version of the true model.
I do think that this paper is tackling an important problem, and am excited to see work in this area. I would consider increasing my review if the paper were revised to include a comparison to a state-of-the-art MBRL method, if it included experiments on more complex tasks, and if the proposed method were shown to consistently outperform most baselines on most tasks.
Minor comments:
* The derivation of the update rule in Section 4.2 was unclear to me. In Equation 17, why can the RHS not be computed directly? In Equation 21, where did this iterative procedure come from?
* "Reinforcement Learning" -- No need to capitalize
* "sim-to-real approaches … suffer from optimization challenges such as local minima and hand designed distributions." -- Can you provide a citation for the "local minima" claim? The part about "hand designed distributions" seems to have a grammar error.
* "Model Predictive Control" -- No need to capitalize.
* "Computer Go" -- How is this different from the board game Go?
* "Entropy-regularized" -- Missing a period before this.
* Eq 1 -- The "\forall s_0" suggests that this equation depends on s_0. Can you clarify this dependency?
* "stochastic gradient descent" -- I believe that [Williams 1992] is a relevant citation here.
----------------------UPDATE AFTER AUTHOR RESPONSE------------------------
Thanks for the detailed response! The new experiments seem strong, and help convince me that the method isn't restricted to simple tasks. I'm inclined to agree that it's not overly onerous to assume access to an imperfect model of the world. Because of this, I will increase my vote to "weak accept."
In the next version, I hope that the authors (1) include multiple random seeds for all experiments, and (2) study the degree to which model misspecification degrades performance (i.e., if you use the wrong model, how much worse do you do?).
ICLR | Title
Information Theoretic Model Predictive Q-Learning
Abstract
Model-free reinforcement learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply, and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real-world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, model predictive control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy-regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real robots in a systematic manner.
1 INTRODUCTION
Deep reinforcement learning algorithms have recently generated great interest due to their successful application to a range of difficult problems, including the game of Go (Silver et al., 2016) and high-dimensional control tasks such as humanoid locomotion (Lillicrap et al., 2015; Schulman et al., 2015). While these methods are extremely general and can learn policies and value functions for complex tasks directly from data, they can also be sample inefficient, and partially optimized solutions can be arbitrarily poor. These challenges severely restrict RL's applicability to real systems such as robots due to data collection challenges and safety concerns.
One straightforward way to mitigate these issues is to learn a policy or value function entirely in a high-fidelity simulator (Shah et al., 2017; Todorov et al., 2012) and then deploy the optimized policy on the real system. However, this approach can fail due to model bias, external disturbances, subtle differences in the real robot's hardware, and poorly modeled phenomena such as friction and contact dynamics. Sim-to-real transfer approaches based on domain randomization (Sadeghi & Levine, 2016; Tobin et al., 2017) and model ensembles (Kurutach et al., 2018; Shyam et al., 2019) aim to make the policy robust by training it to be invariant to varying dynamics. However, learning a globally consistent value function or policy is hard due to optimization issues such as local optima and covariate shift between the exploration policy used for learning the model and the actual control policy executed on the task (Ross & Bagnell, 2012).
Model predictive control (MPC) is a widely used method for generating feedback controllers that repeatedly re-optimize a finite horizon sequence of controls using an approximate dynamics model that predicts the effect of these controls on the system. The first control in the optimized sequence is executed on the real system and the optimization is performed again from the resulting next state. However, the performance of MPC can suffer due to approximate or simplified models and limited lookahead. Therefore the parameters of MPC, including the model and the horizon H, should be carefully tuned to obtain good performance. While using a longer horizon is generally preferred, real-time requirements may limit the amount of lookahead, and a biased model can result in compounding model errors.
In this work, we present an approach to RL that leverages the complementary properties of model-free reinforcement learning and model-based optimal control. Our proposed method views MPC as a way to simultaneously approximate and optimize a local Q function via simulation, and Q learning as a way to improve MPC using real-world data. We focus on the paradigm of entropy-regularized reinforcement learning, where the aim is to learn a stochastic policy that minimizes the cost-to-go as well as the KL divergence with respect to a prior policy. This approach enables faster convergence by mitigating the over-commitment issue in the early stages of Q-learning and better exploration (Fox et al., 2015). We discuss how this formulation of reinforcement learning has deep connections to information theoretic stochastic optimal control, where the objective is to find control inputs that minimize the cost while staying close to the passive dynamics of the system (Theodorou & Todorov, 2012). This helps both in injecting domain knowledge into the controller and in mitigating issues caused by over-optimizing the biased estimate of the current cost due to model error and the limited horizon of optimization. We explore this connection in depth and derive an infinite horizon information theoretic model predictive control algorithm based on Williams et al. (2017). We test our approach, called Model Predictive Q Learning (MPQ), on simulated continuous control tasks and compare it against information theoretic MPC and soft Q-learning (Haarnoja et al., 2017), where we demonstrate faster learning with fewer system interactions and better performance as compared to MPC and soft Q-learning, even in the presence of sparse rewards. The learned Q function allows us to truncate the MPC planning horizon, which provides additional computational benefits. Finally, we also compare MPQ against domain randomization (DR) on sim-to-sim tasks. We conclude that DR approaches can be sensitive to the hand-designed distributions used for randomizing parameters, which causes the learned Q function to be biased and suboptimal under the true system's parameters, whereas learning from data generated on the true system is able to overcome biases and adapt to the real dynamics.
2 RELATED WORK
Model predictive control has a rich history in robotics, ranging from control of mobile robots such as quadrotors (Desaraju & Michael, 2016) and aggressive autonomous vehicles (Wagener et al., 2019; Williams et al., 2017) to generating complex behaviors for high-dimensional systems such as contact-rich manipulation (Fu et al., 2016; Kumar et al., 2014) and humanoid locomotion (Erez et al., 2013). The success of MPC can largely be attributed to online policy optimization which helps mitigate model bias. The information theoretic view of MPC aims to find a policy at every timestep that minimizes the cost over a finite horizon as well as the KL-divergence with respect to a prior policy usually specified by the system’s passive dynamics (Theodorou & Todorov, 2012; Williams et al., 2017). This helps maintain exploratory behavior and avoid over-commitment to the current estimate of the cost function, which is biased due to modeling errors and a finite horizon. Sampling-based MPC algorithms (Wagener et al., 2019; Williams et al., 2017) are also highly parallelizable enabling GPU implementations that aid with real-time control. However, efficient MPC implementations still require careful system identification and extensive amounts of manual tuning.
Deep RL methods are extremely general and can optimize neural network policies from raw sensory inputs with little knowledge of the system dynamics. Both value-based and policy-based approaches (Schulman et al., 2015) have demonstrated excellent performance on complex control problems. These approaches, however, fall short on several accounts when applied to a real robotic system. First, they have high sample complexity, potentially requiring millions of interactions with the environment. This can be very expensive on a real robot, not least because the initial performance of the policy can be arbitrarily bad. Using random exploration methods such as ε-greedy can further aggravate this problem. Second, a value function or policy learned entirely in simulation inherits the biases of the simulator. Even if a perfect simulation is available, learning a globally consistent value function or policy is an extremely hard task, as noted in (Silver et al., 2016; Zhong et al., 2013). This can be attributed to local optima when using neural network representations or to the inherent biases in the Q learning update rules (Fox et al., 2015; Van Hasselt et al., 2016). In fact, it can be difficult to explain why Q-learning algorithms work or fail (Schulman et al., 2017).
Domain randomization aims to make policies learned in simulation more robust by randomizing simulation parameters during training, with the aim of making the policies invariant to potential parameter error (Peng et al., 2018; Sadeghi & Levine, 2016; Tobin et al., 2017). However, these policies are not adaptive to unmodeled effects, i.e., they take into account only aleatoric and not epistemic uncertainty. Also, such approaches are highly sensitive to the hand-designed distributions used for randomizing simulation parameters and can be highly suboptimal under the real system's parameters, for example, if a very large range of simulation parameters is used. Model-based approaches aim to use real data to improve the model of the system and then perform reinforcement learning or optimal control using the new model or an ensemble of models (Kurutach et al., 2018; Ross & Bagnell, 2012; Shyam et al., 2019). Although learning accurate models is a promising avenue, we argue that learning a globally consistent model is an extremely hard problem, and instead we should learn a policy that can rapidly adapt to experienced real-world dynamics.
The use of entropy regularization has been explored in RL and inverse RL for its better sample efficiency and exploration properties (Fox et al., 2015; Haarnoja et al., 2017; 2018; Schulman et al., 2017; Ziebart et al., 2008). This framework allows incorporating prior knowledge into the problem and learning multi-modal policies that can generalize across different tasks. Fox et al. (2015) analyze the theoretical properties of the update rule derived using mutual information minimization and show that this framework can overcome the over-estimation issue inherent in the vanilla Q-learning update. In the past, Todorov (2009) has shown that using the KL-divergence can convert the optimal control problem into one that is linearly solvable.
Infinite horizon MPC aims to learn a terminal cost function that can add global information to the finite horizon optimization. Rosolia & Borrelli (2017) learn a terminal cost as a control Lyapunov function and a safety set for the terminal state. These quantities are calculated using all previously visited states, and they assume the presence of a controller that can deterministically drive any state to the goal. Tamar et al. (2017) learn a cost shaping to make a short horizon MPC mimic the actions produced by a long horizon MPC offline. However, since their approach is to mimic a longer horizon MPC, the performance of the learner is fundamentally limited by the performance of the longer horizon MPC. In contrast, learning an optimal value function as the terminal cost can potentially lead to close to optimal performance.
Using local optimization is an effective way of improving an imperfect value function, as noted in the RL literature by Anthony et al. (2017); Lowrey et al. (2018); Silver et al. (2016; 2017); Sun et al. (2018). However, these approaches assume that a perfect model of the system is available. In order to make the policy work on the real system, we argue that it is essential to learn a value function from real data and utilize local optimization to stabilize learning.
3 PRELIMINARIES
3.1 REINFORCEMENT LEARNING WITH ENTROPY REGULARIZATION
A Markov Decision Process (MDP) is defined by the tuple (S, A, c, P, γ), where S is the state space, A is the action space, c is a one-step cost function, P is the space of transition functions, and γ is a discount factor. Let P ∈ P be a particular transition function. A closed-loop policy π(a|s) is a distribution over actions given state. Given a policy π and a prior policy π̄, the KL divergence between them at a state is given by KL(π(a|s) ‖ π̄(a|s)) = E_π[log(π(a|s)/π̄(a|s))]. Entropy-regularized RL (Fox et al., 2015) aims to optimize the following objective
$$\pi^* = \arg\min_{\pi} \mathbb{E}_{\pi, P}\Big[\sum_{t=1}^{\infty} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big)\Big] \quad \forall\, s_0 \in \mathcal{S} \qquad (1)$$
where π_t and π̄_t are shorthand for π(a_t|s_t) and π̄(a_t|s_t) respectively, and λ is the temperature parameter that penalizes deviation of π from π̄. For a policy π, we can define the soft value and action-value functions
$$V^\pi(s) = \mathbb{E}_{\pi, P}\Big[\sum_{t=1}^{\infty} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big) \,\Big|\, s_0 = s\Big] \qquad Q^\pi(s, a) = c(s, a) + \gamma\, \mathbb{E}_{s' \sim P(s'|s,a)}\big[V^\pi(s')\big] \qquad (2)$$
Given a horizon of H timesteps, we can use the above definitions to write the value functions as
$$V^\pi(s) = \mathbb{E}_{\pi, P}\Big[\sum_{t=1}^{H-1} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big) + \gamma^{H-1} V^\pi(s_H) \,\Big|\, s_1 = s\Big]$$
$$Q^\pi(s, a) = c(s, a) + \mathbb{E}_{\pi, P}\Big[\sum_{t=2}^{H-1} \gamma^{t-1}\big(c(s_t, a_t) + \lambda\, \mathrm{KL}(\pi_t \,\|\, \bar{\pi}_t)\big) + \gamma^{H-1}\big(\lambda\, \mathrm{KL}(\pi_H \,\|\, \bar{\pi}_H) + Q(s_H, a_H)\big) \,\Big|\, s_1 = s,\, a_1 = a\Big] \qquad (3)$$
It is straightforward to verify that V^π(s) = E_{a∼π}[λ log(π(a|s)/π̄(a|s)) + Q^π(s, a)]. The objective in Eq. (1) can equivalently be written as
$$\pi^* = \arg\min_{\pi} V^\pi(s) \quad \forall\, s \in \mathcal{S} \qquad (4)$$
This optimization can be performed either by policy gradient methods that aim to find the optimal policy π ∈ Π via stochastic gradient descent (Schulman et al., 2017; Williams, 1992) or value based methods that try to iteratively approximate the value function of the optimal policy. In either case, the output of solving the above optimization is a global closed-loop control policy π∗(a|s).
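As a minimal numerical illustration of the KL term in this objective for a discrete action set (illustrative numbers, uniform prior; not an example from the paper):

```python
import numpy as np

def kl_divergence(pi, pi_bar):
    """KL(pi || pi_bar) = E_pi[ log(pi / pi_bar) ] for a discrete action set."""
    pi, pi_bar = np.asarray(pi, dtype=float), np.asarray(pi_bar, dtype=float)
    return float(np.sum(pi * np.log(pi / pi_bar)))

print(kl_divergence([0.7, 0.2, 0.1], [1/3, 1/3, 1/3]))  # deviation from prior
```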
3.2 INFORMATION THEORETIC MPC
Solving the above optimization can be prohibitively expensive and hard to accomplish online. In contrast to RL, MPC performs online optimization of a simple policy class with a truncated horizon. This process effectively creates a closed-loop controller. In order to do so, MPC algorithms such as MPPI (Williams et al., 2017) use an approximate dynamics model P̂, which can be deterministic. This is the case when using a simulator such as MuJoCo (Todorov et al., 2012) as the dynamics model. At timestep t, starting from the current state s_t, an open-loop sequence of actions A = (a_t, a_{t+1}, …, a_{t+H}) is sampled from the control distribution denoted by π(A). The objective is to find an optimal sequence of actions that solves
$$A^* = \arg\min_{A}\, \mathbb{E}_{\pi(A)}\Big[\sum_{l=t}^{t+H-1} \gamma^{l-t} c(s_l, a_l) + \lambda\, \mathrm{KL}(\pi_l \,\|\, \bar{\pi}_l) + \gamma^{H-1}\big(c_f(s_{t+H}, a_{t+H}) + \lambda\, \mathrm{KL}(\pi_{t+H} \,\|\, \bar{\pi}_{t+H})\big)\Big] \qquad (5)$$
where c_f(s_{t+H}, a_{t+H}) is a terminal cost function and π̄(A) is the passive dynamics of the system, i.e., the distribution over actions produced when the control input is zero. The first action in the sequence is then executed on the system and the optimization is performed again from the resulting next state. The re-optimization and entropy regularization help in mitigating model bias and inaccuracies in optimization by avoiding overcommitment to the current estimate of the cost. A shortcoming of the MPC procedure is the finite horizon. This is especially pronounced in tasks with sparse rewards, where a short horizon can make the agent myopic to future rewards. To mitigate this, an approach known as infinite horizon MPC sets the terminal cost c_f to a value function that adds global information to the problem.
In the next section, we build our approach by focusing on the MPPI algorithm and its relationship with entropy-regularized reinforcement learning. Specifically, we use the definitions of the soft value functions from Eq. (2) to derive an optimal Boltzmann distribution for H-step actions that optimally solves the infinite horizon control problem. This lets us derive the MPPI update rule from Williams et al. (2017) for the infinite horizon case, which then leads to our Model Predictive Q Learning (MPQ) algorithm, which utilizes a predictive model for Q updates and stochastic optimal control as the policy. In the case where H = 1, the algorithm is equivalent to soft Q learning (Haarnoja et al., 2018) or G-learning (Fox et al., 2015).
4 APPROACH
4.1 OPTIMAL H-STEP BOLTZMANN DISTRIBUTION
Let $\pi(A)$ and $\bar{\pi}(A)$ be the joint control distribution and prior over $H$-horizon open-loop actions. The distributions are assumed to be independent over timesteps, i.e. $\pi(a_1 \ldots a_H) = \prod_{t=1}^{H}\pi_t$, where $\pi_t$ is shorthand for $\pi(a_t)$. Since $\hat{P}$ is deterministic, the following equations hold

$$V^\pi(s) = \mathbb{E}_{\pi}\left[\lambda\log(\pi/\bar{\pi}) + Q^\pi(s,a)\right], \qquad Q^\pi(s,a) = c(s,a) + \gamma V^\pi(s') \tag{6}$$
For clarity, we consider $\gamma = 1$. Substituting from Eq. (3) for $Q^\pi(s,a)$,

$$V^\pi(s) = \mathbb{E}_{\pi_1\ldots\pi_H}\left[\sum_{t=1}^{H-1} c(s_t,a_t) + \lambda\sum_{t=1}^{H}\log(\pi_t/\bar{\pi}_t) + Q^\pi(s_H,a_H)\right]$$
$$= \mathbb{E}_{\pi_1\ldots\pi_H}\left[\sum_{t=1}^{H-1} c(s_t,a_t) + \lambda\log\prod_{t=1}^{H}(\pi_t/\bar{\pi}_t) + Q^\pi(s_H,a_H)\right]$$
$$= \mathbb{E}_{\pi}\left[\sum_{t=1}^{H-1} c(s_t,a_t) + \lambda\log(\pi/\bar{\pi}) + Q^\pi(s_H,a_H)\right] \tag{7}$$
Consider the following distribution over the $H$-horizon:

$$\pi = \frac{1}{\eta}\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t,a_t) + Q^\pi(s_H,a_H)\right)\right)\bar{\pi}(a_1\ldots a_H) \tag{8}$$

where $\eta$ is a normalization constant given by

$$\eta = \mathbb{E}_{\bar{\pi}(a_1\ldots a_H)}\left[\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t,a_t) + Q^\pi(s_H,a_H)\right)\right)\right] \tag{9}$$
We show that this is the optimal control distribution, as $\nabla V^\pi(s) = 0$. Substituting Eq. (8) in (7),

$$V^\pi(s) = \mathbb{E}_{\pi}\left[\sum_{t=1}^{H-1} c(s_t,a_t) - \lambda\log\eta - \sum_{t=1}^{H-1} c(s_t,a_t) - Q^\pi(s_H,a_H) + Q^\pi(s_H,a_H)\right] = \mathbb{E}_{\pi}\left[-\lambda\log\eta\right]$$

Since $\eta$ is a constant, we have $V^\pi(s) = -\lambda\log\eta$. Hence, for $\pi$ in Eq. (8), the soft value function is a constant with gradient zero, given by

$$V^{\pi^*}(s) = -\lambda\log\,\mathbb{E}_{\bar{\pi}(a_1\ldots a_H)}\left[\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t,a_t) + Q^\pi(s_H,a_H)\right)\right)\right] \tag{10}$$

which is often referred to in the optimal control literature as the “free energy” of the system (Theodorou & Todorov, 2012; Williams et al., 2017). For $H = 1$, Eq. (10) takes the form of the soft value function from Fox et al. (2015) and Haarnoja et al. (2018).
4.2 INFINITE HORIZON MPPI UPDATE RULE
Similar to Williams et al. (2017), we derive the MPPI update rule for online policy optimization. Since sampling actions from the optimal control distribution in Eq. (8) is intractable, we consider control policies $\pi(A) \in \Pi$ which are easy to sample from. We then optimize for a vector of $H$ control inputs $U$, such that the resulting action distribution minimizes the KL divergence with the optimal policy:

$$U^* = \arg\min_{\pi(A)} \mathrm{KL}\left(\pi^*(A)\,\|\,\pi(A)\right) \tag{11}$$
The objective can be expanded out as

$$\mathrm{KL}(\pi^*(A)\|\pi(A)) = \int_A \pi^*(A)\log\frac{\pi^*(A)}{\pi(A)}\,dA = \int_A \pi^*(A)\log\left(\frac{\pi^*(A)}{\bar{\pi}(A)}\cdot\frac{\bar{\pi}(A)}{\pi(A)}\right)dA$$
$$= \int_A \pi^*(A)\log\frac{\pi^*(A)}{\bar{\pi}(A)}\,dA - \int_A \pi^*(A)\log\frac{\pi(A)}{\bar{\pi}(A)}\,dA \tag{12}$$

Since the first term does not depend on the control input, we can remove it from the optimization:

$$U^* = \arg\max_{\pi(A)} \int_A \pi^*(A)\log\frac{\pi(A)}{\bar{\pi}(A)}\,dA \tag{13}$$
Consider $\Pi$ to be independent multivariate Gaussians over the sequence of $H$ controls with constant covariance $\Sigma$ at each timestep. We can write the control distribution and prior as follows:

$$\pi(A) = \frac{1}{Z}\prod_{t=1}^{H}\exp\left(-\frac{1}{2}(u_t - a_t)^\top\Sigma^{-1}(u_t - a_t)\right), \qquad \bar{\pi}(A) = \frac{1}{Z}\prod_{t=1}^{H}\exp\left(-\frac{1}{2}a_t^\top\Sigma^{-1}a_t\right) \tag{14}$$

where $u_t$ and $a_t$ are the control inputs and actions at timestep $t$ and $Z$ is the normalization constant. Here the prior corresponds to the passive dynamics of the system (Theodorou & Todorov, 2012; Williams et al., 2017), although other choices of prior are possible. Substituting in Eq. (13) we get
$$U^* = \arg\max_{\pi(A)} \int \pi^*(A)\left(\sum_{t=1}^{H} -\frac{1}{2}u_t^\top\Sigma^{-1}u_t + u_t^\top\Sigma^{-1}a_t\right)dA \tag{15}$$

The objective can be simplified to the following by integrating out the probability in the first term:

$$\sum_{t=1}^{H} -\frac{1}{2}u_t^\top\Sigma^{-1}u_t + u_t^\top\int \pi^*(A)\,\Sigma^{-1}a_t\,dA \tag{16}$$

Since this is a concave function with respect to every $u_t$, we can find the maximum by setting its gradient with respect to $u_t$ to zero to solve for the optimal $u_t^*$:
$$u_t^* = \int \pi^*(A)\,a_t\,dA = \int \pi(A)\,\frac{\pi^*(A)}{\bar{\pi}(A)}\frac{\bar{\pi}(A)}{\pi(A)}\,a_t\,dA = \mathbb{E}_{\pi(A)}\left[\frac{\pi^*(A)}{\bar{\pi}(A)}\frac{\bar{\pi}(A)}{\pi(A)}\,a_t\right] = \mathbb{E}_{\pi(A)}\left[w(A)\,a_t\right] \tag{17}$$

where the second equality comes from importance sampling, which converts the optimal controls into an expectation over the control distribution instead of the optimal distribution, which is impossible to sample from. The importance weight $w(A)$ can be written as follows (substituting $\pi^*$ from Eq. (8)):

$$w(A) = \frac{1}{\eta}\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t,a_t) + Q^{\pi^*}(s_H,a_H)\right)\right)\frac{\bar{\pi}(A)}{\pi(A)} \tag{18}$$
Making the change of variables $a_t = u_t + \epsilon_t$ for a noise sequence $\mathcal{E} = (\epsilon_1 \ldots \epsilon_H)$ sampled from independent Gaussians with zero mean and covariance $\Sigma$, we get

$$w(\mathcal{E}) = \frac{1}{\eta}\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, u_t+\epsilon_t) + \lambda\log\frac{\pi(U+\mathcal{E})}{\bar{\pi}(U+\mathcal{E})} + Q^{\pi^*}(s_H, u_H+\epsilon_H)\right)\right)$$
$$= \frac{1}{\eta}\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, u_t+\epsilon_t) + \lambda\sum_{t=1}^{H}\frac{1}{2}u_t^\top\Sigma^{-1}(u_t + 2\epsilon_t) + Q^{\pi^*}(s_H, u_H+\epsilon_H)\right)\right) \tag{19}$$
Note that $\eta$ is the optimal $H$-step free energy derived in Eq. (10) and can be estimated from $N$ Monte-Carlo samples as

$$\eta = \sum_{n=1}^{N}\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t, u_t+\epsilon_t^n) + \lambda\sum_{t=1}^{H}\frac{1}{2}u_t^\top\Sigma^{-1}(u_t + 2\epsilon_t^n) + Q^{\pi^*}(s_H, u_H+\epsilon_H^n)\right)\right) \tag{20}$$

We can form the following iterative update rule, where at every iteration $i$ the sampled control sequence is updated according to

$$u_t^{i+1} = u_t^i + \alpha\sum_{n=1}^{N} w(\mathcal{E}^n)\,\epsilon_t^n \tag{21}$$

where $\alpha$ is a step-size parameter as proposed by Wagener et al. (2019). This gives us the infinite horizon MPPI update rule. For $H = 1$, this corresponds to soft Q-learning, where stochastic optimization is performed to solve for the optimal action online. We now develop a soft Q-learning algorithm that utilizes infinite horizon MPPI to generate actions as well as Q-targets.
4.3 THE INFORMATION THEORETIC MODEL PREDICTIVE Q-LEARNING ALGORITHM
Since we do not have access to $Q^{\pi^*}$, we cannot estimate the importance weight in Eq. (19) exactly. Hence, we consider Q functions parameterized by $\theta$, denoted by $Q_\theta(s,a)$. Similar to deep Q-learning algorithms, we maintain a replay buffer (Mnih et al., 2015) and update parameters by stochastic gradient descent on the loss $\mathcal{L} = \frac{1}{K}\sum_{i=1}^{K}\left(y_i - Q_\theta(s_i,a_i)\right)^2$ for a batch of $K$ experience tuples $(s,a,c,s')$ sampled from the buffer, where the targets $y_i$ are given by

$$y = c(s,a) - \lambda\log\,\mathbb{E}_{\pi^*(a_1\ldots a_H)}\left[\exp\left(-\frac{1}{\lambda}\left(\sum_{t=1}^{H-1} c(s_t,a_t) + \lambda\log\frac{\pi^*_t}{\bar{\pi}_t} + Q_\theta(s_H,a_H)\right)\right)\,\middle|\; s_1 = s'\right] \tag{22}$$
Since the value function updates are performed offline, we can utilize a large amount of computation (Tamar et al., 2017) to calculate $\pi^*(a_1 \ldots a_H)$. In our case it is obtained by performing the infinite horizon MPPI update in Eq. (21) for multiple iterations starting from state $s'$. This allows for directed exploration at a state, which leads to a better approximation of the free energy (akin to approaches such as Covariance Matrix Adaptation, except that MPPI does not adapt the covariance). This especially helps in the early stages of learning by providing better quality targets than a random Q function. Intuitively, this update rule leverages the biased dynamics model $\hat{P}$ for $H$ steps and a soft Q function at the end learned from interactions with the real system.
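A minimal sketch of this target computation follows, under the same assumptions as the MPPI sketch above; `mppi_iter` and `free_energy_at` are hypothetical helpers wrapping the Eq. (21) update and the Eq. (10) free-energy estimate.

```python
def soft_q_target(c_sa, s_next, U_init, mppi_iter, free_energy_at, n_iters=3):
    # Refine an H-step control sequence at the next state with a few MPPI
    # iterations to approximate pi*, then form the Eq. (22) target as the
    # observed cost plus the resulting free-energy (soft value) estimate.
    U = U_init
    for _ in range(n_iters):
        U = mppi_iter(s_next, U)
    return c_sa + free_energy_at(s_next, U)
```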
At every timestep $t$ during online rollouts, an $H$-horizon sequence of actions is optimized using a single iteration of the infinite horizon MPPI update rule in Eq. (21), and the first action is executed on the system. Online optimization with predictive models can look ahead to produce better actions than acting greedily with respect to the biased Q function, and it makes ad-hoc exploration strategies such as $\epsilon$-greedy unnecessary. Using predictive models for generating value targets and online policy optimization helps accelerate convergence, as we demonstrate in our experiments in the next section. Algorithm 1 shows the complete MPQ algorithm.
A closely related approach in literature is (Lowrey et al., 2018) which also uses online MPC and offline value function learning, however they assume access to the true dynamics of the system and do not explore the connection between MPPI and entropy regularized RL and hence do not use free energy targets, even though they use MPPI in their implementation.
Algorithm 1: MPQ
Input: Approximate model $\hat{P}$, initial Q function parameters $\theta_1$, experience buffer $\mathcal{D}$
Parameter: Number of episodes $N$, length of episode $T$, planning horizon $H$, number of update episodes $N_{update}$, minibatch-size $K$, number of minibatches $M$
1: for $i = 1 \ldots N$ do
2:   for $t = 1 \ldots T$ do
3:     $(a_t, \ldots, a_{t+H}) \leftarrow$ infinite horizon MPPI (Eq. (21))
4:     Execute $a_t$ on the real system to obtain $c(s_t, a_t)$ and next state $s_{t+1}$
5:     $\mathcal{D} \leftarrow \mathcal{D} \cup \{(s_t, a_t, c, s_{t+1})\}$
6:   if $i \,\%\, N_{update} == 0$ then
7:     Sample $M$ minibatches of size $K$ from $\mathcal{D}$
8:     Generate targets using Eq. (22) and update parameters to $\theta_{i+1}$
9: return $\theta_N$ or the best $\theta$ on validation.
5 EXPERIMENTS
We perform experiments to test the efficacy of MPQ in overcoming the shortcomings of stochastic optimal control and model free RL in terms of convergence rate, computational requirements and model bias. We also compare MPQ against domain randomization for learning policies that perform well on systems for which accurate models are not known.
5.1 EXPERIMENTAL SETUP
We test our approach on sim-to-sim continuous control tasks based on the MuJoCo simulator (Todorov et al., 2012) to study the properties of the algorithm in a controlled manner. The agent is not provided with the true dynamics parameters, but a uniform distribution over them with a biased mean and added noise. This serves as a reasonable approximation of model bias due to inaccurate measurements of physical quantities. Details of the tasks considered are as follows:
1. PENDULUMSWINGUP: the agent tries to swing up and stabilize a pendulum by applying torque on the hinge. The agent is provided with a distribution over the mass and length of the pendulum. The state of the system is given by $(\Theta, \dot{\Theta})$, where $\Theta$ is the angular displacement. The cost function penalizes deviation from the upright position and angular velocity. The initial state of the system is randomized after every episode, which is 10 seconds long.
2. BALLINCUPSPARSE: a sparse version of the ball-in-cup task inspired by Tassa et al. (2018). Given a cup and a spherical ball attached by a tendon, the goal is to swing and catch the ball. The agent can actuate motors on the two slide joints on the cup and is provided with a biased distribution over the mass of the ball, its moment of inertia, and the stiffness of the tendon. A cost of 1 is incurred at every timestep, and 0 if the ball is in the cup. The position of the ball is randomized after every episode, which is 4 seconds long. An episode is successful if the agent catches the ball in the cup.
3. FETCHPUSHBLOCK: proposed by Plappert et al. (2018), the agent position-controls the end-effector of a simulated Fetch robot to push a block to a goal location on the table. The cost is the distance between the center of mass of the block and the goal. We provide the agent a biased distribution over the mass, moment of inertia, friction coefficients, and size of the object. An episode is considered successful if the agent gets the block within 5 cm of the goal in 4 seconds. The positions of both block and goal are randomized after every episode.
4. FRANKADRAWEROPEN: the agent velocity-controls a 7-DOF Franka Panda arm to open a drawer on a cabinet. A simple cost function based on the Euclidean distance and relative orientation of the end-effector with respect to the handle and the displacement of the slide joint on the drawer is used. The agent is provided a biased distribution over the damping and friction loss of the robot and drawer joints. Every episode is 4 seconds long, after which the agent's starting configuration is randomized. Success corresponds to opening the drawer within 1 cm of the target displacement.
The parameters we selected to randomize are reasonable in real-world scenarios, since estimating quantities like moment of inertia and friction coefficients is especially error prone. All our experiments are performed on a desktop with 12 Intel Core i7-3930K @ 3.20GHz CPUs and 32 GB RAM, with only a few hours of CPU training. Q-functions are parameterized with feed-forward neural networks that take as input an observation vector and action. Refer to A.1 for a detailed explanation of the tasks.
5.2 BASELINES
We compare MPQ with a fixed horizon $H$ against three baselines: MPPI using the same horizon as MPQ and no terminal value function, MPPI using a longer horizon, and SOFTQLEARNING. For SOFTQLEARNING we additionally use a target network to stabilize learning, whereas MPQ does not use target networks.
5.3 ANALYSIS OF OVERALL PERFORMANCE
5.3.1 COMPARISON OF MPQ WITH MPPI AND SOFT Q LEARNING
We test the hypotheses that (1) using a soft Q function as the terminal cost can improve MPPI performance even with a shorter horizon, (2) learning from real data can mitigate the effects of model error by adapting to the true system dynamics, and (3) finite horizon optimization leads to faster convergence compared to soft Q learning, especially in sparse reward tasks. Fig. 1 shows the training curves for MPQ versus soft Q learning. Online optimization is able to improve upon the inaccuracies of the Q function and leads to faster convergence. In sparse reward tasks such as BALLINCUPSPARSE and the high-dimensional FETCHPUSHBLOCK and FRANKADRAWEROPEN, SOFTQLEARNING is unable to learn a consistent policy, whereas MPQ improves very rapidly. Additionally, an MPQ agent with a short horizon consistently outperforms MPPI with a much longer horizon in all tasks. This can be attributed to (1) larger model bias in longer horizon optimization, (2) the hardness of optimizing longer sequences, and (3) the global information encapsulated in the Q function. Using a shorter horizon has added computational benefits as well. Since the Q function is learned using data generated from the true system parameters, it is not affected by model bias. In FETCHPUSHBLOCK, the agent outperforms MPPI with H=64 within the first 30 episodes of training, which corresponds to roughly 2 minutes of experience in simulation with the true parameters. In contrast, SOFTQLEARNING barely ever moves the block, and MPPI with H=10 succeeds only when the arm is close to the initial position of the block. Similarly, in FRANKADRAWEROPEN, the agent with H=10 achieves more than 5 times the success rate of MPPI with H=10 and outperforms MPPI with H=64 as well. The consistent results across all the different tasks demonstrate the robustness and scalability of MPQ.
5.3.2 BENEFIT OF MPQ OVER DOMAIN RANDOMIZATION
Domain randomization (DR) techniques aim to make a policy learned in simulation robust to modelling errors by randomizing the simulation parameters using manually chosen distributions. However, such policies can be far from optimal for the true system parameters, as the learned Q function is inherently biased. In contrast, a Q function learned using rollouts from the real system can overcome model bias. We validate this hypothesis on the BALLINCUPSPARSE task by taking a DR approach inspired by Peng et al. (2018). Simulated rollouts are generated by sampling different parameters at every timestep from the broad range of dynamics parameters in Table 1, whereas real system rollouts use the true parameters. The average success rate reported in Table 2 demonstrates that a Q function learned solely using DR is unable to generalize to the true system parameters, and MPQ achieves more than twice the success rate when learned on the real system.
6 DISCUSSION
In this work we have presented a theoretical connection between information theoretic MPC and soft Q learning approaches that naturally provides an algorithm to combine stochastic optimal control and model-free reinforcement learning. The theoretical insight not only ties together the different fields, but opens avenues to designing pragmatic RL algorithms that leverage the benefits of both. However, some important questions are yet to be answered. The optimal horizon for MPC is inextricably tied with the model error and optimization artifacts. Investigating this dependence in a principled manner is important for real-world applications. Another interesting avenue of research is characterizing the performance of a parameterized Q function and using it to adapt the horizon of MPC rollouts for smarter exploration.
A APPENDIX
A.1 FURTHER EXPERIMENTAL DETAILS
The learned Q function takes as input the current action and a per-task observation vector:

1. PENDULUMSWINGUP: $[\cos\Theta, \sin\Theta, \dot{\Theta}]$ (3 dim)
2. BALLINCUPSPARSE: $[x_{ball}, x_{target}, \dot{x}_{ball}, \dot{x}_{target}, x_{target} - x_{ball}, \cos\Theta, \sin\Theta]$ (12 dim), where $\Theta$ is the angle of the line joining ball and target.
3. FETCHPUSHBLOCK: $[x_{gripper}, x_{obj}, x_{obj} - x_{grip}, \text{gripper opening}, rot_{obj}, \dot{x}_{obj}, \omega_{obj}, \text{gripper opening vel}, \dot{x}_{gripper}, d(gripper, obj), x_{goal} - x_{obj}, d(goal, obj), x_{goal}]$ (33 dim)
4. FRANKADRAWEROPEN: $[x_{ee}, x_h, x_h - x_{ee}, \dot{x}_{ee}, \dot{x}_h, quat_{ee}, quat_h, \text{drawer disp}, d(ee, h), d_{quat}(ee, h), dang_{ee,h}]$ (39 dim)
For all our experiments we parameterize Q functions with feedforward neural networks with two hidden layers containing 100 units each and tanh activations. We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001. For generating value function targets in Eq. (22), we use 3 iterations of MPPI optimization, except for FRANKADRAWEROPEN where we use 1. The MPPI parameters used are listed in Table 3.
2. What are the strengths of the proposed Model Predictive Q-Learning algorithm?
3. What are the weaknesses of the paper regarding its experimental setup and comparisons with other works?
4. Do you have any concerns about the derivation and implementation of the additional Q^pi^*?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This paper builds a connection between information theoretical MPC and entropy-regularized RL and also develops a novel Q learning algorithm (Model Predictive Q-Learning). Experiments show that the proposed MBQ algorithm outperforms MPPI and soft Q learning in practice.
The paper is well-written. The derivation is clean and easy to understand. Adding a terminal Q function, which makes the horizon infinite, is a very natural idea. Most of the derivation is unrelated to the additional Q^pi^* (Eq 11-17), and Eq (18-20) are simply plugging in the formula of pi^*, which is derived at Eq (9), which is also quite natural if one knows the result for finite horizon MPPI (i.e., without the terminal Q). For experiments, I'd like to see some results on more complex environments (e.g., continuous control tasks in OpenAI Gym) and more comparison with recent model-based RL work.
Questions:
1. The experiment setup is that a uniform distribution of dynamics parameters (with biased mean and added noise) are used. Why not using a neural network?
2. Eq (22): The estimation for eta is unbiased. However, 1/eta might be biased, so is the estimator for w(E) also biased?
Minor Comments:
1. Eq (16): should the first term be negated?
2. Page 6, last paragraph: performved -> performed.
3. Page 7, second paragraph: produces -> produce.
4. Algorithm 1, L6: N % N_update -> i % N_update.
5. Appendix A.1, second paragraph: geneerating -> generating. |
ICLR | Title
Finding and only finding local Nash equilibria by both pretending to be a follower
Abstract
Finding Nash equilibria in two-player differentiable games is a classical problem in game theory with important relevance in machine learning. We propose double Follow-the-Ridge (double-FTR), an algorithm that locally converges to and only to local Nash equilibria in general-sum two-player differentiable games. To our knowledge, double-FTR is the first algorithm with such guarantees for general-sum games. Furthermore, we show that by varying its preconditioner, double-FTR leads to a broader family of algorithms with the same convergence guarantee. In addition, double-FTR avoids oscillation near equilibria due to the real-eigenvalues of its Jacobian at fixed points. Empirically, we validate the double-FTR algorithm on a range of simple zero-sum and general sum games, as well as simple Generative Adversarial Network (GAN) tasks.
1 INTRODUCTION
Much of the recent success in deep learning can be attributed to the effectiveness of gradient-based optimization. It is well known that for a minimization problem, with an appropriate choice of learning rate, gradient descent is guaranteed to converge to local minima (Lee et al., 2016; 2019). Based on this foundational result, an array of accelerated and higher-order methods have since been proposed and widely applied in training neural networks (Duchi et al., 2011; Kingma and Ba, 2014; Reddi et al., 2018; Zhang et al., 2019b).
However, once we leave the realm of minimization problems and consider the multi-agent setting, the optimization landscape becomes much more complicated. Multi-agent optimization problems arise in diverse fields such as robotics, economics and machine learning (Foerster et al., 2016; Von Neumann and Morgenstern, 2007; Goodfellow et al., 2014; Ben-Tal and Nemirovski, 2002; Gemp et al., 2020; Anil et al., 2021).
A classical abstraction that is especially relevant for machine learning is two-player differentiable games, where the objective is to find global or local Nash equilibria. The equivalent of gradient descent in such a game would be each agent applying gradient descent to minimize their own objective function. However, in stark contrast with gradient descent in solving minimization problems, this gradient-descent-style algorithm may converge to spurious critical points that are not local Nash equilibria, and in the general-sum game case, local Nash equilibria might not even be stable critical points for this algorithm (Mazumdar et al., 2020b)!
These negative results have driven a surge of recent interest in developing other gradient-based algorithms for finding Nash equilibria in differentiable games. Among them is Mazumdar et al. (2019), who proposed an update algorithm whose attracting critical points are only local Nash equilibria in the special case of zero-sum games. However, to the best of our knowledge, such guarantees have not been extended to general-sum games.
We propose double Follow-the-Ridge (double-FTR), a gradient-based algorithm for general-sum differentiable games that locally converges to and only to differential Nash equilibria. Double-FTR is closely related to the Follow-the-Ridge (FTR) algorithm for Stackelberg games (Wang et al., 2019), which converges to and only to local Stackelberg equilibria (Fiez et al., 2019). Double-FTR can be viewed as its counterpart for simultaneous games, where each player adopts the “follower” strategy in FTR.
The rest of this paper is organized as follows. In Section 2, we give background on two-player differentiable games and equilibrium concepts. We also explain the issues with using gradientdescent-style algorithm on such games. In Section 3, we present the double-FTR algorithm and prove its local convergence to and only to differential Nash equilibria. We also identify a more general class of algorithms that share these properties. We discuss recent works directly relevant to double-FTR in Section 4 and other related work in Section 5. In Section 6, we show empirical evidence of double-FTR’s convergence to and only to local Nash equilibria.
2 BACKGROUND
2.1 TWO-PLAYER DIFFERENTIABLE GAMES AND EQUILIBRIUM CONCEPTS
In a general-sum two-player differentiable game, player 1 aims to minimize $f : \mathbb{R}^{n+m} \to \mathbb{R}$ with respect to $x \in \mathbb{R}^n$, whereas player 2 aims to maximize $g : \mathbb{R}^{n+m} \to \mathbb{R}$ with respect to $y \in \mathbb{R}^m$. Following the notation in Mazumdar et al. (2019), we denote such a game as $\{(f, g), \mathbb{R}^{n+m}\}$. We also make the following assumption on the twice-differentiability of $f$ and $g$. Assumption 1. $\forall x \in \mathbb{R}^n, y \in \mathbb{R}^m$, $f$ and $g$ are twice-differentiable, and the second derivatives are continuous. Also, $\nabla^2_{xx}f$ and $\nabla^2_{yy}g$ are invertible.
For two rational, non-cooperative players, the optimal outcome is to achieve a local Nash equilibrium (Ratliff et al., 2013). A point $(x^*, y^*)$ is a local Nash equilibrium¹ of $\{(f, g), \mathbb{R}^{n+m}\}$ if there exist open sets $S_x \subset \mathbb{R}^n$, $S_y \subset \mathbb{R}^m$ such that $x^* \in S_x$, $y^* \in S_y$, and $f(x^*, y^*) \le f(x, y^*)$, $g(x^*, y^*) \ge g(x^*, y)$, $\forall x \in S_x$, $\forall y \in S_y$. A closely related notion of equilibrium is the differential Nash equilibrium (DNE) (Ratliff et al., 2013), which satisfies a second-order sufficient condition for local Nash equilibrium. Definition 2.1 (Differential Nash equilibrium). $(x^*, y^*)$ is a differential Nash equilibrium of $\{(f, g), \mathbb{R}^{n+m}\}$ if the following two conditions hold:
• $\nabla_x f(x^*, y^*) = 0$ and $\nabla_y g(x^*, y^*) = 0$.

• $\nabla^2_{xx} f(x^*, y^*) \succ 0$ and $\nabla^2_{yy} g(x^*, y^*) \prec 0$.
The conditions of DNE are slightly stronger than that of local Nash equilibria in that the second-order conditions are definite instead of semi-definite. In this paper, we focus on DNE, as they make up almost all local Nash equilibria in the mathematical sense, and are well-suited for the analysis of second-order algorithms.
2.2 ISSUES WITH GRADIENT-BASED ALGORITHMS
A natural strategy for agents to search for local Nash equilibria in a differentiable game is to use gradient-based algorithms. The simplest gradient-based algorithm is the gradient descent-ascent (GDA) (Ryu and Boyd, 2016; Zhang et al., 2021b) (Algorithm 1) or its variants (Zhang et al., 2021a; Korpelevich, 1976; Mokhtari et al., 2020).
Algorithm 1 Gradient descent-ascent (GDA)
Require: Number of iterations $T$, learning rate $\eta$
1: for $t = 1, \ldots, T$ do
2:   $x_{t+1} = x_t - \eta\nabla_x f(x_t, y_t)$
3:   $y_{t+1} = y_t + \eta\nabla_y g(x_t, y_t)$
4: end for
Let $z = \begin{bmatrix} x \\ y \end{bmatrix}$ and $\eta > 0$ be the learning rate; a gradient-based update algorithm can be written as

$$z_{t+1} = z_t - \eta\,\omega(z_t). \tag{1}$$

¹Note that a local Nash equilibrium is not guaranteed to exist in nonconvex-nonconcave games (Jin et al., 2020, Proposition 6), although the (non-)existence of local NE is out of the scope of this paper.
The Jacobian of $\omega(z)$ is defined as $J(z) := \frac{\partial\omega(z)}{\partial z}$. In the case of GDA, we have:

$$\omega_{\mathrm{GDA}}(z) = \begin{bmatrix} \nabla_x f(x, y) \\ -\nabla_y g(x, y) \end{bmatrix}, \qquad J_{\mathrm{GDA}} = \begin{bmatrix} \nabla^2_{xx} f & \nabla^2_{xy} f \\ -\nabla^2_{yx} g & -\nabla^2_{yy} g \end{bmatrix}.$$
Using the Jacobian matrix, we characterize the fixed points of equation 1. Definition 2.2 ((Strictly) stable fixed point). $z^*$ is a stable fixed point of the discrete-time dynamical system in equation 1 if

$$\omega(z^*) = 0 \quad \text{and} \quad \rho(I - \eta J(z^*)) \le 1,$$

where $\rho(\cdot)$ denotes the spectral radius of a matrix. If we additionally have $\rho(I - \eta J(z^*)) < 1$, then $z^*$ is a strictly stable fixed point.
Strictly stable fixed points are important for analysis, as they are locally asymptotically convergent (Galor, 2007), i.e. there exists an open set $S_z$ such that $z^* \in S_z$ and $\lim_{t\to\infty} z_t = z^*$ for all $z_0 \in S_z$. A closely related concept is the locally asymptotically stable equilibrium (LASE) for the continuous-time system $\dot{z} = -\omega(z)$ (Ratliff et al., 2013). Definition 2.3 (Locally asymptotically stable equilibrium (LASE)). $z^*$ is a locally asymptotically stable equilibrium of the continuous-time dynamics $\dot{z} = -\omega(z)$ if

$$\omega(z^*) = 0 \quad \text{and} \quad \mathrm{Re}(\lambda) > 0 \ \ \forall\, \lambda \in \mathrm{spec}(J(z^*)),$$

where $\mathrm{Re}(\cdot)$ denotes the real part of a complex number, and $\mathrm{spec}(\cdot)$ returns the spectrum (i.e. the set of eigenvalues) of a matrix.

Note that as $\eta \to 0$, strictly stable fixed points of equation 1 are equivalent to LASE of $\dot{z} = -\omega(z)$. In this paper, we prove convergence results in discrete time (using Definition 2.2), but we often provide intuition using continuous-time concepts such as LASE.
Unfortunately, GDA is not guaranteed to converge to DNE, nor are DNE necessarily (strictly) stable fixed points of the GDA dynamics. Even in the special case of zero-sum games ($g = f$), GDA dynamics can still have stable fixed points that are not DNE (Daskalakis and Panageas, 2018; Mazumdar et al., 2020b). The relationship is shown in the Venn diagrams in Figure 1 (to eliminate the effect of $\eta$, we show the illustration in the continuous-time limit $\dot{z} = -\omega_{\mathrm{GDA}}(z)$). In Figure 2, we demonstrate the failure modes of GDA in zero-sum games. In 2a, GDA converges to a spurious strictly stable fixed point which is not a DNE. In 2b, GDA fails to converge to the unique DNE (Hsieh et al., 2020). Instead, it goes into a limit cycle, due to the strong rotation introduced by large complex parts in its Jacobian eigenvalues. We stress that these pathologies are not limited to GDA, but are common to many other first-order algorithms (Wang et al., 2019).
3 DOUBLE FOLLOW-THE-RIDGE
We propose double Follow-the-Ridge (double-FTR), an update rule for general-sum differential games that locally converges to and only to differential Nash equilibria. The double-FTR update is shown in Algorithm 2 (the arguments xt, yt of f and g are dropped to avoid notational clutter).
Algorithm 2 Double Follow-the-Ridge
Require: Learning rates $\eta_x$ and $\eta_y$; number of iterations $T$.
1: for $t = 1, \ldots, T$ do
2:   $x_{t+1} \leftarrow x_t - \eta_x\nabla_x f - \eta_y(\nabla^2_{xx}f)^{-1}\nabla^2_{xy}g\,\nabla_y g$
3:   $y_{t+1} \leftarrow y_t + \eta_y\nabla_y g + \eta_x(\nabla^2_{yy}g)^{-1}\nabla^2_{yx}f\,\nabla_x f$
4: end for
Let $z = \begin{bmatrix} x \\ y \end{bmatrix}$, $\eta = \eta_x$ and $c = \frac{\eta_y}{\eta_x}$; we can then express Algorithm 2 in vectorized form (equation 2). To simplify the notation, we drop the subscript $t$ for $f$ and $g$.

$$z_{t+1} = z_t - \eta\,\omega_{\mathrm{FTR}}(z_t), \qquad \omega_{\mathrm{FTR}}(z_t) = \begin{bmatrix} I & -(\nabla^2_{xx}f)^{-1}\nabla^2_{xy}g \\ -(\nabla^2_{yy}g)^{-1}\nabla^2_{yx}f & I \end{bmatrix}\begin{bmatrix} \nabla_x f \\ -c\,\nabla_y g \end{bmatrix} \tag{2}$$
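As an illustration of Algorithm 2, the following NumPy sketch runs double-FTR on a hypothetical 1-D general-sum quadratic game of our own choosing (it is not an example from the paper), with all gradients and Hessian blocks written out analytically.

```python
import numpy as np

# Player 1 minimizes f(x, y) = x^2 + 2xy; player 2 maximizes g(x, y) = -y^2 + xy.
# The unique differential Nash equilibrium of this toy game is (0, 0).
gx_f  = lambda x, y: 2 * x + 2 * y   # grad_x f
gy_g  = lambda x, y: x - 2 * y       # grad_y g
Hxx_f = 2.0                          # Hessian_xx f (positive definite)
Hyy_g = -2.0                         # Hessian_yy g (negative definite)
Hxy_g = 1.0                          # cross term of g
Hyx_f = 2.0                          # cross term of f

def double_ftr_step(x, y, eta_x=0.05, eta_y=0.05):
    # Line 2 of Algorithm 2: gradient step plus ridge correction for player 1.
    x_new = x - eta_x * gx_f(x, y) - eta_y * (1.0 / Hxx_f) * Hxy_g * gy_g(x, y)
    # Line 3 of Algorithm 2: the symmetric update for the maximizing player.
    y_new = y + eta_y * gy_g(x, y) + eta_x * (1.0 / Hyy_g) * Hyx_f * gx_f(x, y)
    return x_new, y_new

x, y = 1.0, -0.5
for _ in range(500):
    x, y = double_ftr_step(x, y)
print(x, y)  # the iterates contract toward the DNE (0, 0)
```

In this toy game $(0, 0)$ satisfies both conditions of Definition 2.1, and the linearized update map has spectral radius below one, so the iterates contract toward it.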
3.1 LOCAL CONVERGENCE OF DOUBLE-FTR
In this section, we give our main theoretical result. First, we introduce an additional assumption. Assumption 2. At fixed points of equation 2, $J_{\mathrm{GDA}}(z)$ has full rank.
Assumption 2 ensures that in double-FTR, the additional terms in the update do not exactly cancel out the GDA terms. In practice, $\ell_2$ regularization might need to be added to the objective functions. Note that a similar assumption is introduced in Mazumdar et al. (2019), Theorem 4.
Our main theoretical result is stated below. Theorem 1. Under Assumptions 1 and 2 and with an appropriate choice of learning rate $\eta$, $z^*$ is a strictly stable fixed point of the double-FTR update (equation 2) if and only if it is a differential Nash equilibrium of the game $\{(f, g), \mathbb{R}^{n+m}\}$. Furthermore, at fixed points of equation 2, all eigenvalues of the Jacobian $J_{\mathrm{FTR}} := \frac{\partial\omega_{\mathrm{FTR}}}{\partial z}$ are real.
Intuitively, the first part of the theorem classifies the strictly stable fixed points of double-FTR, and the second part ensures that there is no rotation caused by complex eigenvalues in the neighbourhood of the DNEs. We defer the proof of Theorem 1 to Appendix A. Corollary 1 (Local convergence). Let $z^*$ be a DNE of the game $\{(f, g), \mathbb{R}^{n+m}\}$. Under Assumptions 1 and 2 and with an appropriate choice of learning rate $\eta$, there exists an open set $S_z \subset \mathbb{R}^{n+m}$ with $z^* \in S_z$, such that when following equation 2, $\lim_{t\to\infty} z_t = z^*$ for all $z_0 \in S_z$.
Proof. The proof follows naturally by combining Theorem 1 with the local convergence of strictly stable fixed points (Galor (2007), Proposition 1.9).
To the best of our knowledge, double-FTR is the first algorithm with such a local convergence result for general-sum games.
3.2 GENERAL PRECONDITIONERS
In the following remark, we show that double-FTR can be generalized to a whole family of algorithms. Remark 1. Theorem 1 applies to a more general version of the double-FTR algorithm. In particular, we can generalize equation 2 to allow a broader class of “preconditioners”:

$$z_{t+1} = z_t - \eta\,\tilde{\omega}_{\mathrm{FTR}}(z_t), \qquad \tilde{\omega}_{\mathrm{FTR}}(z) = \begin{bmatrix} P_x & 0 \\ 0 & P_y \end{bmatrix} J_{\mathrm{GDA}}^\top(z_t)\begin{bmatrix} \nabla_x f \\ -c\,\nabla_y g \end{bmatrix}, \tag{3}$$

where $P_x$, $P_y$ are functions of $x$, $y$ respectively, which satisfy $P_x \succ 0 \iff \nabla^2_{xx}f \succ 0$ and $P_y \succ 0 \iff \nabla^2_{yy}g \prec 0$.

Equation 2 corresponds to the special case of $P_x = (\nabla^2_{xx}f)^{-1}$, $P_y = -(\nabla^2_{yy}g)^{-1}$. The proof of Theorem 1 directly applies to the case of general preconditioners in Remark 1.
Remark 1 provides intuition on the convergence properties of double-FTR. Without the preconditioners $P_x$ and $P_y$, double-FTR reduces to Hamiltonian gradient descent (Mescheder et al., 2017; Balduzzi et al., 2018; Loizou et al., 2020; Abernethy et al., 2021), which is not guaranteed to converge only to local Nash equilibria. It is the introduction of the preconditioner that enables strictly stable fixed points to satisfy the second-order condition of DNE.
Remark 1 also sheds light on how to derive a more practical algorithm. Naively implementing Algorithm 2 might cause instability when $\nabla^2_{xx}f$ and $\nabla^2_{yy}g$ are near singular. In practice, we use $(\nabla^2_{xx}f\,\nabla^2_{xx}f + \epsilon I)^{-1}\nabla^2_{xx}f$ instead of $(\nabla^2_{xx}f)^{-1}$ in Algorithm 2 (where a small $\epsilon > 0$ is the damping parameter). Note that this also allows us to drop the assumption on the invertibility of $\nabla^2_{xx}f$ and $\nabla^2_{yy}g$ in Assumption 1.
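A sketch of this damped surrogate, with $\epsilon$ denoting our reading of the (garbled) damping symbol in the text:

```python
import numpy as np

def damped_inv_apply(H, v, eps=1e-3):
    # Computes (H H + eps I)^{-1} H v, the damped surrogate for H^{-1} v used
    # when the Hessian block H is near singular; eps > 0 is the damping.
    return np.linalg.solve(H @ H + eps * np.eye(H.shape[0]), H @ v)
```

Since $H(HH + \epsilon I)^{-1}$ shares eigenvectors with $H$ and maps each eigenvalue $\mu$ to $\mu/(\mu^2 + \epsilon)$, it preserves the sign pattern required of the preconditioners in Remark 1 whenever $H$ is definite.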
4 CONNECTION WITH OTHER ALGORITHMS
Mazumdar et al. (2019) proposed local symplectic surgery (LSS) – a gradient-based algorithm whose LASE are exactly local Nash equilibria in two-player zero-sum games. LSS avoids oscillatory behaviour at local Nash equilibria (similar to double-FTR). Compared to LSS, double-FTR appears to have a simpler form and enables a broader family of algorithms with such local convergence result in general-sum games.
The Follow-the-Ridge (FTR) algorithm (Wang et al., 2019) is closely related to our proposed doubleFTR. FTR was proposed for two-player sequential games and is guaranteed to converge to and only to local minimax. FTR applies a gradient correction term on the follower in a sequential game, so that the agents approximately follow a ridge in the landscape of the objective function. The double-FTR can be viewed as a counterpart of FTR for simultaneous games. The update rule of double-FTR resembles that of FTR, with the gradient modification term applied on both players.
Another related algorithm is the Hamiltonian gradient descent (HGD) (Mescheder et al., 2017; Balduzzi et al., 2018; Loizou et al., 2020; Abernethy et al., 2021). HGD performs gradient-descent on the Hamiltonian, or the squared norm of the gradient. HGD is guaranteed to converge, as it is essentially a minimization problem. However, in general it is not guaranteed to converge only to local Nash equilibria. Interestingly, our double-FTR can be viewed as a preconditioned HGD.
5 RELATED WORK
Mazumdar et al. (2020b) introduced a general framework for competitive gradient-based learning. They characterized local Nash equilibria in terms of the critical points of the gradient algorithms.
They showed the lack of convergence of the gradient algorithm in games, which motivated the development of the double-FTR algorithm.
Much work has focused on improving the dynamics in finding stable fixed points, which is crucial in applications such as GANs, where oscillation caused by eigenvalues with zero real parts of large imaginary parts in the gradient Jacobian can lead to training instability. Mescheder et al. (2017) proposes Consensus Optimization, which encourages agreement between the two players by introducing a regularization term in the objectives of both players. The regularization term results in a more negative real-part for the eigenvalues of the gradient Jacobian, therefore reduces oscillation and allows larger learning rates. Balduzzi et al. (2018); Gemp and Mahadevan (2018) proposes Symplectic Gradient Adjustment (SGA), which decomposes the gradient Jacobian into symmetric (potential) and asymmetric (Hamiltonian) parts and adds a gradient adjustment term for rapid convergence to stable fixed points. Schäfer and Anandkumar (2019) proposes Competitive Gradient Descent (CGD), whose update is given by the Nash equilibrium of a regularized bilinear approximation of the original game. Compared to other methods, CGD has the advantage of not needing to adapt step size when the interaction strength changes between players. Many other methods have been proposed with different strategies for predicting other agents’ moves, such as Learning with Opponent Learning Awareness (LOLA) (Foerster et al., 2016) and optimistic gradient descent-ascent (OGDA) (Popov, 1980; Rakhlin and Sridharan, 2013; Daskalakis et al., 2018; Mertikopoulos et al., 2018). However, none of these existing methods address the problem of spurious (i.e. non-Nash) stable fixed points.
6 EXPERIMENTS
We conduct simple experiments to demonstrate the implications of our theoretical results. In Section 6.1, we show that the double-FTR algorithm empirically converges to and only to differential Nash equilibria, as predicted by Theorem 1. In Section 6.2, we show that double-FTR is able to converge to local Nash equilibria that naive gradient-play avoids in general-sum linear quadratic games. In Section 6.3, we demonstrate the practical implications of another property of double-FTR: the eigenvalues of $J_{\mathrm{FTR}}$ at fixed points are real.
6.1 2-D TOY EXAMPLE
We consider the zero-sum game $\{(f, f), \mathbb{R}^2\}$ with the following 2-D function (same as in Mazumdar et al. (2019)):

$$f(x, y) = e^{-0.01(x^2+y^2)}\left((0.3x^2 + y)^2 + (0.5y^2 + x)^2\right).$$
This function has several strictly stable fixed points for the GDA dynamics, among which some are DNE and some are not. As shown in Figure 3, while GDA may converge to fixed points that are not local Nash equilibria, double-FTR avoids such spurious fixed points. Also, in the neighbourhood of local Nash equilibria, GDA exhibits oscillatory behaviour due to complex eigenvalues of the Jacobian matrix. In contrast, double-FTR does not oscillate near local Nash equilibria. For reference, we also show the trajectories of Local Symplectic Surgery (LSS). In this experiment, LSS has similar convergence properties: it avoids spurious fixed points and does not oscillate near local Nash equilibria.
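To make the failure mode concrete, the sketch below runs plain GDA on this $f$ with finite-difference gradients; the initialization, step size, and difference scheme are our own illustrative choices, and a full double-FTR run would additionally require the Hessian blocks.

```python
import numpy as np

def f(x, y):
    return np.exp(-0.01 * (x**2 + y**2)) * ((0.3 * x**2 + y)**2 + (0.5 * y**2 + x)**2)

def num_grad(fn, x, y, h=1e-5):
    # Central finite differences in each coordinate.
    dfx = (fn(x + h, y) - fn(x - h, y)) / (2 * h)
    dfy = (fn(x, y + h) - fn(x, y - h)) / (2 * h)
    return dfx, dfy

x, y, lr = 2.0, -1.0, 0.01
for _ in range(2000):
    dfx, dfy = num_grad(f, x, y)
    x, y = x - lr * dfx, y + lr * dfy   # descent on f in x, ascent in y
print(x, y)  # may land on a spurious (non-Nash) fixed point or cycle (cf. Figure 3)
```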
6.2 GENERAL-SUM LINEAR QUADRATIC GAME
The linear quadratic (LQ) game is a classic problem in multi-agent learning. It is an extension of the famous linear quadratic regulator (LQR) problem of optimal control to the multi-agent setting. Just as LQR is a simple yet important benchmark problem for studying properties of reinforcement learning algorithms, the LQ game provides valuable insights into multi-agent RL algorithms (Fazel et al., 2018; Zhang et al., 2019a).
Consider the discrete-time linear dynamical system, where z 2 Rdz is the state, and two players provide control inputs u 2 Rdu and v 2 Rdv respectively.
$$z_{t+1} = Az_t + B_u u_t + B_v v_t, \qquad z_0 \sim p(z_0)$$
Each player adopts a linear state-feedback policy: $u_t = K_u z_t$, $v_t = K_v z_t$, where the parameters $K_u \in \mathbb{R}^{d_u \times d_z}$, $K_v \in \mathbb{R}^{d_v \times d_z}$ are to be determined by optimization. In a general-sum LQ game, each player aims to find the policy parameters $K$ that minimize their individual quadratic loss function $f$ (shown in equation 4; $f_v(K_u, K_v)$ is defined analogously using $Q_v$ and $R_v$).
$$f_u(K_u, K_v) = \mathbb{E}_{z_0\sim p(z_0)}\sum_{t=0}^{\infty} z_t^\top Q_u z_t + u_t^\top R_u u_t \qquad (Q_u \succ 0,\ R_u \succ 0) \tag{4}$$
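As an illustration, the expected cost in Eq. (4) can be approximated by truncated rollouts of the linear dynamics. The sketch below is ours; it assumes the feedback convention $u_t = K_u z_t$ from the text and a horizon $T$ long enough that the truncation error is negligible for stabilizing gains.

```python
import numpy as np

def lq_cost_u(Ku, Kv, A, Bu, Bv, Qu, Ru, z0_list, T=200):
    # Approximate f_u(K_u, K_v) by averaging finite-horizon rollout costs
    # over the initial states z0 ~ p(z0).
    costs = []
    for z0 in z0_list:
        z, c = np.asarray(z0, dtype=float), 0.0
        for _ in range(T):
            u, v = Ku @ z, Kv @ z              # linear state feedback
            c += z @ Qu @ z + u @ (Ru * u)     # Ru is scalar here (0.01)
            z = A @ z + Bu @ u + Bv @ v        # linear dynamics step
        costs.append(c)
    return np.mean(costs)
```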
Despite their simplicity, LQ games are challenging to optimize, because even though the loss functions are quadratic in the states and actions, they are not convex with respect to the player parameters Ku and Kv. Importantly, Mazumdar et al. (2020a) show that in general sum LQ games, using naive gradient-play almost surely avoids some Nash equilibria.
We demonstrate that in general-sum LQ games, double-FTR is able to find Nash equilibria that are avoided by gradient-play. We use a setting mentioned in Mazumdar et al. (2020a), where $d_z = 2$, $d_u = d_v = 1$, $R_u = R_v = 0.01$, and

$$\mathbf{A} = \begin{bmatrix} 0.511 & 0.064 \\ 0.533 & 0.993 \end{bmatrix}, \quad \mathbf{B}_u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \mathbf{B}_v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \mathbf{Q}_u = \begin{bmatrix} 0.01 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{Q}_v = \begin{bmatrix} 1 & 0 \\ 0 & 0.147 \end{bmatrix}.$$

The initial state $z_0$ is set to $[1 \;\; 1]^\top$ or $[1 \;\; 1.1]^\top$ with equal probability.
Figures 4 and 5 show an instance where double-FTR is able to converge to a local Nash equilibrium, but gradient-play fails to. For both algorithms, we use the same initial policy parameters $K_u$ and $K_v$. Figure 4 visualizes the loss landscapes of $f_u(K_u, K_v)$ and $f_v(K_u, K_v)$ when optimized by double-FTR. It confirms that the solution double-FTR converges to is indeed a Nash equilibrium (the second-order condition in Definition 2.1). Figure 5a visualizes the local vector field Jacobian (i.e. $J_{\mathrm{GDA}}$) and shows that it contains negative eigenvalues, which makes the equilibrium a saddle point for gradient-play optimization. Indeed, gradient-play (shown in Figure 5b) avoids this Nash equilibrium. Instead, it eventually finds another Nash equilibrium that is a stable fixed point.
6.3 PARAMETERIZED BILINEAR GAME
We consider another zero-sum game, the stochastic parameterized bilinear game, as in Prajapat et al. (2021). We use this experiment to demonstrate that double-FTR is also beneficial for stochastic games, and that it does not exhibit oscillatory behaviour, owing to its real Jacobian eigenvalues at fixed points.

$$\min_{\mu_x,\sigma_x} r(x, y), \qquad \min_{\mu_y,\sigma_y} -r(x, y), \qquad \text{where } x \sim \mathcal{N}(\mu_x, \sigma_x^2),\ y \sim \mathcal{N}(\mu_y, \sigma_y^2),\ r(x, y) = xy.$$

The unique Nash equilibrium with respect to $(x, y)$ is $(0, 0)$. However, the learnable parameters are the mean and standard deviation of the distributions from which $x$ and $y$ are drawn. We denote the learnable parameters for $x$ and $y$ as $\theta = (\mu_x, \sigma_x)$ and $\phi = (\mu_y, \sigma_y)$ respectively. At each time step, we obtain an unbiased estimate of the gradient using REINFORCE over a mini-batch of size $B$:

$$\tilde{\nabla}_\theta r(\theta, \phi) = \frac{1}{B}\sum_{i=1}^{B}\nabla_\theta\log\mathcal{N}(x_i;\theta)\,r(x_i, y_i), \qquad \tilde{\nabla}_\phi r(\theta, \phi) \text{ computed analogously.}$$
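A minimal sketch of this estimator for the $x$-player (the $y$-player's is symmetric); the parameterization $\theta = (\mu_x, \sigma_x)$, $\phi = (\mu_y, \sigma_y)$ follows the text, and the closed-form score function of the Gaussian is used.

```python
import numpy as np

def reinforce_grad_theta(theta, phi, B=128):
    mu_x, sig_x = theta
    mu_y, sig_y = phi
    x = np.random.normal(mu_x, sig_x, size=B)
    y = np.random.normal(mu_y, sig_y, size=B)
    r = x * y                                   # payoff r(x, y) = x * y
    # Score function of log N(x; mu, sigma) w.r.t. mu and sigma.
    g_mu = (x - mu_x) / sig_x**2
    g_sig = ((x - mu_x)**2 - sig_x**2) / sig_x**3
    # Mini-batch REINFORCE estimate of grad_theta r(theta, phi).
    return np.array([np.mean(g_mu * r), np.mean(g_sig * r)])
```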
As is often the case, GDA has oscillatory behaviour due to the complex eigenvalues of its Jacobian at fixed points. In this stochastic setting, the oscillation prevents GDA from converging (Figure 6). In contrast, double-FTR does not have rotational behaviour at fixed points and converges to the unique Nash equilibrium $(x, y) = (0, 0)$.
6.4 GENERATIVE ADVERSARIAL NETWORKS
The Generative Adversarial Network (GAN) (Goodfellow et al., 2014) is a popular deep learning application for two-player games. The goal is to find the Nash equilibrium where the generator perfectly matches the target distribution, and the discriminator is completely fooled by the generator.
In this experiment, we use the GAN framework to learn mixtures of Gaussians (MoG). We use the original saturating loss function. Both the generator and the discriminator are multi-layer perceptrons with 2 hidden layers and 64 hidden units in each layer. With neural networks, directly forming the Hessian would be computationally inefficient or infeasible. Instead, we use the conjugate gradient method to approximate the Hessian-inverse products. Details of the experiments can be found in Appendix C.
As shown in Figure 7 and 8, we apply GDA and double-FTR to learn MoG in 1D and 2D. In both cases, GDA gets stuck at a spurious equilibrium and suffers from mode collapse. In contrast, double-FTR recovers all the modes, and the generated distribution closely matches the target.
7 CONCLUSION
We propose double Follow-the-Ridge (double-FTR), a gradient-based algorithm for finding local Nash equilibria in differentiable games. We prove that under mild assumptions, double-FTR locally converges to and only to differential Nash equilibria in the general-sum games, and avoids oscillation in the neighbourhood of fixed points. Furthermore, we remark that by varying the preconditioner, double-FTR leads to a broader family of algorithms that share the same convergence guarantee. Finally, we empirically verify the effectiveness of double-FTR in finding and only finding local Nash equilibria across a broad range of problems.
8 REPRODUCIBILITY STATEMENT
For empirical results, we describe the experiment settings in detail in Appendix C. We also provide code for all experiments in the supplementary material. For the theoretical results, proofs are included in the appendix. | 1. What is the main contribution of the paper regarding local convergence for finding second-order Nash equilibria?
2. What are the strengths and weaknesses of the paper's approach to solving this problem?
3. Do you have any concerns about the paper's assumptions and their impact on the results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any additional comments or suggestions for improving the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors consider the problem of designing methods with local convergence for finding second order Nash equilibria in nonconvex two-player general-sum games. Their main result is that, under some assumptions, a variation of the Follow-the-Ridge algorithm is locally asymptotically stable around any second-order Nash equilibria of the game and is not locally asymptotically stable in any other point.
Strengths And Weaknesses
Strengths
The problem that the authors consider is very important and difficult, and hence even the local convergence results in this area have significant theoretical importance.
The paper is well-written and the algorithms and techniques well-explained.
The dynamics that the authors provide are simple and intuitive and can potentially have significant practical applications.
Weaknesses - Comments
General comment: I believe that the results of this paper are indeed interesting but I think their current motivation is wrong. I will justify this with the comments that I present below.
If we assume that there exists a Differential Nash Equilibrium according to Definition 2.1 and we also assume Assumption 1, then, unless I am missing something, f is convex with respect to x and g is concave with respect to y everywhere. Proof sketch: Let $z^* = (x^*, y^*)$ be a Differential Nash Equilibrium according to Definition 2.1. Assume also that there exists a point $z_0 = (x_0, y_0)$ such that f is not convex with respect to x at $z_0$. This means that $\nabla^2_{xx} f(z_0)$ has a negative eigenvalue. Consider now the line segment $L$ that connects $z^*$ and $z_0$. We know that the eigenvalues of $\nabla^2_{xx} f(z^*)$ are all positive and that $\nabla^2_{xx} f(z)$ is a continuous function of $z$ according to Assumption 1. It is also well-known that the eigenvalues are continuous functions of the matrices. Hence, the minimum eigenvalue of $\nabla^2_{xx} f(z)$ goes from positive at $z = z^*$ to negative at $z = z_0$ in a continuous way. Since $\nabla^2_{xx} f(z)$ is symmetric, all the eigenvalues are real, and hence there should be a point $\bar{z} \in L$ such that the minimum eigenvalue of $\nabla^2_{xx} f(\bar{z})$ is zero, which means that $\nabla^2_{xx} f(\bar{z})$ is not invertible, which violates Assumption 1. Therefore, f is convex with respect to x and similarly g is concave with respect to y. Am I missing something here? Given the above, it is not an achievement of the proposed algorithm that it is stable only around stationary points that are second-order Nash, because only such stationary points exist. So the main motivation of the paper is not correct.
The achievement of the proposed algorithm though is that it is stable around all the simultaneous stationary points of f with respect to x and g with respect to y. This to my knowledge is a non-trivial property even when we assume that f is convex with respect to x and g is concave with respect to y. The reason that this is interesting is that this is applied in a general-sum setting. Finding such simultaneous stationary points is as difficult as finding fixed points of any continuous map! Proof sketch: Let $H : \mathbb{R}^n \to \mathbb{R}^n$ be a continuous map and let $f(x, y) = |x - H(y)|^2$ and $g(x, y) = -|x - y|^2$. Then it is easy to see that all the simultaneous stationary points of f with respect to x and g with respect to y correspond exactly to the fixed points of $H$. Finding fixed points of continuous maps is more general than finding solutions of general non-monotone variational inequalities, which is more general than multi-agent general-sum concave games, which have a ton of applications in game theory and economics. So the fact that the proposed algorithm is locally asymptotically stable only on those points is an interesting property. Nevertheless, it should be compared to the known results from the variational inequalities community or the computation of fixed points community, which is missing from the current version of the paper.
Clarity, Quality, Novelty And Reproducibility
See above. |
ICLR | Title
Finding and only finding local Nash equilibria by both pretending to be a follower
Abstract
Finding Nash equilibria in two-player differentiable games is a classical problem in game theory with important relevance in machine learning. We propose double Follow-the-Ridge (double-FTR), an algorithm that locally converges to and only to local Nash equilibria in general-sum two-player differentiable games. To our knowledge, double-FTR is the first algorithm with such guarantees for general-sum games. Furthermore, we show that by varying its preconditioner, double-FTR leads to a broader family of algorithms with the same convergence guarantee. In addition, double-FTR avoids oscillation near equilibria due to the real-eigenvalues of its Jacobian at fixed points. Empirically, we validate the double-FTR algorithm on a range of simple zero-sum and general sum games, as well as simple Generative Adversarial Network (GAN) tasks.
N/A
Finding Nash equilibria in two-player differentiable games is a classical problem in game theory with important relevance in machine learning. We propose double Follow-the-Ridge (double-FTR), an algorithm that locally converges to and only to local Nash equilibria in general-sum two-player differentiable games. To our knowledge, double-FTR is the first algorithm with such guarantees for general-sum games. Furthermore, we show that by varying its preconditioner, double-FTR leads to a broader family of algorithms with the same convergence guarantee. In addition, double-FTR avoids oscillation near equilibria due to the real-eigenvalues of its Jacobian at fixed points. Empirically, we validate the double-FTR algorithm on a range of simple zero-sum and general sum games, as well as simple Generative Adversarial Network (GAN) tasks.
1 INTRODUCTION
Much of the recent success in deep learning can be attributed to the effectiveness of gradient-based optimization. It is well-known that for a minimization problem, with appropriate choice of learning rates, gradient descent has convergence guarantee to local minima (Lee et al., 2016; 2019). Based on this foundational result, an array of accelerated and higher-order methods have since been proposed and widely applied in training neural networks (Duchi et al., 2011; Kingma and Ba, 2014; Reddi et al., 2018; Zhang et al., 2019b).
However, once we leave the realm of minimization problems and consider the multi-agent setting, the optimization landscape becomes much more complicated. Multi-agent optimization problems arise in diverse fields such as robotics, economics and machine learning (Foerster et al., 2016; Von Neumann and Morgenstern, 2007; Goodfellow et al., 2014; Ben-Tal and Nemirovski, 2002; Gemp et al., 2020; Anil et al., 2021).
A classical abstraction that is especially relevant for machine learning is two-player differentiable games, where the objective is to find global or local Nash equilibria. The equivalent of gradient descent in such a game would be each agent applying gradient descent to minimize their own objective function. However, in stark contrast with gradient descent in solving minimization problems, this gradient-descent-style algorithm may converge to spurious critical points that are not local Nash equilibria, and in the general-sum game case, local Nash equilibria might not even be stable critical points for this algorithm (Mazumdar et al., 2020b)!
These negative results have driven a surge of recent interest in developing other gradient-based algorithms for finding Nash equilibria in differentiable games. Among them is Mazumdar et al. (2019), who proposed an update algorithm whose attracting critical points are only local Nash equilibria in the special case of zero-sum games. However, to the best of our knowledge, such guarantees have not been extended to general-sum games.
We propose double Follow-the-Ridge (double-FTR), a gradient-based algorithm for general-sum differentiable games that locally converges to and only to differential Nash equilibria. Double-FTR is closely related to the Follow-the-Ridge (FTR) algorithm for Stackelberg games (Wang et al., 2019), which converges to and only to local Stackelberg equilibria (Fiez et al., 2019). Double-FTR can be viewed as its counterpart for simultaneous games, where each player adopts the “follower” strategy in FTR.
The rest of this paper is organized as follows. In Section 2, we give background on two-player differentiable games and equilibrium concepts. We also explain the issues with using gradientdescent-style algorithm on such games. In Section 3, we present the double-FTR algorithm and prove its local convergence to and only to differential Nash equilibria. We also identify a more general class of algorithms that share these properties. We discuss recent works directly relevant to double-FTR in Section 4 and other related work in Section 5. In Section 6, we show empirical evidence of double-FTR’s convergence to and only to local Nash equilibria.
2 BACKGROUND
2.1 TWO-PLAYER DIFFERENTIABLE GAMES AND EQUILIBRIUM CONCEPTS
In a general-sum two-player differentiable game, player 1 aims to minimize f : Rn+m ! R with respect to x 2 Rn, whereas player 2 aims to maximize g : Rn+m ! R with respect to y 2 Rm. Following the notation in Mazumdar et al. (2019), we denote such the game as {(f, g),Rn+m}. We also make the following assumption on the twice-differentiability of f and g. Assumption 1. 8 x 2 Rn,y 2 Rm, f and g are twice-differentiable, and the second derivatives are continuous. Also, r2xxf and r2yyg are invertible.
For two rational, non-cooperative players, their optimal outcome is to achieve a local Nash equilibrium (Ratliff et al., 2013). A point (x⇤,y⇤) is a local Nash equilibrium1 of {(f, g),Rn+m} if there exists open sets Sx ⇢ Rn, Sy ⇢ Rm such that x⇤ 2 Sx, y⇤ 2 Sy , and f(x⇤,y⇤) f(x,y⇤), g(x⇤,y⇤) g(x⇤,y), 8x 2 Sx, 8y 2 Sy. A closely related notion of equilibrium is the differential Nash equilibrium (DNE) (Ratliff et al., 2013), which satisfies a second-order sufficient condition for local Nash equilibrium. Definition 2.1 (Differential Nash equilibrium). (x⇤,y⇤) is a differential Nash equilibrium of {(f, g),Rn+m} if the following two conditions hold:
• rxf(x⇤,y⇤) = 0 and ryg(x⇤,y⇤) = 0.
• r2xxf(x⇤,y⇤) 0 and r2yyg(x⇤,y⇤) 0.
The conditions of DNE are slightly stronger than that of local Nash equilibria in that the second-order conditions are definite instead of semi-definite. In this paper, we focus on DNE, as they make up almost all local Nash equilibria in the mathematical sense, and are well-suited for the analysis of second-order algorithms.
2.2 ISSUES WITH GRADIENT-BASED ALGORITHMS
A natural strategy for agents to search for local Nash equilibria in a differentiable game is to use gradient-based algorithms. The simplest gradient-based algorithm is the gradient descent-ascent (GDA) (Ryu and Boyd, 2016; Zhang et al., 2021b) (Algorithm 1) or its variants (Zhang et al., 2021a; Korpelevich, 1976; Mokhtari et al., 2020).
Algorithm 1 Gradient descent-ascent (GDA) Require: Number of iterations T , learning rate
1: for t = 1, . . . , T do 2: xt+1 = xt rxf(xt,yt) 3: yt+1 = yt + ryg(xt,yt) 4: end for
Let z = x y and > 0 be the learning rate, a gradient-based update algorithm can be written as:
zt+1 = zt !(zt). (1) 1Note that local Nash equilibrium is not guaranteed to exist in nonconvex-nonconcave games ((Jin et al.,
2020), Proposition 6), although the (non-)existence of local NE is out of the scope of this paper.
The Jacobian of !(z) is defined as J(z) := @!(z)@z . In the case of GDA, we have:
!GDA(z) = rxf(x,y) ryg(x,y) , JGDA = r2xxf r2xyf r2yxg r2yyg .
Using the Jacobian matrix, we characterize the fixed points of equation 1. Definition 2.2 ((Strictly) stable fixed point). z⇤ is a stable fixed point of the discrete-time dynamical system in equation 1 if
!(z⇤) = 0 and ⇢(I J(z⇤)) 1, where ⇢(·) denotes the spectral radius of a matrix. If we additionally have ⇢(I J(z⇤)) < 1, then z⇤ is a strictly stable fixed point.
Strictly stable fixed points are important for analysis, as they are locally asymptotically convergent (Galor, 2007), i.e. there exists an open set Sz such that z⇤ 2 Sz and limt!1 zt = z⇤ 8z0 2 Sz . A closely related concept is the locally asymptotically stable equilibrium (LASE) for the continuoustime system ż = !(z). (Ratliff et al., 2013). Definition 2.3 (Locally asymptotically stable equilibrium (LASE)). z⇤ is a locally asymptotically stable equilibrium of the continuous-time dynamics ż = !(z) if
!(z⇤) = 0 and Re( ) > 0 for 8 2 spec(J(z⇤)), where Re(·) denotes the real part of a complex number, and spec(·) returns the spectrum (i.e. the set of eigenvalues) of a matrix.
Note that when ! 0, strictly stable fixed points of equation 1 are equivalent to LASE of ż = !(z). In this paper, we prove convergence results in discrete-time (using Definition 2.2), but we often provide intuition using continuous-time concepts such as LASE.
Unfortunately, GDA is not guaranteed to converge to DNE, nor are DNE necessarily (strictly) stable fixed points of the GDA dynamics. Even in the special case of zero-sum games (g = f ), GDA dynamics can still have stable fixed points that are not DNE (Daskalakis and Panageas, 2018; Mazumdar et al., 2020b). The relationship is shown in the Venn diagrams in Figure 1 (to eliminate the effect of , we show illustration in the continuous-time limit ż = !GDA(z)). In Figure 2, we demonstrate the failure modes of GDA in zero-sum games. In 2a, GDA converges to a spurious strictly stable fixed point which is not DNE. In 2b, GDA fails to converge to the unique DNE (Hsieh et al., 2020). Instead, it goes into a limit cycle, due to the strong rotation introduced by large complex parts in its Jacobian eigenvalues. We stress that these pathologies are not limited to GDA, but common for many other first-order algorithms (Wang et al., 2019).
3 DOUBLE FOLLOW-THE-RIDGE
We propose double Follow-the-Ridge (double-FTR), an update rule for general-sum differentiable games that locally converges to and only to differential Nash equilibria. The double-FTR update is shown in Algorithm 2 (the arguments $x_t$, $y_t$ of $f$ and $g$ are dropped to avoid notational clutter).
Algorithm 2 Double Follow-the-Ridge
Require: Learning rates $\eta_x$ and $\eta_y$; number of iterations $T$.
1: for $t = 1, \ldots, T$ do
2:   $x_{t+1} \leftarrow x_t - \eta_x \nabla_x f - \eta_y (\nabla^2_{xx} f)^{-1} \nabla^2_{xy} g\, \nabla_y g$
3:   $y_{t+1} \leftarrow y_t + \eta_y \nabla_y g + \eta_x (\nabla^2_{yy} g)^{-1} \nabla^2_{yx} f\, \nabla_x f$
4: end for
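A direct transcription of Algorithm 2 might look as follows; this is a sketch rather than the authors' implementation, and the gradient and Hessian-block callables (our naming) are assumed to be supplied by the user, e.g. from automatic differentiation or closed form:

```python
import numpy as np

def double_ftr_step(x, y, gx_f, gy_g, hxx_f, hxy_g, hyx_f, hyy_g, lr_x, lr_y):
    """One double-FTR update (Algorithm 2); the h** callables return Hessian blocks."""
    dx = lr_x * gx_f(x, y) + lr_y * np.linalg.solve(hxx_f(x, y), hxy_g(x, y) @ gy_g(x, y))
    dy = lr_y * gy_g(x, y) + lr_x * np.linalg.solve(hyy_g(x, y), hyx_f(x, y) @ gx_f(x, y))
    return x - dx, y + dy

# Toy zero-sum quadratic game f(x, y) = x^2 + xy - y^2 (g = f), with a DNE at the origin.
x, y = np.array([1.0]), np.array([1.0])
for _ in range(200):
    x, y = double_ftr_step(
        x, y,
        gx_f=lambda x, y: 2 * x + y,          gy_g=lambda x, y: x - 2 * y,
        hxx_f=lambda x, y: np.array([[2.0]]), hxy_g=lambda x, y: np.array([[1.0]]),
        hyx_f=lambda x, y: np.array([[1.0]]), hyy_g=lambda x, y: np.array([[-2.0]]),
        lr_x=0.1, lr_y=0.1)
print(x, y)  # both coordinates contract toward the DNE at 0
```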
Let $z = \begin{pmatrix} x \\ y \end{pmatrix}$, $\eta = \eta_x$ and $c = \eta_y / \eta_x$; we can express Algorithm 2 in vectorized form (equation 2). To simplify the notation, we drop the subscript $t$ for $f$ and $g$.
$$z_{t+1} = z_t - \eta\,\omega_{\text{FTR}}(z_t), \qquad \omega_{\text{FTR}}(z_t) = \begin{pmatrix} I & -(\nabla^2_{xx} f)^{-1} \nabla^2_{xy} g \\ -(\nabla^2_{yy} g)^{-1} \nabla^2_{yx} f & I \end{pmatrix} \begin{pmatrix} \nabla_x f \\ -c\, \nabla_y g \end{pmatrix}. \tag{2}$$
3.1 LOCAL CONVERGENCE OF DOUBLE-FTR
In this section, we give our main theoretical result. First, we introduce an additional assumption.
Assumption 2. At fixed points of equation 2, $J_{\text{GDA}}(z)$ has full rank.
Assumption 2 ensures that in double-FTR, the additional terms in the update do not exactly cancel out the GDA terms. In practice, $\ell_2$ regularization might need to be added to the objective functions. Note that a similar assumption is introduced in Theorem 4 of Mazumdar et al. (2019).
Our main theoretical result is stated below.
Theorem 1. Under Assumptions 1 and 2 and with an appropriate choice of learning rate $\eta$, $z^*$ is a strictly stable fixed point of the double-FTR update (equation 2) if and only if it is a differential Nash equilibrium of the game $\{(f, g), \mathbb{R}^{n+m}\}$. Furthermore, at fixed points of equation 2, all eigenvalues of the Jacobian $J_{\text{FTR}} := \frac{\partial \omega_{\text{FTR}}}{\partial z}$ are real.
Intuitively, the first part of the theorem classifies the strictly stable fixed points of double-FTR, and the second part ensures that there is no rotation caused by complex eigenvalues in the neighbourhood of the DNEs. We defer the proof of Theorem 1 to Appendix A.
Corollary 1 (Local convergence). Let $z^*$ be a DNE of the game $\{(f, g), \mathbb{R}^{n+m}\}$. Under Assumptions 1 and 2 and with an appropriate choice of learning rate $\eta$, there exists an open set $S_z \subset \mathbb{R}^{n+m}$ with $z^* \in S_z$, such that when following equation 2, $\lim_{t \to \infty} z_t = z^*$ for all $z_0 \in S_z$.
Proof. The proof follows naturally by combining Theorem 1 with the local convergence of strictly stable fixed points (Galor (2007), Proposition 1.9).
To the best of our knowledge, double-FTR is the first algorithm with such a local convergence result for general-sum games.
3.2 GENERAL PRECONDITIONERS
In the following remark, we show that double-FTR can be generalized to include a whole family of algorithms. Remark 1. Theorem 1 applies to a more general version of the double-FTR algorithm. In particular, we can generalize equation 2 to allow a broader class of “preconditioners”:
$$z_{t+1} = z_t - \eta\,\tilde{\omega}_{\text{FTR}}(z_t), \qquad \tilde{\omega}_{\text{FTR}}(z) = \begin{pmatrix} P_x & \\ & P_y \end{pmatrix} J_{\text{GDA}}^\top(z_t) \begin{pmatrix} \nabla_x f \\ -c\, \nabla_y g \end{pmatrix}, \tag{3}$$
where $P_x, P_y$ are functions of $x, y$ respectively, which satisfy $P_x \succ 0 \iff \nabla^2_{xx} f \succ 0$ and $P_y \succ 0 \iff \nabla^2_{yy} g \prec 0$.
Equation 2 corresponds to the special case of $P_x = (\nabla^2_{xx} f)^{-1}$, $P_y = -(\nabla^2_{yy} g)^{-1}$. The proof of Theorem 1 directly applies to the case of general preconditioners in Remark 1.
Remark 1 provides intuition on the convergence properties of double-FTR. Without the preconditioners $P_x$ and $P_y$, double-FTR reduces to Hamiltonian gradient descent (Mescheder et al., 2017; Balduzzi et al., 2018; Loizou et al., 2020; Abernethy et al., 2021), which is not guaranteed to converge only to local Nash equilibria. It is the introduction of the preconditioners that enables strictly stable fixed points to satisfy the second-order condition of DNE.
Remark 1 also sheds light on how to derive a more practical algorithm. Naively implementing Algorithm 2 might cause instability when $\nabla^2_{xx} f$ and $\nabla^2_{yy} g$ are near singular. In practice, we use $(\nabla^2_{xx} f\, \nabla^2_{xx} f + \lambda I)^{-1} \nabla^2_{xx} f$ instead of $(\nabla^2_{xx} f)^{-1}$ in Algorithm 2 (where the small damping parameter $\lambda > 0$ keeps the inverse well-conditioned), and analogously for $(\nabla^2_{yy} g)^{-1}$. Note that this also allows us to drop the assumption on the invertibility of $\nabla^2_{xx} f$ and $\nabla^2_{yy} g$ in Assumption 1.
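A hedged sketch of this damping trick (the helper name and the default damping value are our own choices) is:

```python
import numpy as np

def damped_inverse_apply(H, v, damping=1e-3):
    """Apply (H @ H + damping * I)^{-1} @ H to v, a damped stand-in for H^{-1} v.

    Stays well-behaved when H is near singular; as damping -> 0 it recovers
    H^{-1} v whenever H is invertible (and symmetric, as Hessian blocks are).
    """
    return np.linalg.solve(H @ H + damping * np.eye(H.shape[0]), H @ v)

H = np.array([[1e-6, 0.0], [0.0, 2.0]])  # a nearly singular Hessian block
v = np.array([1.0, 1.0])
print(damped_inverse_apply(H, v))  # ~[1e-3, 0.5]: the near-null direction is damped, not amplified to 1e6
```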
4 CONNECTION WITH OTHER ALGORITHMS
Mazumdar et al. (2019) proposed local symplectic surgery (LSS) – a gradient-based algorithm whose LASE are exactly the local Nash equilibria in two-player zero-sum games. LSS avoids oscillatory behaviour at local Nash equilibria (similar to double-FTR). Compared to LSS, double-FTR has a simpler form and enables a broader family of algorithms with such a local convergence result in general-sum games.
The Follow-the-Ridge (FTR) algorithm (Wang et al., 2019) is closely related to our proposed double-FTR. FTR was proposed for two-player sequential games and is guaranteed to converge to and only to local minimax points. FTR applies a gradient correction term to the follower in a sequential game, so that the agents approximately follow a ridge in the landscape of the objective function. Double-FTR can be viewed as a counterpart of FTR for simultaneous games. The update rule of double-FTR resembles that of FTR, with the gradient modification term applied to both players.
Another related algorithm is Hamiltonian gradient descent (HGD) (Mescheder et al., 2017; Balduzzi et al., 2018; Loizou et al., 2020; Abernethy et al., 2021). HGD performs gradient descent on the Hamiltonian, i.e. the squared norm of the gradient. HGD is guaranteed to converge, as it is essentially a minimization problem. However, in general it is not guaranteed to converge only to local Nash equilibria. Interestingly, our double-FTR can be viewed as a preconditioned HGD.
5 RELATED WORK
Mazumdar et al. (2020b) introduced a general framework for competitive gradient-based learning. They characterized local Nash equilibria in terms of the critical points of the gradient algorithms.
They showed the lack of convergence of the gradient algorithm in games, which motivated the development of the double-FTR algorithm.
Much work has focused on improving the dynamics of finding stable fixed points, which is crucial in applications such as GANs, where oscillation caused by eigenvalues with zero real parts or large imaginary parts in the gradient Jacobian can lead to training instability. Mescheder et al. (2017) propose Consensus Optimization, which encourages agreement between the two players by introducing a regularization term in the objectives of both players. The regularization term results in a more negative real part for the eigenvalues of the gradient Jacobian, and therefore reduces oscillation and allows larger learning rates. Balduzzi et al. (2018); Gemp and Mahadevan (2018) propose Symplectic Gradient Adjustment (SGA), which decomposes the gradient Jacobian into symmetric (potential) and antisymmetric (Hamiltonian) parts and adds a gradient adjustment term for rapid convergence to stable fixed points. Schäfer and Anandkumar (2019) propose Competitive Gradient Descent (CGD), whose update is given by the Nash equilibrium of a regularized bilinear approximation of the original game. Compared to other methods, CGD has the advantage of not needing to adapt the step size when the interaction strength between players changes. Many other methods have been proposed with different strategies for predicting other agents' moves, such as Learning with Opponent-Learning Awareness (LOLA) (Foerster et al., 2016) and optimistic gradient descent-ascent (OGDA) (Popov, 1980; Rakhlin and Sridharan, 2013; Daskalakis et al., 2018; Mertikopoulos et al., 2018). However, none of these existing methods address the problem of spurious (i.e. non-Nash) stable fixed points.
6 EXPERIMENTS
We conduct simple experiments to demonstrate the implications of our theoretical results. In Section 6.1, we show that the double-FTR algorithm empirically converges to and only to differential Nash equilibria, as predicted by Theorem 1. In Section 6.2, we show that double-FTR is able to converge to local Nash equilibria that naive gradient-play avoids in general-sum linear quadratic games. In Section 6.3, we demonstrate the practical implications of another property of double-FTR: the eigenvalues of $J_{\text{FTR}}$ at fixed points are real.
6.1 2-D TOY EXAMPLE
We consider the zero-sum game $\{(f, f), \mathbb{R}^2\}$ with the following 2-D function (the same as in Mazumdar et al. (2019)):
$$f(x, y) = e^{-0.01(x^2 + y^2)} \left( (0.3 x^2 + y)^2 + (0.5 y^2 + x)^2 \right).$$
This function has several strictly stable fixed points for the GDA dynamics, among which some are DNE and some are not. As shown in Figure 3, while GDA may converge to fixed points that are not local Nash equilibria, double-FTR avoids such spurious fixed points. Also, in the neighbourhood
of local Nash equilibria, GDA exhibits oscillatory behaviour due to complex eigenvalues of the Jacobian matrix. In contrast, the double-FTR does not have oscillatory behaviour near local Nash equilibria. For reference, we also show the trajectories of the Local Symplectic Surgery (LSS). In this experiment, LSS has similar convergence properties – it avoids spurious fixed points and does not have oscillatory behaviour near local Nash equilibria.
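For reference, the objective of this subsection can be written down directly; the following sketch (our own helper names, with finite-difference gradients standing in for autodiff) reproduces the GDA dynamics described above:

```python
import numpy as np

def f(x, y):
    """2-D objective of Section 6.1 (Mazumdar et al., 2019)."""
    return np.exp(-0.01 * (x**2 + y**2)) * ((0.3 * x**2 + y)**2 + (0.5 * y**2 + x)**2)

def num_grad(x, y, eps=1e-6):
    """Central-difference approximation of (df/dx, df/dy)."""
    gx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    gy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return gx, gy

# GDA: player 1 descends f in x, player 2 ascends f in y (zero-sum, g = f).
x, y, lr = 0.5, 0.5, 0.01
for _ in range(5000):
    gx, gy = num_grad(x, y)
    x, y = x - lr * gx, y + lr * gy
print(x, y)  # may settle near a GDA-stable fixed point (not necessarily a DNE) or keep cycling
```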
6.2 GENERAL-SUM LINEAR QUADRATIC GAME
The linear quadratic (LQ) game is a classic problem in multi-agent learning. It is an extension of the famous linear quadratic regulator (LQR) problem of optimal control to the multi-agent setting. Just as LQR is a simple yet important benchmark problem for studying properties of reinforcement learning algorithms, the LQ game provides valuable insights into multi-agent RL algorithms (Fazel et al., 2018; Zhang et al., 2019a).
Consider the discrete-time linear dynamical system, where $z \in \mathbb{R}^{d_z}$ is the state, and the two players provide control inputs $u \in \mathbb{R}^{d_u}$ and $v \in \mathbb{R}^{d_v}$ respectively:
$$z_{t+1} = A z_t + B_u u_t + B_v v_t, \qquad z_0 \sim p(z_0).$$
Each player adopts a linear state-feedback policy: $u_t = K_u z_t$, $v_t = K_v z_t$, where the parameters $K_u \in \mathbb{R}^{d_u \times d_z}$, $K_v \in \mathbb{R}^{d_v \times d_z}$ are to be determined by optimization. In a general-sum LQ game, each player aims to find the policy parameters $K$ that minimize their individual quadratic loss function (shown in equation 4; $f_v(K_u, K_v)$ is defined analogously using $Q_v$ and $R_v$).
$$f_u(K_u, K_v) = \mathbb{E}_{z_0 \sim p(z_0)} \sum_{t=0}^{\infty} \left( z_t^\top Q_u z_t + u_t^\top R_u u_t \right) \qquad (Q_u \succeq 0,\ R_u \succ 0) \tag{4}$$
Despite their simplicity, LQ games are challenging to optimize: even though the loss functions are quadratic in the states and actions, they are not convex with respect to the player parameters $K_u$ and $K_v$. Importantly, Mazumdar et al. (2020a) show that in general-sum LQ games, naive gradient-play almost surely avoids some Nash equilibria.
We demonstrate that in general-sum LQ games, double-FTR is able to find the Nash equilibria that are avoided by gradient-play. We use a setting from Mazumdar et al. (2020a), where $d_z = 2$, $d_u = d_v = 1$, $R_u = R_v = 0.01$, and
$$A = \begin{pmatrix} 0.511 & 0.064 \\ 0.533 & 0.993 \end{pmatrix}, \quad B_u = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad B_v = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad Q_u = \begin{pmatrix} 0.01 & 0 \\ 0 & 1 \end{pmatrix}, \quad Q_v = \begin{pmatrix} 1 & 0 \\ 0 & 0.147 \end{pmatrix}.$$
The initial state $z_0$ is set to $[1\ \ 1]^\top$ or $[1\ \ 1.1]^\top$ with equal probability.
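A cost evaluation for this LQ game can be sketched by rolling out the closed-loop dynamics; the horizon truncation, the hand-picked gains, and all names below are our own assumptions, not the paper's implementation:

```python
import numpy as np

A  = np.array([[0.511, 0.064], [0.533, 0.993]])
Bu = np.array([[1.0], [1.0]]);  Bv = np.array([[0.0], [1.0]])
Qu = np.diag([0.01, 1.0]);      Qv = np.diag([1.0, 0.147])
Ru = Rv = 0.01 * np.eye(1)

def lq_costs(Ku, Kv, horizon=200):
    """Truncated-horizon estimate of (f_u, f_v), averaged over the two initial states."""
    fu = fv = 0.0
    for z0 in (np.array([1.0, 1.0]), np.array([1.0, 1.1])):
        z = z0
        for _ in range(horizon):
            u, v = Ku @ z, Kv @ z  # linear state feedback u_t = K_u z_t, v_t = K_v z_t
            fu += z @ Qu @ z + u @ Ru @ u
            fv += z @ Qv @ z + v @ Rv @ v
            z = A @ z + Bu @ u + Bv @ v
    return fu / 2, fv / 2

# Hand-picked gains that stabilize the closed loop (not an equilibrium of the game).
print(lq_costs(Ku=np.array([[-0.5, -0.5]]), Kv=np.array([[-0.5, -0.5]])))
```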
Figures 4 and 5 show an instance where double-FTR is able to converge to a local Nash equilibrium, but gradient-play fails to. For both algorithms, we use the same initial policy parameters $K_u$ and $K_v$. Figure 4 visualizes the loss landscapes of $f_u(K_u, K_v)$ and $f_v(K_u, K_v)$ when optimized by double-FTR. It confirms that the solution double-FTR converges to is indeed a Nash equilibrium (the second-order condition in Definition 2.1). Figure 5a visualizes the local vector-field Jacobian (i.e. $J_{\text{GDA}}$) and shows that the Jacobian contains negative eigenvalues, which makes this point a saddle for gradient-play optimization. Indeed, gradient-play (shown in Figure 5b) avoids this Nash equilibrium. Instead, it eventually finds another Nash equilibrium that is a stable fixed point.
6.3 PARAMETERIZED BILINEAR GAME
We consider another zero-sum game, the stochastic parameterized bilinear game, as in Prajapat et al. (2021). We use this experiment to demonstrate that double-FTR is also beneficial for stochastic games, and that it does not exhibit oscillatory behaviour, since its Jacobian has real eigenvalues at fixed points.
$$\min_{\mu_x, \sigma_x} r(x, y), \qquad \min_{\mu_y, \sigma_y} -r(x, y), \qquad \text{where } x \sim \mathcal{N}(\mu_x, \sigma_x^2),\ y \sim \mathcal{N}(\mu_y, \sigma_y^2),\ r(x, y) = xy.$$
The unique Nash equilibrium with respect to $(x, y)$ is $(0, 0)$. However, the learnable parameters are the mean and the standard deviation of the distributions from which $x$ and $y$ are drawn. We denote the learnable parameters for $x$ and $y$ as $\theta$ and $\phi$ respectively. At each time step, we obtain an unbiased estimate of the gradient using REINFORCE over a mini-batch of size $B$:
$$\tilde{\nabla}_\theta r(\theta, \phi) = \frac{1}{B} \sum_{i=1}^{B} \nabla_\theta \log \mathcal{N}(x_i; \theta)\, r(x_i, y_i), \qquad \theta = \begin{pmatrix} \mu_x \\ \sigma_x \end{pmatrix},$$
with $\tilde{\nabla}_\phi r(\theta, \phi)$ computed analogously.
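A self-contained sketch of this REINFORCE estimator for player $x$ (the score-function formulas for a Gaussian are standard; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_grad(mu, sigma, y_batch):
    """Unbiased REINFORCE estimate of the gradient of E[r] w.r.t. (mu, sigma), r(x, y) = x y."""
    B = len(y_batch)
    x = mu + sigma * rng.standard_normal(B)            # x_i ~ N(mu, sigma^2)
    r = x * y_batch                                    # r(x_i, y_i)
    score_mu = (x - mu) / sigma**2                     # d log N(x; mu, sigma) / d mu
    score_sigma = ((x - mu)**2 - sigma**2) / sigma**3  # d log N(x; mu, sigma) / d sigma
    return np.mean(score_mu * r), np.mean(score_sigma * r)

y = 0.3 + 0.5 * rng.standard_normal(64)                # opponent's mini-batch of samples
print(reinforce_grad(mu=1.0, sigma=0.5, y_batch=y))
```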
As is often the case, GDA exhibits oscillatory behaviour due to the complex eigenvalues of its Jacobian at fixed points. In this stochastic setting, the oscillation prevents GDA from converging (Figure 6). In contrast, the double-FTR algorithm does not have rotational behaviour at fixed points, and converges to the unique Nash equilibrium $(x, y) = (0, 0)$.
6.4 GENERATIVE ADVERSARIAL NETWORKS
The Generative Adversarial Network (GAN) (Goodfellow et al., 2014) is a popular deep learning application of two-player games. The goal is to find the Nash equilibrium at which the generator perfectly matches the target distribution and the discriminator is completely fooled by the generator.
In this experiment, we use the GAN framework to learn a mixture of Gaussians (MoG). We use the original saturating loss function. Both the generator and the discriminator are multi-layer perceptrons with 2 hidden layers and 64 hidden units in each layer. With neural networks, directly computing the Hessian inverse would be computationally inefficient or infeasible. Instead, we use conjugate gradient to approximate it. Details of the experiments can be found in Appendix C.
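For concreteness, a bare-bones conjugate-gradient solver needs only Hessian-vector products (which autodiff frameworks provide without forming the Hessian). The sketch below, with our own naming and a small dense stand-in matrix for the sanity check, assumes the (damped) Hessian block is symmetric positive definite:

```python
import numpy as np

def cg_solve(hvp, b, iters=20, tol=1e-10):
    """Approximately solve H p = b given only the product oracle hvp(v) = H v."""
    p = np.zeros_like(b)
    r = b.copy()  # residual b - H p (p starts at zero)
    d = r.copy()
    rr = r @ r
    for _ in range(iters):
        Hd = hvp(d)
        alpha = rr / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rr_new = r @ r
        if rr_new < tol:
            break
        d = r + (rr_new / rr) * d
        rr = rr_new
    return p

# Sanity check against a direct solve on a small SPD matrix.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(cg_solve(lambda v: H @ v, b), np.linalg.solve(H, b))  # the two should agree
```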
As shown in Figures 7 and 8, we apply GDA and double-FTR to learn MoGs in 1D and 2D. In both cases, GDA gets stuck at a spurious equilibrium and suffers from mode collapse. In contrast, double-FTR recovers all the modes, and the generated distribution closely matches the target.
7 CONCLUSION
We propose double Follow-the-Ridge (double-FTR), a gradient-based algorithm for finding local Nash equilibria in differentiable games. We prove that under mild assumptions, double-FTR locally converges to and only to differential Nash equilibria in general-sum games, and avoids oscillation in the neighbourhood of fixed points. Furthermore, we remark that by varying the preconditioner, double-FTR leads to a broader family of algorithms that share the same convergence guarantee. Finally, we empirically verify the effectiveness of double-FTR in finding and only finding local Nash equilibria across a broad range of problems.
8 REPRODUCIBILITY STATEMENT
For empirical results, we describe the experiment settings in detail in Appendix C. We also provide code for all experiments in the supplementary material. For the theoretical results, proofs are included in the appendix. | 1. What is the focus and contribution of the paper on Follow-the-Ridge algorithm for two-player differentiable games?
2. What are the strengths of the proposed approach, particularly in its simplicity and generality?
3. What are the weaknesses of the paper regarding its theoretical guarantees and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions raised by the reviewer that require further clarification or experimentation? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes the application of Follow-the-Ridge (usage of the Schur complement in gradient dynamics) for both players. The authors establish that double Follow-the-Ridge (double-FTR) is an algorithm that locally converges to and only to local Nash equilibria in general-sum two-player differentiable games.
Strengths And Weaknesses
Strength: The idea of both players pretending to be the follower is interesting. It is not surprising that replacing the opponent's strategy with a follower's one will lead to better and safer performance. On the other hand, it seems to be a simpler and more general way (like a black box) of doing lookahead than previous approaches that model opponents.
Weaknesses: 1) There is no theoretical guarantee on convergence rate or approximation bound. 2) The double Follow-the-Ridge trick may be too simple. In any case, there should be a comparison with simple lookahead methods and with the following publications:
a) https://arxiv.org/pdf/2210.09769.pdf b) https://arxiv.org/abs/2112.13826 3) The method is second-order.
Clarity, Quality, Novelty And Reproducibility
Clarity & Quality: It is a well-written paper. Novelty: Not clear, but interesting.
See above.
Clarification question #1: Could the preconditioning, or the regularization with $\ell_2$, transform the game into a strongly convex-concave one, or not? Clarification question #2: In https://arxiv.org/abs/1910.13010, cited by (https://arxiv.org/pdf/2210.09769.pdf), difficult non-convex non-concave games are presented. It would be useful to present experiments of double-FTR in such so-called hidden games, in order to exemplify the success of the local vs. global convergence guarantee on some challenging functions (since the presented ones seem, in my humble opinion, a bit artificial and synthetic).
1. What is the focus and contribution of the paper on solving the problem of computing local Nash equilibria?
2. What are the strengths of the proposed method, particularly its inspiration from the Follow-the-Ridge heuristic algorithm and use of second-order information?
3. What are the weaknesses of the paper regarding its limitations on convergence guarantees and the presence of standard facts in the main body?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors contribute a novel method, Double-Follow-the-Ridge, for solving the problem of computing local Nash equilibria in two-player general-sum differentiable games. This novel method is inspired by the Follow-the-Ridge heuristic algorithm and uses second-order information along with first-order information. In fact, the authors generalize the method to contain dynamics that use preconditioners in place of the second order information as long as some conditions are met. Finally, the paper is complemented with an array of experiments that demonstrate the empirical performance of the proposed method. The convergence guarantees are restricted to the local convergence regime.
Strengths And Weaknesses
Pros:
* the paper is self-contained and all the definitions are provided
* the novel method is a natural extension of Follow-the-Ridge
Cons:
* the convergence of the algorithm is guaranteed only locally
* although the paper contains a lot of preliminaries, which is helpful, the main body contains many facts that are standard in min-max optimization and could be moved to the appendix
* a standard reference for the limits of GDA is not contained in the paper, namely:
Daskalakis, C. and Panageas, I., 2018. The limit points of (optimistic) gradient descent in min-max optimization. Advances in neural information processing systems, 31.
Clarity, Quality, Novelty And Reproducibility
The paper is well written and well-motivated |
ICLR | Title
Finding and only finding local Nash equilibria by both pretending to be a follower
Abstract
Finding Nash equilibria in two-player differentiable games is a classical problem in game theory with important relevance in machine learning. We propose double Follow-the-Ridge (double-FTR), an algorithm that locally converges to and only to local Nash equilibria in general-sum two-player differentiable games. To our knowledge, double-FTR is the first algorithm with such guarantees for general-sum games. Furthermore, we show that by varying its preconditioner, double-FTR leads to a broader family of algorithms with the same convergence guarantee. In addition, double-FTR avoids oscillation near equilibria due to the real-eigenvalues of its Jacobian at fixed points. Empirically, we validate the double-FTR algorithm on a range of simple zero-sum and general sum games, as well as simple Generative Adversarial Network (GAN) tasks.
N/A
Finding Nash equilibria in two-player differentiable games is a classical problem in game theory with important relevance in machine learning. We propose double Follow-the-Ridge (double-FTR), an algorithm that locally converges to and only to local Nash equilibria in general-sum two-player differentiable games. To our knowledge, double-FTR is the first algorithm with such guarantees for general-sum games. Furthermore, we show that by varying its preconditioner, double-FTR leads to a broader family of algorithms with the same convergence guarantee. In addition, double-FTR avoids oscillation near equilibria due to the real-eigenvalues of its Jacobian at fixed points. Empirically, we validate the double-FTR algorithm on a range of simple zero-sum and general sum games, as well as simple Generative Adversarial Network (GAN) tasks.
1 INTRODUCTION
Much of the recent success in deep learning can be attributed to the effectiveness of gradient-based optimization. It is well-known that for a minimization problem, with appropriate choice of learning rates, gradient descent has convergence guarantee to local minima (Lee et al., 2016; 2019). Based on this foundational result, an array of accelerated and higher-order methods have since been proposed and widely applied in training neural networks (Duchi et al., 2011; Kingma and Ba, 2014; Reddi et al., 2018; Zhang et al., 2019b).
However, once we leave the realm of minimization problems and consider the multi-agent setting, the optimization landscape becomes much more complicated. Multi-agent optimization problems arise in diverse fields such as robotics, economics and machine learning (Foerster et al., 2016; Von Neumann and Morgenstern, 2007; Goodfellow et al., 2014; Ben-Tal and Nemirovski, 2002; Gemp et al., 2020; Anil et al., 2021).
A classical abstraction that is especially relevant for machine learning is two-player differentiable games, where the objective is to find global or local Nash equilibria. The equivalent of gradient descent in such a game would be each agent applying gradient descent to minimize their own objective function. However, in stark contrast with gradient descent in solving minimization problems, this gradient-descent-style algorithm may converge to spurious critical points that are not local Nash equilibria, and in the general-sum game case, local Nash equilibria might not even be stable critical points for this algorithm (Mazumdar et al., 2020b)!
These negative results have driven a surge of recent interest in developing other gradient-based algorithms for finding Nash equilibria in differentiable games. Among them is Mazumdar et al. (2019), who proposed an update algorithm whose attracting critical points are only local Nash equilibria in the special case of zero-sum games. However, to the best of our knowledge, such guarantees have not been extended to general-sum games.
We propose double Follow-the-Ridge (double-FTR), a gradient-based algorithm for general-sum differentiable games that locally converges to and only to differential Nash equilibria. Double-FTR is closely related to the Follow-the-Ridge (FTR) algorithm for Stackelberg games (Wang et al., 2019), which converges to and only to local Stackelberg equilibria (Fiez et al., 2019). Double-FTR can be viewed as its counterpart for simultaneous games, where each player adopts the “follower” strategy in FTR.
The rest of this paper is organized as follows. In Section 2, we give background on two-player differentiable games and equilibrium concepts. We also explain the issues with using gradientdescent-style algorithm on such games. In Section 3, we present the double-FTR algorithm and prove its local convergence to and only to differential Nash equilibria. We also identify a more general class of algorithms that share these properties. We discuss recent works directly relevant to double-FTR in Section 4 and other related work in Section 5. In Section 6, we show empirical evidence of double-FTR’s convergence to and only to local Nash equilibria.
2 BACKGROUND
2.1 TWO-PLAYER DIFFERENTIABLE GAMES AND EQUILIBRIUM CONCEPTS
In a general-sum two-player differentiable game, player 1 aims to minimize f : Rn+m ! R with respect to x 2 Rn, whereas player 2 aims to maximize g : Rn+m ! R with respect to y 2 Rm. Following the notation in Mazumdar et al. (2019), we denote such the game as {(f, g),Rn+m}. We also make the following assumption on the twice-differentiability of f and g. Assumption 1. 8 x 2 Rn,y 2 Rm, f and g are twice-differentiable, and the second derivatives are continuous. Also, r2xxf and r2yyg are invertible.
For two rational, non-cooperative players, their optimal outcome is to achieve a local Nash equilibrium (Ratliff et al., 2013). A point (x⇤,y⇤) is a local Nash equilibrium1 of {(f, g),Rn+m} if there exists open sets Sx ⇢ Rn, Sy ⇢ Rm such that x⇤ 2 Sx, y⇤ 2 Sy , and f(x⇤,y⇤) f(x,y⇤), g(x⇤,y⇤) g(x⇤,y), 8x 2 Sx, 8y 2 Sy. A closely related notion of equilibrium is the differential Nash equilibrium (DNE) (Ratliff et al., 2013), which satisfies a second-order sufficient condition for local Nash equilibrium. Definition 2.1 (Differential Nash equilibrium). (x⇤,y⇤) is a differential Nash equilibrium of {(f, g),Rn+m} if the following two conditions hold:
• rxf(x⇤,y⇤) = 0 and ryg(x⇤,y⇤) = 0.
• r2xxf(x⇤,y⇤) 0 and r2yyg(x⇤,y⇤) 0.
The conditions of DNE are slightly stronger than that of local Nash equilibria in that the second-order conditions are definite instead of semi-definite. In this paper, we focus on DNE, as they make up almost all local Nash equilibria in the mathematical sense, and are well-suited for the analysis of second-order algorithms.
2.2 ISSUES WITH GRADIENT-BASED ALGORITHMS
A natural strategy for agents to search for local Nash equilibria in a differentiable game is to use gradient-based algorithms. The simplest gradient-based algorithm is the gradient descent-ascent (GDA) (Ryu and Boyd, 2016; Zhang et al., 2021b) (Algorithm 1) or its variants (Zhang et al., 2021a; Korpelevich, 1976; Mokhtari et al., 2020).
Algorithm 1 Gradient descent-ascent (GDA) Require: Number of iterations T , learning rate
1: for t = 1, . . . , T do 2: xt+1 = xt rxf(xt,yt) 3: yt+1 = yt + ryg(xt,yt) 4: end for
Let z = x y and > 0 be the learning rate, a gradient-based update algorithm can be written as:
zt+1 = zt !(zt). (1) 1Note that local Nash equilibrium is not guaranteed to exist in nonconvex-nonconcave games ((Jin et al.,
2020), Proposition 6), although the (non-)existence of local NE is out of the scope of this paper.
The Jacobian of !(z) is defined as J(z) := @!(z)@z . In the case of GDA, we have:
!GDA(z) = rxf(x,y) ryg(x,y) , JGDA = r2xxf r2xyf r2yxg r2yyg .
Using the Jacobian matrix, we characterize the fixed points of equation 1. Definition 2.2 ((Strictly) stable fixed point). z⇤ is a stable fixed point of the discrete-time dynamical system in equation 1 if
!(z⇤) = 0 and ⇢(I J(z⇤)) 1, where ⇢(·) denotes the spectral radius of a matrix. If we additionally have ⇢(I J(z⇤)) < 1, then z⇤ is a strictly stable fixed point.
Strictly stable fixed points are important for analysis, as they are locally asymptotically convergent (Galor, 2007), i.e. there exists an open set Sz such that z⇤ 2 Sz and limt!1 zt = z⇤ 8z0 2 Sz . A closely related concept is the locally asymptotically stable equilibrium (LASE) for the continuoustime system ż = !(z). (Ratliff et al., 2013). Definition 2.3 (Locally asymptotically stable equilibrium (LASE)). z⇤ is a locally asymptotically stable equilibrium of the continuous-time dynamics ż = !(z) if
!(z⇤) = 0 and Re( ) > 0 for 8 2 spec(J(z⇤)), where Re(·) denotes the real part of a complex number, and spec(·) returns the spectrum (i.e. the set of eigenvalues) of a matrix.
Note that when ! 0, strictly stable fixed points of equation 1 are equivalent to LASE of ż = !(z). In this paper, we prove convergence results in discrete-time (using Definition 2.2), but we often provide intuition using continuous-time concepts such as LASE.
Unfortunately, GDA is not guaranteed to converge to DNE, nor are DNE necessarily (strictly) stable fixed points of the GDA dynamics. Even in the special case of zero-sum games (g = f ), GDA dynamics can still have stable fixed points that are not DNE (Daskalakis and Panageas, 2018; Mazumdar et al., 2020b). The relationship is shown in the Venn diagrams in Figure 1 (to eliminate the effect of , we show illustration in the continuous-time limit ż = !GDA(z)). In Figure 2, we demonstrate the failure modes of GDA in zero-sum games. In 2a, GDA converges to a spurious strictly stable fixed point which is not DNE. In 2b, GDA fails to converge to the unique DNE (Hsieh et al., 2020). Instead, it goes into a limit cycle, due to the strong rotation introduced by large complex parts in its Jacobian eigenvalues. We stress that these pathologies are not limited to GDA, but common for many other first-order algorithms (Wang et al., 2019).
3 DOUBLE FOLLOW-THE-RIDGE
We propose double Follow-the-Ridge (double-FTR), an update rule for general-sum differential games that locally converges to and only to differential Nash equilibria. The double-FTR update is shown in Algorithm 2 (the arguments xt, yt of f and g are dropped to avoid notational clutter).
Algorithm 2 Double Follow-the-Ridge Require: Learning rate ⌘x and ⌘y; number of iterations T .
1: for t = 1, . . . , T do 2: xt+1 xt ⌘xrxf ⌘y(r2xxf) 1r2xygryg 3: yt+1 yt + ⌘yryg + ⌘x(r2yyg) 1r2yxfrxf 4: end for
Let z = x y , = ⌘x and c = ⌘y ⌘x , we can express Algorithm 2 in vectorized form (equation 2). To
simplify the notation, we drop the subscript t for f and g.
zt+1 = zt !FTR(zt), !FTR(zt) =
I (r2xxf) 1r2xyg (r2yyg) 1r2yxf I rxf cryg .
(2)
3.1 LOCAL CONVERGENCE OF DOUBLE-FTR
In this section, we give our main theoretical result. First, we introduce an additional assumption. Assumption 2. At fixed points of equation 2, JGDA(z) has full rank.
Assumption 2 ensures that in double-FTR, the additional terms in the update do not exactly cancel out the GDA terms. In practice, `2 regularization might need to be added to the objective functions. Note a similar assumption is introduced in Mazumdar et al. (2019) Theorem 4.
Our main theoretical result is stated below. Theorem 1. Under Assumptions 1 and 2 and with an appropriate choice of learning rate , z⇤ is a strictly stable fixed point of the double-FTR update (equation 2) if and only if it is a differential Nash equilibrium of the game {(f, g),Rn+m}. Furthermore, at fixed points of equation 2, all eigenvalues of the Jacobian JFTR := @!FTR @z are real.
Intuitively, the first part of the theorem classifies the strictly stable fixed points of double-FTR, and the second part ensures that there is no rotation caused by complex eigenvalues in the neighbourhood of the DNEs. We defer the proof of Theorem 1 to Appendix A. Corollary 1 (Local convergence). Let z⇤ be a DNE of the game {(f, g),Rn+m}. Under Assumptions 1 and 2 and with an appropriate choice of learning rate , there exists an open set Sz ⇢ Rn+m where z⇤ 2 Sz , such that when following equation 2, 8 z0 2 Sz , limt!1 zt ! z⇤.
Proof. The proof follows naturally by combining Theorem 1 with the local convergence of strictly stable fixed points (Galor (2007), Proposition 1.9).
To the best of our knowledge, double FTR is the first algorithm with such local convergence result for general-sum games.
3.2 GENERAL PRECONDITIONERS
In the following remark, we show that double-FTR can be generalized to include a whole family of algorithms. Remark 1. Theorem 1 applies to a more general version of the double FTR algorithm. In particular, we can generalize equation 2 to allow a broader class of “preconditioners”:
zt+1 = zt !̃FTR(zt), !̃FTR(z) = Px
Py
J>GDA(zt) rxf cryg , (3)
where Px, Py are functions of x,y respectively, which satisfy Px 0 () r2xxf 0 and Py 0() r2yyg 0.
Equation 2 corresponds to the special case of Px = (r2xxf) 1, Py = (r2yyg) 1. The proof for Theorem 1 directly applies to the case of general preconditioners in Remark 1.
Remark 1 provides intuition on the convergence properties of double-FTR. Without the preconditioners $P_x$ and $P_y$, double-FTR reduces to Hamiltonian gradient descent (Mescheder et al., 2017; Balduzzi et al., 2018; Loizou et al., 2020; Abernethy et al., 2021), which is not guaranteed to converge only to local Nash equilibria. It is the introduction of the preconditioners that enables strictly stable fixed points to satisfy the second-order condition of DNE.
Remark 1 also sheds light on how to derive a more practical algorithm. Naively implementing Algorithm 2 might cause instability when $\nabla^2_{xx} f$ and $\nabla^2_{yy} g$ are near singular. In practice, we use $(\nabla^2_{xx} f\, \nabla^2_{xx} f + \lambda I)^{-1} \nabla^2_{xx} f$ instead of $(\nabla^2_{xx} f)^{-1}$ in Algorithm 2 (where a small $\lambda > 0$ is the damping parameter), and likewise for $\nabla^2_{yy} g$. Note that this also allows us to drop the assumption on the invertibility of $\nabla^2_{xx} f$ and $\nabla^2_{yy} g$ in Assumption 1.
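A sketch of this damped surrogate, assuming a symmetric Hessian block H; the damping value is illustrative:

```python
import numpy as np

def damped_inverse_apply(H, v, lam=1e-3):
    """Apply the damped surrogate (H @ H + lam * I)^{-1} @ H to a vector v.

    For symmetric invertible H this tends to H^{-1} @ v as lam -> 0, but it
    stays well-defined when H is singular or near singular.
    """
    n = H.shape[0]
    return np.linalg.solve(H @ H + lam * np.eye(n), H @ v)
```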
4 CONNECTION WITH OTHER ALGORITHMS
Mazumdar et al. (2019) proposed local symplectic surgery (LSS) – a gradient-based algorithm whose LASE are exactly the local Nash equilibria in two-player zero-sum games. LSS avoids oscillatory behaviour at local Nash equilibria (similar to double-FTR). Compared to LSS, double-FTR has a simpler form, applies to general-sum games, and admits a broader family of algorithms with the same local convergence result.
The Follow-the-Ridge (FTR) algorithm (Wang et al., 2019) is closely related to our proposed double-FTR. FTR was proposed for two-player sequential games and is guaranteed to converge to and only to local minimax. FTR applies a gradient correction term to the follower in a sequential game, so that the agents approximately follow a ridge in the landscape of the objective function. Double-FTR can be viewed as a counterpart of FTR for simultaneous games. The update rule of double-FTR resembles that of FTR, with the gradient modification term applied to both players.
Another related algorithm is the Hamiltonian gradient descent (HGD) (Mescheder et al., 2017; Balduzzi et al., 2018; Loizou et al., 2020; Abernethy et al., 2021). HGD performs gradient-descent on the Hamiltonian, or the squared norm of the gradient. HGD is guaranteed to converge, as it is essentially a minimization problem. However, in general it is not guaranteed to converge only to local Nash equilibria. Interestingly, our double-FTR can be viewed as a preconditioned HGD.
5 RELATED WORK
Mazumdar et al. (2020b) introduced a general framework for competitive gradient-based learning. They characterized local Nash equilibria in terms of the critical points of the gradient algorithms.
They showed the lack of convergence of the gradient algorithm in games, which motivated the development of the double-FTR algorithm.
Much work has focused on improving the dynamics in finding stable fixed points, which is crucial in applications such as GANs, where oscillation caused by eigenvalues with zero real parts or large imaginary parts in the gradient Jacobian can lead to training instability. Mescheder et al. (2017) propose Consensus Optimization, which encourages agreement between the two players by introducing a regularization term in the objectives of both players. The regularization term results in a more negative real part for the eigenvalues of the gradient Jacobian, which reduces oscillation and allows larger learning rates. Balduzzi et al. (2018); Gemp and Mahadevan (2018) propose Symplectic Gradient Adjustment (SGA), which decomposes the gradient Jacobian into symmetric (potential) and antisymmetric (Hamiltonian) parts and adds a gradient adjustment term for rapid convergence to stable fixed points. Schäfer and Anandkumar (2019) propose Competitive Gradient Descent (CGD), whose update is given by the Nash equilibrium of a regularized bilinear approximation of the original game. Compared to other methods, CGD has the advantage of not needing to adapt the step size when the interaction strength between players changes. Many other methods have been proposed with different strategies for predicting other agents’ moves, such as Learning with Opponent Learning Awareness (LOLA) (Foerster et al., 2016) and optimistic gradient descent-ascent (OGDA) (Popov, 1980; Rakhlin and Sridharan, 2013; Daskalakis et al., 2018; Mertikopoulos et al., 2018). However, none of these existing methods address the problem of spurious (i.e., non-Nash) stable fixed points.
6 EXPERIMENTS
We conduct simple experiments to demonstrate the implications of our theoretical results. In Section 6.1, we show that the double-FTR algorithm empirically converges to and only to differential Nash equilibria, as predicted by Theorem 1. In Section 6.2, we show that double-FTR is able to converge to local Nash equilibria that naive gradient-play avoids in general-sum linear quadratic games. In Section 6.3, we demonstrate the practical implications of another property of double-FTR — eigenvalues of JFTR at fixed points are real.
6.1 2-D TOY EXAMPLE
We consider the zero-sum game $\{(f, f), \mathbb{R}^2\}$ with the following 2-D function (same as in Mazumdar et al. (2019)):
$$f(x, y) = e^{-0.01(x^2 + y^2)}\left[(0.3 x^2 + y)^2 + (0.5 y^2 + x)^2\right].$$
This function has several strictly stable fixed points for the GDA dynamics, among which some are DNE and some are not. As shown in Figure 3, while GDA may converge to fixed points that are not local Nash equilibria, double-FTR avoids such spurious fixed points. Also, in the neighbourhood of local Nash equilibria, GDA exhibits oscillatory behaviour due to complex eigenvalues of the Jacobian matrix. In contrast, double-FTR does not exhibit oscillatory behaviour near local Nash equilibria. For reference, we also show the trajectories of Local Symplectic Surgery (LSS). In this experiment, LSS has similar convergence properties – it avoids spurious fixed points and does not oscillate near local Nash equilibria.
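For reference, the toy objective and a plain-GDA run with a numerical check of the second-order DNE conditions can be sketched as follows (the finite-difference derivatives, initial point, and step size are our illustrative choices):

```python
import numpy as np

def f(x, y):
    return np.exp(-0.01 * (x**2 + y**2)) * ((0.3 * x**2 + y)**2 + (0.5 * y**2 + x)**2)

def grads(x, y, h=1e-5):
    # Central finite differences for the partial derivatives of f.
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

def diag_hessian(x, y, h=1e-4):
    # Second-order finite differences for f_xx and f_yy.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx, fyy

# Plain GDA on the zero-sum game: x descends f, y ascends f.
x, y, eta = 2.0, 2.0, 0.01   # illustrative initial point and step size
for _ in range(20000):
    fx, fy = grads(x, y)
    x, y = x - eta * fx, y + eta * fy

# A DNE of this zero-sum game requires f_xx > 0 and f_yy < 0 at the limit
# point; GDA may instead settle where this fails (a spurious fixed point),
# or fail to settle at all, matching the failure modes discussed above.
fxx, fyy = diag_hessian(x, y)
print(f"GDA endpoint: ({x:.3f}, {y:.3f}), f_xx = {fxx:.3f}, f_yy = {fyy:.3f}")
```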
6.2 GENERAL-SUM LINEAR QUADRATIC GAME
The linear quadratic (LQ) game is a classic problem in multi-agent learning. It is an extension of the famous linear quadratic regulator (LQR) problem of optimal control to the multi-agent setting. Just as LQR is a simple yet important benchmark problem for studying properties of reinforcement learning algorithms, the LQ game provides valuable insights into multi-agent RL algorithms (Fazel et al., 2018; Zhang et al., 2019a).
Consider the discrete-time linear dynamical system, where $z \in \mathbb{R}^{d_z}$ is the state and the two players provide control inputs $u \in \mathbb{R}^{d_u}$ and $v \in \mathbb{R}^{d_v}$ respectively:
$$z_{t+1} = A z_t + B_u u_t + B_v v_t, \qquad z_0 \sim p(z_0).$$
Each player adopts a linear state-feedback policy: $u_t = K_u z_t$, $v_t = K_v z_t$, where the parameters $K_u \in \mathbb{R}^{d_u \times d_z}$, $K_v \in \mathbb{R}^{d_v \times d_z}$ are determined by optimization. In a general-sum LQ game, each player aims to find the policy parameters $K$ that minimize their individual quadratic loss function (shown in equation 4 for player $u$; $f_v(K_u, K_v)$ is defined analogously using $Q_v$ and $R_v$):
$$f_u(K_u, K_v) = \mathbb{E}_{z_0 \sim p(z_0)} \sum_{t=0}^{\infty} \left[ z_t^\top Q_u z_t + u_t^\top R_u u_t \right], \qquad (Q_u \succeq 0,\ R_u \succ 0). \tag{4}$$
Despite their simplicity, LQ games are challenging to optimize, because even though the loss functions are quadratic in the states and actions, they are not convex with respect to the player parameters $K_u$ and $K_v$. Importantly, Mazumdar et al. (2020a) show that in general-sum LQ games, naive gradient-play almost surely avoids some Nash equilibria.
We demonstrate that in general-sum LQ games, double-FTR is able to find the Nash equilibria that are avoided by gradient-play. We use a setting mentioned in Mazumdar et al. (2020a), where $d_z = 2$, $d_u = d_v = 1$, $R_u = R_v = 0.01$, and
$$A = \begin{bmatrix} 0.511 & 0.064 \\ 0.533 & 0.993 \end{bmatrix}, \quad B_u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad B_v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad Q_u = \begin{bmatrix} 0.01 & 0 \\ 0 & 1 \end{bmatrix}, \quad Q_v = \begin{bmatrix} 1 & 0 \\ 0 & 0.147 \end{bmatrix}.$$
The initial state $z_0$ is set to $[1\ 1]^\top$ or $[1\ 1.1]^\top$ with equal probability.
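A minimal sketch of how a player's cost can be estimated in this setting by truncated-horizon rollout; the policy matrices and horizon below are illustrative placeholders, not values from the paper:

```python
import numpy as np

A  = np.array([[0.511, 0.064], [0.533, 0.993]])
Bu = np.array([[1.0], [1.0]])
Bv = np.array([[0.0], [1.0]])
Qu = np.diag([0.01, 1.0])
Ru = 0.01

def cost_u(Ku, Kv, T=200):
    """Truncated-horizon estimate of f_u(Ku, Kv), averaging the two inits."""
    total = 0.0
    for z0 in (np.array([1.0, 1.0]), np.array([1.0, 1.1])):
        z, c = z0.copy(), 0.0
        for _ in range(T):
            u = Ku @ z                       # player u's control, shape (1,)
            v = Kv @ z                       # player v's control, shape (1,)
            c += z @ Qu @ z + Ru * (u @ u)   # stage cost z'Qu z + u'Ru u
            z = A @ z + Bu @ u + Bv @ v      # linear dynamics
        total += 0.5 * c                     # the two inits are equally likely
    return total

# Placeholder policies (stabilizing for this A, chosen only for illustration):
Ku = np.array([[-0.1, -0.3]])
Kv = np.array([[-0.2, -0.4]])
print(cost_u(Ku, Kv))
```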
Figures 4 and 5 show an instance where double-FTR converges to a local Nash equilibrium but gradient-play fails to. For both algorithms, we use the same initial policy parameters $K_u$ and $K_v$. Figure 4 visualizes the loss landscapes of $f_u(K_u, K_v)$ and $f_v(K_u, K_v)$ when optimized by double-FTR. It confirms that the solution double-FTR converges to is indeed a Nash equilibrium (it satisfies the second-order condition in Definition 2.1). Figure 5a visualizes the local vector-field Jacobian (i.e., $J_{\mathrm{GDA}}$) and shows that the Jacobian contains negative eigenvalues, which makes the point a saddle for gradient-play optimization. Indeed, gradient-play (shown in Figure 5b) avoids this Nash equilibrium. Instead, it eventually finds another Nash equilibrium that is a stable fixed point.
6.3 PARAMETERIZED BILINEAR GAME
We consider another zero-sum game, the stochastic parameterized bilinear game, as in Prajapat et al. (2021). We use this experiment to demonstrate that double-FTR is also beneficial for stochastic games and, because its Jacobian has real eigenvalues at fixed points, does not exhibit oscillatory behaviour.
$$\min_{\mu_x, \sigma_x} \mathbb{E}\left[r(x, y)\right], \qquad \min_{\mu_y, \sigma_y} \mathbb{E}\left[-r(x, y)\right], \qquad \text{where } x \sim \mathcal{N}(\mu_x, \sigma_x^2),\ y \sim \mathcal{N}(\mu_y, \sigma_y^2),\ r(x, y) = xy.$$
The unique Nash equilibrium with respect to $(x, y)$ is $(0, 0)$. However, the learnable parameters are the means and standard deviations of the distributions from which $x$ and $y$ are drawn. We denote the learnable parameters for $x$ and $y$ by $\theta$ and $\phi$ respectively. At each time step, we obtain an unbiased estimate of the gradient using REINFORCE over a mini-batch of size $B$:
$$\tilde{\nabla}_\theta r(\theta, \phi) = \frac{1}{B} \sum_{i=1}^{B} \nabla_\theta \log \mathcal{N}(x_i; \theta)\, r(x_i, y_i), \qquad \theta = \begin{bmatrix} \mu_x \\ \sigma_x \end{bmatrix},$$
with $\tilde{\nabla}_\phi r(\theta, \phi)$ computed analogously.
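A minimal NumPy sketch of this estimator for $\theta = (\mu_x, \sigma_x)$, using the closed-form Gaussian score function (the batch size and parameter values are illustrative):

```python
import numpy as np

def reinforce_grad_theta(mu_x, sigma_x, mu_y, sigma_y, B=128, rng=None):
    """REINFORCE estimate of the gradient of E[r(x, y)] w.r.t. theta = (mu_x, sigma_x)."""
    rng = rng or np.random.default_rng()
    x = mu_x + sigma_x * rng.standard_normal(B)
    y = mu_y + sigma_y * rng.standard_normal(B)
    r = x * y                                          # r(x, y) = xy
    # Score function of N(x; mu_x, sigma_x^2):
    d_mu    = (x - mu_x) / sigma_x**2
    d_sigma = ((x - mu_x)**2 - sigma_x**2) / sigma_x**3
    return np.array([np.mean(d_mu * r), np.mean(d_sigma * r)])

g_theta = reinforce_grad_theta(0.5, 1.0, -0.3, 1.0)   # illustrative parameters
```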
As is often the case, GDA exhibits oscillatory behaviour due to the complex eigenvalues of its Jacobian at fixed points. In this stochastic setting, the oscillation prevents GDA from converging (Figure 6). In contrast, the double-FTR algorithm has no rotational behaviour at fixed points and converges to the unique Nash equilibrium $(x, y) = (0, 0)$.
6.4 GENERATIVE ADVERSARIAL NETWORKS
The Generative Adversarial Network (GAN) (Goodfellow et al., 2014) is a popular deep learning application for two-player games. The goal is to find the Nash equilibrium where the generator perfectly matches the target distribution, and the discriminator is completely fooled by the generator.
In this experiment, we use the GAN framework to learn mixture of Gaussians (MoG). We use the original saturating loss function. Both the generator and the discriminator are multi-layer perceptrons with 2 hidden layers and 64 hidden units in each layer. With neural networks, directly implementing the Hessian would be computationally inefficient or infeasible. Instead, we use conjugate gradient to approximate the Hessian inverse. Details of the experiments can be found in Appendix C.
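A sketch of this approximation, assuming a flat parameter vector: Hessian-vector products are obtained by double backward, and a few damped conjugate-gradient iterations approximate the inverse-Hessian-vector product (the damping and iteration budget are our illustrative choices):

```python
import torch

def hvp(loss_fn, w, v):
    """Hessian-vector product H(w) @ v via double backward (no explicit Hessian)."""
    g = torch.autograd.grad(loss_fn(w), w, create_graph=True)[0]
    return torch.autograd.grad(g @ v, w)[0]

def cg_solve(matvec, b, iters=10, damping=1e-3):
    """Approximately solve (H + damping * I) x = b with conjugate gradient,
    touching H only through matvec (here: HVPs). Assumes H + damping*I is PD."""
    x = torch.zeros_like(b)
    r = b.clone()            # residual b - (H + damping I) x, with x = 0
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p) + damping * p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage sketch: x ~= H^{-1} b for a quadratic loss on a flat parameter vector w.
w = torch.randn(5, requires_grad=True)
loss_fn = lambda w: 0.5 * (w * torch.arange(1., 6.) * w).sum()  # H = diag(1..5)
b = torch.randn(5)
x = cg_solve(lambda v: hvp(loss_fn, w, v), b)
```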
As shown in Figures 7 and 8, we apply GDA and double-FTR to learn MoG in 1D and 2D. In both cases, GDA gets stuck at a spurious equilibrium and suffers from mode collapse. In contrast, double-FTR recovers all the modes, and the generated distribution closely matches the target.
7 CONCLUSION
We propose double Follow-the-Ridge (double-FTR), a gradient-based algorithm for finding local Nash equilibria in differentiable games. We prove that under mild assumptions, double-FTR locally converges to and only to differential Nash equilibria in general-sum games, and avoids oscillation in the neighbourhood of fixed points. Furthermore, we remark that by varying the preconditioner, double-FTR leads to a broader family of algorithms that share the same convergence guarantee. Finally, we empirically verify the effectiveness of double-FTR in finding and only finding local Nash equilibria across a broad range of problems.
8 REPRODUCIBILITY STATEMENT
For empirical results, we describe the experiment settings in detail in Appendix C. We also provide code for all experiments in the supplementary material. For the theoretical results, proofs are included in the appendix. | 1. What is the focus and contribution of the paper on differentiable games?
2. What are the strengths of the proposed algorithm, particularly its relation to previous works?
3. What are the weaknesses of the paper, especially concerning its technical novelty and challenges?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies general-sum two-player differentiable games and proposes an algorithm (double-FTR) that locally converges to (and only to) differentiable Nash equilibria (DNE). Moreover, extensive empirical validation is provided.
Strengths And Weaknesses
The main result of the paper is interesting and clearly presented. The proposed algorithm can be intuitively seen as an extension of the standard GDA algorithm where the update of each player has an additional term that makes use of second-order information. In particular, it seems like a natural extension of the FTR algorithm of [Wang, Zhang, Ba, ICLR 2019]. Having read the appendix, my main concern has to do with the technical novelty and challenges behind the main result. My concerns focus specifically on two aspects: (i) the algorithm is heavily similar to the FTR algorithm of [WZB '19] and (ii) the technical challenges (provided Assumption 2) are mainly algebraic manipulations of the double-FTR dynamics.
I would like to pose the following questions:
Q1. Can the authors highlight the technical novelties and challenges behind the result? Adding a technical overview section in the main body would be helpful.
Q2. Assumption 2 ensures that the updates in lines 2 and 3 of Algorithm 1 do not vanish. Is it possible to drop this assumption or relax it by extending the proposed algorithm?
Q3. Is it known whether second-order information is truly necessary in order to obtain a result like Theorem 1? As discussed in Section 2, standard GDA fails to converge even in zero-sum games. It would be beneficial to add a more extensive discussion regarding other first-order methods.
In general, I believe that the paper is nice and well-written. However, I think that the technical contribution of the paper is limited. Nevertheless, I am willing to increase my score after authors' rebuttal depending on the response.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and the result is well-presented. The main issue is the paper's limited technical novelty. |
ICLR | Title
Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding Explains the Performance
Abstract
Recently, self-supervised learning (SSL) has achieved tremendous empirical advancements in learning image representation. However, our understanding of the principles behind learning such representations is still limited. This work shows that the success of the SOTA Siamese-network-based SSL approaches is primarily based on learning a distributed representation of image patches. In particular, we show that when we learn a representation only for fixed-scale image patches and aggregate different patch representations for an image (instance), it can achieve on-par or even better results than the baseline methods that use the whole image. Further, we show that the patch representation aggregation can also improve various SOTA baseline methods by a large margin. We also establish a formal connection between the Siamese-network-based SSL objective and the image patch co-occurrence statistics modeling, which supplements the prevailing invariance perspective. By visualizing the nearest neighbors of different image patches in the embedding space and projection space, we show that while the projection has more invariance, the embedding space tends to preserve more equivariance and locality. The evidence shows that simplifying the SOTA methods is a promising direction for building better understanding.
1 INTRODUCTION
Self-supervised representation learning has experienced tremendous advancements in many application domains over the past few years. In terms of the quality of the learned features, unsupervised learning has caught up with supervised learning or even surpassed the latter in many cases. This trend promises unparalleled scalability for data-driven machine learning in the future. One of the most successful paradigms in image self-supervised representation learning is based on instance-augmentation-invariant contrastive learning (Wu et al., 2018; Chen et al., 2020a;b) using a Siamese network architecture (Bromley et al., 1993). This style of learning method achieves the following general goals: 1) it brings the representations of two different views (augmentations) of the same instance (image) closer; 2) it keeps the representation informative of the input; in other words, it avoids collapse. Several recent non-contrastive methods achieve competitive performance by explicitly achieving these two goals (Bardes et al., 2021; Li et al., 2022). While we celebrate the empirical success of SSL in a wide range of benchmarks, our understanding of the principles of this learning process is still very limited. In this work, we seek the principles behind the instance-based SSL methods and argue that their success largely comes from learning a representation of image patches based on their co-occurrence statistics in the images. To demonstrate this, we simplify the current SSL method to using a single crop scale to learn a representation of image patches of fixed size, and establish a formal connection between our formulation and co-occurrence statistics modeling. The patch representations can be linearly aggregated (bag-of-words) to form the representation of the image. The learned representation achieves similar or better performance than the baseline representation, which is based on the entire image. In particular, even a kNN classifier works surprisingly well with the aggregated patch features. These findings also resonate with recent works in supervised learning based on patch features (Brendel & Bethge, 2018; Dosovitskiy et al., 2020; Trockman & Kolter, 2022). We also show that for baseline SSL methods pretrained with multi-scale crops, the whole-image representation is essentially an aggregation of different patch representations from the same instance.
Further, given various SOTA baseline SSL models, we show that the same aggregation process can further improve the representation quality. Then we provide a cosine-similarity-based visualization of image patch representations on both the ImageNet and CIFAR10 datasets. In particular, we find that while the projection space has achieved significant invariance, the embedding space, frequently used for representation evaluation, tends to preserve more locality and equivariance.
Our discoveries may provide useful explanations and understanding of the success of the instance-augmentation-invariant SSL methods. The co-occurrence statistics modeling formulation and the equivariance-preserving property of the embedding space both supplement the current prevailing invariance perspective. Finally, these results motivate an interesting discussion of several potential future directions.
2 RELATED WORKS
Instance-Based Self-Supervised Learning: Invariance without Collapse. Instance contrastive learning (Wu et al., 2018) views each image as a different class and uses data augmentation (Dosovitskiy et al., 2016) to generate different views from the same image. As the number of classes is equal to the number of images, it is formulated as a massive classification problem, which may require a huge buffer or memory bank. Later, SimCLR (Chen et al., 2020a) simplified the technique significantly and used an InfoNCE-based formulation to restrict the classification within an individual batch. While it is widely perceived that contrastive learning needs a “bag of tricks,” e.g., large batches, hyperparameter tuning, momentum encoding, memory queues, etc., later works (Chen & He, 2021; Yeh et al., 2021; HaoChen et al., 2021) show that many of these issues can be easily fixed. Recently, several even simpler non-contrastive learning methods (Bardes et al., 2021; Zbontar et al., 2021; Li et al., 2022) have been proposed, where one directly pushes the representations of different views from the same instance closer while maintaining a non-collapsing representation space. Image SSL methods mostly differ in their means to achieve a non-collapsing solution. These include classification with negative samples (Chen et al., 2020a), Siamese networks (He et al., 2020; Grill et al., 2020) and, more recently, covariance regularization (Ermolov et al., 2021; Zbontar et al., 2021; Bardes et al., 2021; HaoChen et al., 2021; Li et al., 2022; Bardes et al., 2022). Covariance regularization has also long been used in many classical unsupervised learning methods (Roweis & Saul, 2000; Tenenbaum et al., 2000; Wiskott & Sejnowski, 2002; Chen et al., 2018), likewise to enforce a non-collapsing solution. In fact, there is a duality between the spectral contrastive loss (HaoChen et al., 2021) and the non-contrastive loss, which we prove in Appendix B.
All previously mentioned instance-based SSL methods pull together representations of different views of the same instance. Intuitively, the representation would eventually be invariant to the transformation that generates those views. We would like to provide further insight into this learning process: The learning objective can be understood as using the inner product to capture the co-occurrence statistics of those image patches. We also provide visualization to study whether the learned representation truly has this invariance property.
Patch-Based Representation. Many works have explored the effectiveness of patch-based image features. In the supervised setting, BagNet (Brendel & Bethge, 2018) and Thiry et al. (2021) showed that aggregation of patch-based features can achieve most of the performance of supervised learning on image datasets. In the unsupervised setting, Gidaris et al. (2020) perform SSL by requiring a bag-of-patches representation to be invariant between different views. Due to architectural constraints, Image Transformer based methods naturally use a patch-based representation (He et al., 2021; Bao et al., 2021).
Learning Representation by Modeling the Co-Occurrence Statistics. The use of word vector representations has a long history in NLP, dating back to the 80s (Rumelhart et al., 1986; Dumais, 2004). Perhaps one of the most famous word embedding results, the word vector arithmetic operation, was introduced in Mikolov et al. (2013a). In particular, to learn this embedding, a task called “skip-gram” was used, where one uses the latent embedding of a word to predict the latent embeddings of the words in its context. A refinement was proposed in Mikolov et al. (2013b), where a simplified variant of Noise Contrastive Estimation (NCE) was introduced for training the “skip-gram” model. The task and loss are deeply connected to SimCLR and its InfoNCE loss. Later, a matrix factorization formulation was proposed in Pennington et al. (2014), which uses a carefully reprocessed co-occurrence matrix compared to latent semantic analysis. While the tasks in Word2Vec and SimCLR are apparently similar, the underlying interpretations are quite different. In instance-based SSL methods, one pervasive perception is that the encoding network is trying to build invariance, i.e., different views of the same instance shall be mapped to the same latent embedding. This work supplements this classical opinion and shows that, similar to Word2Vec, instance-based SSL methods can be understood as building a distributed representation of image patches by modeling the co-occurrence statistics.
3 SELF-SUPERVISED IMAGE PATCH EMBEDDING AND CO-OCCURRENCE STATISTICS MODELING
To study the role of patch embeddings, we use fixed-scale crops instead of multi-scale crops to learn a representation for fixed-size image patches. We show in Section 4 that any SSL objective can be used. As an example, we present a general formulation of covariance-regularization-based techniques (Bardes et al., 2021; Zbontar et al., 2021; Li et al., 2022; HaoChen et al., 2021): Definition 1. Intra-instance variance-invariance-covariance regularization (I$^2$-VICReg):
$$\min_\theta\ -\mathbb{E}_{p(x_1, x_2)}\left[z_1^\top z_2\right], \qquad \text{s.t.}\quad \mathbb{E}_{p(x)}\left[z z^\top\right] = \frac{1}{d_{\mathrm{emb}}} \cdot I \tag{1}$$
where $z = g(h)$ and $h = f(x; \theta)$. We call $h$ the embedding and $z$ the projection of an image patch $x$. All patches $\{x\}$ have the same size. The function $f(\cdot; \theta)$ is a deep neural network with parameters $\theta$, and $g$ is typically a much simpler neural network with only one or a few fully connected layers. $d_{\mathrm{emb}}$ is the dimension of the vector $z$. This general idea is shown in Figure 1. For an image, we extract fixed-size image patches, which are color augmented before the embedding¹ $f$ and projection $g$. Given an image patch $x_i$, the objective pushes its projection $z_i$ closer to the projections of the other image patches within the instance. Further, the regularization decorrelates different dimensions of $z$ while maintaining the variance of each dimension. Covariance regularization was first explicitly implemented in VICReg (Bardes et al., 2021). Later, Li et al. (2022) realize a similar effect by maximizing the Total Coding Rate (TCR) (Ma et al., 2007).
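A minimal PyTorch sketch of a penalized relaxation of Eq. 1 on a batch of paired patch projections; the penalty weight and the way the constraint is relaxed are our assumptions, not the authors' exact training objective:

```python
import torch

def i2_vicreg_loss(z1, z2, penalty=25.0):
    """Penalized relaxation of Eq. (1) for a batch of projected patch pairs.

    z1, z2: (B, d) projections of two augmented patches from the same image.
    The first term pulls paired projections together; the penalty pushes the
    empirical second moment E[z z^T] toward I / d, relaxing the constraint.
    """
    B, d = z1.shape
    sim = -(z1 * z2).sum(dim=1).mean()           # -E_{p(x1,x2)}[z1^T z2]
    z = torch.cat([z1, z2], dim=0)               # pool both views, (2B, d)
    second_moment = z.T @ z / z.shape[0]         # empirical E[z z^T], (d, d)
    target = torch.eye(d, device=z.device) / d
    reg = ((second_moment - target) ** 2).sum()  # squared Frobenius penalty
    return sim + penalty * reg

# Usage with random stand-ins for the projector outputs:
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = i2_vicreg_loss(z1, z2)
```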
Relationship of covariance-regularization-based methods to co-occurrence statistics modeling. Assume $x_1$ and $x_2$ are two color-augmented patches sampled from the same image. We denote their marginal distributions by $p(x_1)$ and $p(x_2)$, which include variation due to sampling different locations within an image, random color augmentation, as well as variation due to sampling images from the dataset. We also denote their joint distribution by $p(x_1, x_2)$, which assumes $x_1$ and $x_2$ are sampled from the same image. We show that covariance-regularization-based contrastive learning can be understood via the following objective, which approximates the normalized co-occurrence statistics by the inner product of the two embeddings $z_1$ and $z_2$ generated from $x_1$ and $x_2$:
$$\min \int p(x_1)\, p(x_2) \left[ w\, z_1^\top z_2 - \frac{p(x_1, x_2)}{p(x_1)\, p(x_2)} \right]^2 dx_1\, dx_2 \tag{2}$$
where $w$ is a fixed weight used to compensate for scale differences. ¹This is also called the representation in some related literature.
Proposition 3.1. Equation 2 can be rewritten in the following spectral contrastive form:
$$\min\ \mathbb{E}_{p(x_1, x_2)}\left[-z_1^\top z_2\right] + \lambda\, \mathbb{E}_{p(x_1)\, p(x_2)}\left(z_1^\top z_2\right)^2 \tag{3}$$
where $\lambda = \frac{w}{2}$. The proof is straightforward and is presented in Appendix A. As we can see, the first term resembles the similarity term in Eqn 1, and the second, spectral contrastive term (HaoChen et al., 2021) minimizes the inner product between two independent patch embeddings, which has the effect of orthogonalizing them. As we mentioned earlier, there exists a duality between the spectral contrastive regularization and the covariance regularization term in Eqn 1. Please refer to Appendix B for a more in-depth discussion.
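On a minibatch, the spectral contrastive form of Eq. 3 can be sketched as follows, treating non-paired batch elements as approximately independent samples (a simplifying assumption):

```python
import torch

def spectral_contrastive_loss(z1, z2, lam=1.0):
    """Minibatch version of Eq. (3).

    z1, z2: (B, d) projections of paired patches; row i of z1 and row i of z2
    form a positive pair, and cross-row pairs serve as (approximately
    independent) negative pairs.
    """
    B = z1.shape[0]
    pos = -(z1 * z2).sum(dim=1).mean()                # -E_{p(x1,x2)}[z1^T z2]
    inner = z1 @ z2.T                                 # all pairwise inner products
    off_diag = inner - torch.diag(torch.diag(inner))  # remove the positive pairs
    neg = (off_diag ** 2).sum() / (B * (B - 1))       # E_{p(x1)p(x2)}[(z1^T z2)^2]
    return pos + lam * neg
```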
Bag-of-Feature Model. After we have learned an embedding for fixed-scale image patches, we can embed all of the image patches $\{x_{11}, \ldots, x_{HW}\}$ within an instance into the embedding space, obtaining $\{h_{11}, \ldots, h_{HW}\}$. Then, we can obtain the representation for the whole image by linearly aggregating (averaging) all $h$'s, or by concatenating them; a sketch of the averaging variant is given below. The details and results are presented in later sections.
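A minimal PyTorch sketch of the averaging aggregation, assuming an encoder that maps image batches to embedding vectors; the patch count, patch size, and upsampling resolution are illustrative:

```python
import torch
import torch.nn.functional as F

def bag_of_patch_embedding(image, encoder, patch_size=100, n_patches=16, out_size=224):
    """Average the encoder embeddings of randomly cropped fixed-size patches.

    image: (3, H, W) tensor; encoder: maps a (B, 3, out_size, out_size) batch
    to (B, d) embeddings. Values here follow the evaluation protocol of
    Section 4 only loosely and are illustrative.
    """
    _, H, W = image.shape
    patches = []
    for _ in range(n_patches):
        top = torch.randint(0, H - patch_size + 1, (1,)).item()
        left = torch.randint(0, W - patch_size + 1, (1,)).item()
        crop = image[:, top:top + patch_size, left:left + patch_size]
        # Upsample each fixed-size patch to the encoder's input resolution.
        patches.append(F.interpolate(crop.unsqueeze(0), size=out_size,
                                     mode="bilinear", align_corners=False))
    h = encoder(torch.cat(patches, dim=0))   # (n_patches, d) patch embeddings
    return h.mean(dim=0)                     # linear (bag-of-words) aggregation
```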
4 QUANTITATIVE EMPIRICAL RESULTS
Through experiments, we demonstrate that representations learned by self-supervised learning methods trained with fixed-size patches are nearly as strong as those learned with multi-scale crops. In several cases, pretraining with multi-scale crops and evaluating on the fixed central crop is equivalent in performance to pretraining with fixed-size small patches and evaluating by averaging the embeddings across the image. We further show that for a multi-scale pretrained model, averaging the embeddings of fixed-scale small image patches converges to the embedding generated by the center-cropped image as the number of aggregated patches increases. Thus, for a network pretrained with multi-scale crops, passing the center crop into the network can be viewed as an efficient way to obtain the averaged patch embeddings. Further, we show that the patch-aggregated evaluation can improve the accuracy of the baseline models by a significant margin. Our experiments use the CIFAR-10, CIFAR-100, and the more challenging ImageNet-100 datasets. We also provide a short-epoch ImageNet pretraining to show that with small image patches, the training tends to have lower learning efficiency. In the last subsection, we dive into the invariance and equivariance analysis of the patch embedding. All implementation details can be found in Appendix C.
4.1 CIFAR
We first provide experimental results on the standard CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) using ResNet-34. The results are shown in Figure 2 and Tables 1 and 2. We show results obtained under both the linear evaluation protocol and the kNN evaluation protocol; the two are consistent with each other. The standard evaluation method generates the embedding using the full image, both during training of the linear classifier and at final evaluation (Central). Alternatively, an image embedding is generated by inputting a certain number of patches (at the same scale as training time, upsampled) into the neural network and averaging the resulting patch embeddings. This is denoted by 1, 16, and 256 patches.
The main observation we make is that pretraining on small patches and evaluating with the averaged embedding performs on par with or better than pretraining with random-scale patches and evaluating with the full-image representation. On CIFAR-10 with the TCR method, the 256-patch evaluation with a fixed pretraining scale of 0.2 outperforms the full-image evaluation with a random pretraining scale between 0.08 and 1, which is the standard scale range used. When averaging only 16 patches, the same model performs on par with full-image evaluation. Under kNN evaluation, pretraining with random-scale patches not spanning the full range 0.08 to 1.0 gives comparatively much worse performance than under linear evaluation. However, the aggregated embedding does not suffer this comparative drop and can still outperform the full-image evaluation. Using the results in Tables 1, 2 and 3, we can draw the same conclusions on other datasets and with other self-supervised methods (VICReg (Bardes et al., 2021) and BYOL (Grill et al., 2020)).
4.2 IMAGENET-100 AND IMAGENET
We provide experimental results on the ImageNet-100 and ImageNet dataset (Deng et al., 2009) with ResNet-50. We present our results using the linear evaluation protocol in Table 3 and Figure 3.
The behavior observed on CIFAR-10 generalizes to ImageNet-100. Averaging embeddings of 16 small patches produced by the patch-based pretrained model performs almost as well as the “central” evaluation of the embedding produced by the baseline model on the ImageNet-100 dataset, as shown in Table 3. In Figure 3(b), we show short-epoch pretrained models on ImageNet. As the patch-based pretrained model tends to see much less information compared to the baseline multi-scale pretraining, there is a 4.5% gap between the patch-based model and the baseline model.
4.3 PATCHED-AGGREGATION BASED EVALUATION OF MULTI-SCALE PRETRAINED MODEL
Our results in the last two sections show that the best performance is obtained when the pretraining step is done using patches of various sizes and the evaluation step is done using the aggregated patch embeddings. It is therefore interesting to evaluate the embeddings of models pretrained with other self-supervised learning methods to investigate whether this evaluation protocol provides a uniform performance boost. We do this evaluation on the VICReg model pretrained for 1000 epochs and a SwAV model pretrained for 800 epochs. All models are downloaded from their original repositories. Table 4 shows the linear evaluation performance on the validation set of ImageNet using the full image and the aggregated embedding. On all the models, the aggregated embedding outperforms full-image evaluation, often by more than 1%. Increasing the number of patches averaged in the aggregation also increases the performance. We do not go beyond 48 patches because of memory and run-time issues, but we hypothesize that a further increase in the number of patches would improve the performance further, as we have demonstrated on CIFAR-10, where 256 patches significantly outperform 16 patches.
4.4 CONVERGENCE OF PATCH-BASED EMBEDDING TO WHOLE-INSTANCE EMBEDDING.
In this experiment, we show that for a multi-scale pretrained SSL model, linearly aggregating the patch embeddings converges to the instance embedding. We take a multi-scale pretrained VICReg baseline model and 512 randomly selected images from the ImageNet dataset. For each image, we first get the embedding of the 224 × 224 center crop. Then we randomly aggregate $N$ embeddings of different 100 × 100 image patches and calculate the cosine similarity between the patch-aggregated embedding and the center-crop embedding. Figure 3(a) shows that the aggregated representation converges to the instance embedding as $N$ increases from 1 to 16 to all the image patches².
4.5 CONCATENATION AGGREGATION FURTHER IMPROVES SSL PERFORMANCE
An alternative way to aggregate embeddings is to concatenate them into a single larger vector. To test how this method performs, we downloaded the checkpoints of SOTA SSL models pretrained on the CIFAR10 dataset from solo-learn (da Costa et al., 2022), and tested linear and kNN accuracy with concatenation aggregation. As shown in Table 5, concatenation aggregation further improves the performance of these SOTA SSL models. Even with only 25 patches, the k-nearest-neighbor (kNN) accuracy of the aggregated embedding outperforms the baseline linear evaluation accuracy by a large margin.
5 PATCH EMBEDDING VISUALIZATION: INVARIANCE OR EQUIVARIANCE?
The instance-augmentation-invariant SSL methods are primarily motivated from an invariance perspective. In this section, we provide CIFAR-10 nearest neighbor and ImageNet cosine-similarity heatmap visualization to further understand the learned representation. In the CIFAR-10 experiment, we take a model pre-trained with 14× 14 image patches on CIFAR-10 and calculate the projection and embedding vectors of all different image patches from the training set. Then for a given 14× 14
²“All”: extracting overlapping patches with stride 4 and aggregating about 1000 patch embeddings in total.
image patch (e.g., the ones circled by red dashed boxes in Fig 4), we visualize its k nearest neighbors in terms of cosine similarity in both the projection space and the embedding space. Figure 4 shows the results for two different image patches. The patches circled by green boxes are image patches from another instance of the same category, whereas the uncircled patches are from the same instance.
In the ImageNet experiment, we take a multi-scale pretrained VICReg model; then, for a given image patch (e.g., circled by red dashed boxes in Figure 5), we visualize the cosine similarity between the embedding of this patch and the embeddings of the other patches from the same instance. In this experiment, we use two different image patch scales, 71 × 71 and 100 × 100. The heatmap visualizations are normalized to the same scale.
Overall, we observe that the projection vectors are significantly more invariant than the embedding vectors. This is apparent from both Figure 4 and Figure 5. For the CIFAR kNN patches, NNs in the embedding space are visually much more similar than NNs in the projection space. In fact, in the embedding space, the nearest NNs are mostly locally shifted patches with similar “part” information. In the projection space, however, many NNs are patches with different “part” information from the same class. E.g., we can see in Figure 4 that an NN of a “wheel” in the projection space might be a “door” or a “window”; however, the NNs in the embedding space all contain “wheel” information. In the second example, the NNs of a “horse legs” patch may show different “horse” body parts, whereas the NNs in the embedding space are all “horse legs”.
The heatmap visualization on ImageNet illustrates the same phenomenon for the multi-scale pretrained VICReg model. The projection vector from a patch has a high similarity to that of the query patch whenever the patch has enough information to infer the class of the image. For embedding vectors, the similarity area is much more localized to the query patch, or to other patches with similar features (the other leg of the dog in Figure 5). This general observation is consistent with the visualizations in Bordes et al. (2021). We slightly abuse the term and call this property of the embedding vectors equivariant, in contrast to the invariance possessed by the projection vectors. A more thorough visualization is provided in Appendix E.
6 DISCUSSION
In this paper, we seek to provide an understanding of the success of instance-augmentation-invariant SSL methods. We demonstrate that learning an embedding for fixed-size image patches (I$^2$-VICReg) and linearly aggregating them over the same instance can achieve on-par or even better performance than multi-scale pretraining. On the other hand, with a multi-scale pretrained model, we show that the whole-image embedding is essentially the average of patch embeddings. Conceptually, we establish a close connection between I$^2$-VICReg and modeling the co-occurrence statistics of patches.
Through visualizing nearest neighbors and cosine-similarity heatmaps, we find that the projection vector is relatively invariant while the embedding vector is instead equivariant, which may explain its higher discriminative performance. This result suggests that the SSL objective, which learns the co-occurrence statistics, encourages an invariant solution, while the more favorable property of equivariance is achieved by the implicit bias introduced by the projector. In the future, it would be interesting to explore whether it is possible to directly encourage equivariance in the objective function in a more principled manner instead of relying on the projector head. For this, prior work in NLP may provide useful guidance. In Pennington et al. (2014), the word embedding is learned by fitting the log co-occurrence matrix, which avoids the problem of being dominated by large elements and allows the embedding to carry richer information. Similarly, an SSL objective that implicitly fits the log co-occurrence matrix may learn a more equivariant embedding, which may be an interesting direction for future work.
Many open questions remain in the quest to understand image SSL. For example, it is still unclear why the projector $g$ makes the embedding $h$ more equivariant than the projection $z$. For this, we hypothesize that the role of the projector can be understood as learning a feature representation for a kernel function in the embedding space, since for $h_1, h_2$, the dot product of $g(h_1)$ and $g(h_2)$ always represents some positive semi-definite kernel on the original space, $k(h_1, h_2) = g(h_1)^\top g(h_2)$. It is possible that the flexible kernel function on the embedding alleviates the excess-invariance problem caused by the objective on the projection vectors, which allows the embedding to be more equivariant and perform better. We leave further analysis of this hypothesis to future work.
A PROOF OF PROPOSITION 3.1
Proposition A.1. Equation 2 can be rewritten in the following contrastive form:
$$\mathbb{E}_{p(x_1, x_2)}\left[-z_1^\top z_2\right] + \lambda\, \mathbb{E}_{p(x_1)\, p(x_2)}\left(z_1^\top z_2\right)^2 \tag{4}$$
where $\lambda = \frac{w}{2}$.
Proof. Since we are dealing with an objective, we can drop constants that do not depend on the embeddings $z_1$ and $z_2$ whenever they occur.
$$\begin{aligned} L &= \int p(x_1)\, p(x_2) \left[ w\, z_1^\top z_2 - \frac{p(x_1, x_2)}{p(x_1)\, p(x_2)} \right]^2 dx_1\, dx_2 &(5)\\ &= \int p(x_1)\, p(x_2) \left[ (w\, z_1^\top z_2)^2 - 2 w\, z_1^\top z_2 \cdot \frac{p(x_1, x_2)}{p(x_1)\, p(x_2)} \right] dx_1\, dx_2 + C &(6)\\ &= \int p(x_1)\, p(x_2)\, (w\, z_1^\top z_2)^2\, dx_1\, dx_2 - 2 w \int p(x_1, x_2)\, (z_1^\top z_2)\, dx_1\, dx_2 + C &(7)\\ &= 2 w \left( \mathbb{E}_{p(x_1, x_2)}\left[-z_1^\top z_2\right] + \lambda\, \mathbb{E}_{p(x_1)\, p(x_2)}\left(z_1^\top z_2\right)^2 \right) + C &(8) \end{aligned}$$
where $\lambda = \frac{w}{2}$. Since the positive factor $2w$ and the constant $C$ do not affect the minimizer, minimizing $L$ is equivalent to minimizing the objective in equation 4. $\square$
B THE DUALITY BETWEEN SPECTRAL CONTRASTIVE REGULARIZATION AND COVARIANCE REGULARIZATION.
For Objective 3 and Objective 1, as the similarity term is the same, we can focus our discussion on the regularization term, particularly with an SGD optimizer. For simplicity, we assume that the embedding $z$ is L2-normalized and that each embedding dimension also has zero mean and normalized variance. Given a minibatch of size $N$ with embedding matrix $Z \in \mathbb{R}^{d \times N}$ (one embedding per column), the spectral regularization term $\mathbb{E}_{p(x_1)\, p(x_2)}\left(z_1^\top z_2\right)^2$ reduces to $\left\| Z^\top Z - I_N \right\|_F^2$. By Lemma 3.2 from Le et al. (2011), we have:
$$\left\| Z^\top Z - I_N \right\|_F^2 = \left\| Z Z^\top - I_d \right\|_F^2 + C' = \left\| Z Z^\top - \frac{N}{d} I_d \right\|_F^2 + C \tag{9}$$
where $C'$ and $C$ are constants. The equalities follow because the embeddings are normalized, so that $\operatorname{tr}(Z^\top Z) = \operatorname{tr}(Z Z^\top) = N$ while $\operatorname{tr}\big((Z^\top Z)^2\big) = \operatorname{tr}\big((Z Z^\top)^2\big)$. Finally, $\left\| Z Z^\top - \frac{N}{d} I_d \right\|_F^2$ is the mini-batch version of the covariance regularization constraint $\mathbb{E}_{p(x)}\left[z z^\top\right] = \frac{1}{d_{\mathrm{emb}}} \cdot I$.
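The constant offset in equation 9 can be checked numerically; a small NumPy verification (under the column-normalization assumption) is:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 8, 64
Z = rng.standard_normal((d, N))
Z /= np.linalg.norm(Z, axis=0, keepdims=True)   # L2-normalize each embedding (column)

lhs = np.linalg.norm(Z.T @ Z - np.eye(N), "fro") ** 2
rhs = np.linalg.norm(Z @ Z.T - (N / d) * np.eye(d), "fro") ** 2

# For any column-normalized Z, the gap is the data-independent constant
# N**2/d - N (here 448), so the two regularizers share the same minimizers.
print(lhs - rhs, N**2 / d - N)
```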
A thorough discussion is beyond the scope of this work. We refer the curious readers to Garrido et al. (2022) for a more general discussion on the duality between contrastive learning and non-contrastive learning.
C IMPLEMENTATION DETAILS
C.1 CIFAR-10 AND CIFAR-100
For all experiments, we pretrain a ResNet-34 for 600 epochs. We use a batch size of 1024, the LARS optimizer, and a weight decay of 1e-4. The learning rate is set to 0.3 and follows a cosine decay schedule, with 10 epochs of warmup and a final value of 0. In the TCR loss, λ is set to 30.0 and ϵ is set to 0.2. The projector network consists of 2 linear layers with 4096 hidden units and, respectively, 128 output units for the CIFAR-10 experiments and 512 output units for the CIFAR-100 experiments. All layers are separated by ReLU and BatchNorm layers. The data augmentations used are identical to those of BYOL.
C.2 IMAGENET-100 AND IMAGENET
For all the experiments, we pretrain a ResNet-50 with the TCR loss for 400 epochs on ImageNet-100 and 100 epochs on ImageNet. We use a batch size of 1024, the LARS optimizer, and a weight decay of 1e-4. The learning rate is set to 0.1 and follows a cosine decay schedule, with 10 epochs of warmup and a final value of 0. In the TCR loss, λ is set to 1920.0 and ϵ is set to 0.2. The projector network consists of 3 linear layers with 8192 units each, separated by ReLU and BatchNorm layers. The data augmentations used are identical to those of BYOL.
C.3 IMPLEMENTATION DETAILS FOR SECTION 4.5
For all the experiments, we downloaded the checkpoints of SOTA SSL models pretrained on the CIFAR10 dataset from solo-learn. Each method is pretrained for 1000 epochs, and the hyperparameters used for each method are described in solo-learn. The backbone in all these checkpoints is ResNet-18, which outputs a 512 × 5 × 5 tensor for each image. We apply spatial average pooling (stride = 2, window size = 3) to this tensor and flatten the result to obtain a feature vector of dimension 2048.
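A sketch of this pooling-and-flattening step in PyTorch (the input tensor is a random stand-in for the backbone output):

```python
import torch
import torch.nn.functional as F

# Random stand-in for the ResNet-18 backbone output described above.
feat = torch.randn(1, 512, 5, 5)

# Window-3, stride-2 spatial average pooling yields a 512 x 2 x 2 map;
# flattening concatenates the four pooled patch features into one vector.
pooled = F.avg_pool2d(feat, kernel_size=3, stride=2)  # shape (1, 512, 2, 2)
vec = pooled.flatten(start_dim=1)                     # shape (1, 2048)
print(vec.shape)
```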
D IMAGENET INTRA-INSTANCE VISUALIZATION
In this section, we provide further visualization of the multi-scale pretrained VICReg model; the results are shown in Fig 6. Here we use image patches of scale 0.1 to calculate the cosine-similarity heatmaps; the query patch is marked by the red dashed boxes. The embedding space contains more localized information, whereas the projection space is relatively more invariant, especially when the patch has enough information to determine the category.
E CIFAR10 KNN VISUALIZATION
This section continues the visualization of the model pretrained with 14 × 14 patches. In this visualization, we primarily use kNN and cosine similarity to find the closest neighbors of the query patches, marked by red dashed boxes. Again, green boxes indicate that the patches are from other instances of the same category; red boxes indicate that the patches are from other instances of a different category. Patches that do not have a colored box are from the same instance. In the following, we discuss several interesting aspects of the problem.
Additional Projection and Embedding Spaces Comparison. As we can see in Figure 7, the embedding space exhibits a much lesser degree of collapse of semantic information. The projection space tends to collapse different “parts” of a class to similar vectors, whereas the embedding space preserves more information about the details in a patch. This is manifested by the higher visual similarity between neighboring patches.
Embedding Space with 256 kNN. In the previous CIFAR visualization, we only show kNN with 119 neighbors. In Figure 8 and Figure 9, we provide kNN with 255 neighbors, and the same set of conclusions holds.
Different “Parts” in the Embedding Space. In Figure 10, we provide some more typical patches of “parts” and show their embedding neighbors. While many parts are shared by different instances, we also find some less ideal cases, e.g. Figure 10(4a)(2d), where the closest neighbors are nearly all from the same instance.
As we discussed earlier, the objective is essentially modeling the co-occurrence statistics of patches. If the same patch is not “shared” by different instances, it is relatively uninformative. While the exact same patch might not be “shared”, the color augmentation and deep image prior embedded in the network design may create approximate sharing. In Figure 11 and Figure 12, we provide two examples of the compositional structure of instances. | 1. What is the main contribution of the paper regarding self-supervised learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its pipeline and theoretical analyses?
3. Do you have any concerns regarding the methodology and experiments, especially in the use of image patches and their representation aggregation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on the crucial effect of image patches in self-supervised learning (SSL), tries to show that the success of Siamese-network-based SSL comes from learning a distributed representation of image patches (i.e., the co-occurrence statistics of image patches), and demonstrates the effectiveness of patch representation aggregation for other SSL methods.
Strengths And Weaknesses
Strength:
This paper is well motivated and the perspective of exploring the co-occurrence statistics modeling in SSL is interesting and promising.
The analyses of this paper are abundant with extensive experimental results, intuitive visualizations, and some theoretical proofs.
Weakness:
Overall, the insight and motivation of this paper are good. However, neither the theoretical analyses nor the experimental results sufficiently support the claims and insight.
Specifically, the key concerns are as follows:
The pipeline of the proposed I^2VICReg is unclear and ambiguous. Because the proposed method is built on VICReg, it means that two different views are required for each image. According to Figure 1, it seems that global average pooling is used to obtain a unified feature vector for each view. However, according to Eq. (1), it looks like that the embedding vectors of the image patches are directly used in the way of VICReg. Or is each image patch just one view for an image? If yes, how many image patches (views) are used during training? Therefore, a pseudocode or an algorithm are required to provide more details.
Why do Proposition 3.1 and the provided proofs support the claim that ‘the success of SSL comes from learning a representation of image patches based on their co-occurrence statistics in the images’? A more in-depth discussion is needed. If an SSL method does not use covariance regularization, how should Eq. (2) and Eq. (3) be explained?
Note that the effect of the aggregated embedding during test, especially in the linear evaluation protocol, cannot directly support the claims of this paper about training. This is because, in this case, the ground truth (real labels) of the images is used in linear evaluation. If we take image patches as different views of the same image, using multiple image patches during test can be seen as an operation of data augmentation at test time. If we take image patches as local features, using multiple image patches during test can be seen as using ‘bag-of-words’ [1]. Therefore, the success of such an operation (using multiple crops during test) is common sense, and cannot causally support the claims in this paper.
The datasets of CIFAR-10 and CIFAR-100 are too simple. Why not provide the results on ImageNet-1K?
This paper just uses the simple RandomResizedCrop operation to obtain the image patches. How about using non-overlapping image patches like vision transformer?
[1] In Defense of Nearest-Neighbor Based Image Classification. CVPR 2008.
Clarity, Quality, Novelty And Reproducibility
Clarity: can be improved.
Quality: Great.
Novelty: Incremental.
Reproducibility: Good. |
ICLR | Title
Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding Explains the Performance
Abstract
Recently, self-supervised learning (SSL) has achieved tremendous empirical advancements in learning image representation. However, our understanding of the principle behind learning such a representation are still limited. This work shows that the success of the SOTA Siamese-network-based SSL approaches is primarily based on learning a distributed representation of image patches. In particular, we show that when we learn a representation only for fixed-scale image patches and aggregate different patch representations for an image (instance), it can achieve on par or even better results than the baseline methods that use the whole image. Further, we show that the patch representation aggregation can also improve various SOTA baseline methods by a large margin. We also establish a formal connection between the Siamese-network-based SSL objective and the image patches co-occurrence statistics modeling, which supplements the prevailing invariance perspective. By visualizing the nearest neighbors of different image patches in the embedding space and projection space, we show that while the projection has more invariance, the embedding space tends to preserve more equivariance and locality. The evidence shows that it is a promising direction to simplify the SOTA methods to build better understanding.
1 INTRODUCTION
In many application domains, Self-supervised representation learning experienced tremendous advancements in the past few years. In terms of the quality of the learned feature, unsupervised learning has caught up with supervised learning or even surpassed the latter in many cases. This trend promises unparalleled scalability for data-driven machine learning in the future. One of the most successful paradigms in image self-supervised representation learning is based on instanceaugmentation-invariant contrastive learning (Wu et al., 2018; Chen et al., 2020a;b) using a Siamese network architecture Bromley et al. (1993). This style of learning methods achieves the following general goal: 1) It brings the representation of two different views (augmentation) of the same instance (image) closer. 2) It keeps the representation informative of the input; in other words, avoids collapse. Several recent non-contrastive methods achieve competitive performance by explicitly achieving those two goals (Bardes et al., 2021; Li et al., 2022). While we celebrate the empirical success of SSL in a wide range of benchmarks, our understanding of the principle of this learning process are still very limited. In this work, we seek the principle behind the instance-based SSL methods and argue that the success largely comes from learning a representation of image patches based on their co-occurrence statistics in the images. To demonstrate this, we simplify the current SSL method to using a single crop scale to learn a representation of image patches of fixed size and establish a formal connection between our formulation and co-occurrence statistics modeling. The patch representation can be linearly aggregated (bag-of-words) to form the representation of the image. The learned representation achieves similar or better performance than the baseline representation, which is based on the entire image. In particular, even kNN classifier works surprisingly well with the aggregated patch feature. These findings also resonate with recent works in supervised learning based on patch features (Brendel & Bethge, 2018; Dosovitskiy et al., 2020; Trockman & Kolter, 2022). We also show that for baseline SSL methods pretrained with multi-scale crops, the whole-image representation is essentially an aggregation of different patch representations from the same instance.
Further, given various SOTA baseline SSL models, we show that the same aggregation process can further improve the representation quality. Then we provide a cosine-similarity-based visualization of image patches representation on both ImageNet and CIFAR10 datasets. Particularly, we find that while the projection space has achieved significant invariance, the embedding space, frequently used for representation evaluation, tends to preserve more locality and equivariance.
Our discoveries may provide useful explanations and understanding for the success of the instanceaugmentation-invariant SSL methods. The co-occurrence statistics modeling formulation and equivariance preserving property in the embedding space both supplement the current prevailing invariance perspective. Finally, these results motivate an interesting discussion of several potential future directions.
2 RELATED WORKS
Instance-Based Self-Supervised Learning: Invariance without Collapse. The instance contrastive learning (Wu et al., 2018) views each of the images as a different class and uses data augmentation (Dosovitskiy et al., 2016) to generate different views from the same image. As the number of classes is equal to the number of images, it is formulated as a massive classification problem, which may require a huge buffer or memory bank. Later, SimCLR (Chen et al., 2020a) simplifies the technique significantly and uses an InfoNCE-based formulation to restrict the classification within an individual batch. While it’s widely perceived that contrastive learning needs the “bag of tricks,” e.g., large batches, hyperparameter tuning, momentum encoding, memory queues, etc. Later works (Chen & He, 2021; Yeh et al., 2021; HaoChen et al., 2021) show that many of these issues can be easily fixed. Recently, several even simpler non-contrastive learning methods(Bardes et al., 2021; Zbontar et al., 2021; Li et al., 2022) are proposed, where one directly pushes the representation of different views from the same instance closer while maintaining a none-collapsing representation space. Image SSL methods mostly differ in their means to achieve a non-collapsing solution. This include classification versus negative samples(Chen et al., 2020a), Siamese networks (He et al., 2020; Grill et al., 2020) and more recently, covariance regularization (Ermolov et al., 2021; Zbontar et al., 2021; Bardes et al., 2021; HaoChen et al., 2021; Li et al., 2022; Bardes et al., 2022). The covariance regularization has also long been used in many classical unsupervised learning methods (Roweis & Saul, 2000; Tenenbaum et al., 2000; Wiskott & Sejnowski, 2002; Chen et al., 2018), also to enforce a non-collapsing solution. In fact, there is a duality between the spectral contrastive loss(HaoChen et al., 2021) and the non-contrastive loss, which we prove in Appendix B.
All previously mentioned instance-based SSL methods pull together representations of different views of the same instance. Intuitively, the representation would eventually be invariant to the transformation that generates those views. We would like to provide further insight into this learning process: The learning objective can be understood as using the inner product to capture the co-occurrence statistics of those image patches. We also provide visualization to study whether the learned representation truly has this invariance property.
Patch-Based Representation. Many works have explored the effectiveness of path-based image features. In the supervised setting, Bagnet(Brendel & Bethge, 2018) and Thiry et al. (2021) showed that aggregation of patch-based features can achieve most of the performance of supervised learning on image datasets. In the unsupervised setting, Gidaris et al. (2020) performs SSL by requiring a bag-of patches representation to be invariant between different views. Due to architectural constraints, Image Transformer based methods naturally use a patch-based representation (He et al., 2021; Bao et al., 2021).
Learning Representation by Modeling the Co-Occurrence Statistics. Word vector representations have a long history in NLP, dating back to the 80s (Rumelhart et al., 1986; Dumais, 2004). Perhaps the most famous word embedding result, word vector arithmetic, was introduced in Mikolov et al. (2013a). To learn this embedding, a task called "skip-gram" was used, in which the latent embedding of a word is used to predict the latent embeddings of the surrounding context words. A refinement was proposed in Mikolov et al. (2013b), where a simplified variant of Noise Contrastive Estimation (NCE) was introduced for training the skip-gram model. That task and loss are deeply connected to SimCLR and its InfoNCE loss. Later, a matrix factorization formulation was proposed in Pennington et al. (2014), which uses a more carefully preprocessed co-occurrence matrix than latent semantic analysis. While the tasks in Word2Vec and SimCLR are superficially similar, the underlying interpretations are quite different. In instance-based SSL methods, one pervasive perception is that the encoding network is trying to build invariance, i.e., different views of the same instance should be mapped to the same latent embedding. This work supplements that classical view and shows that, similar to Word2Vec, instance-based SSL methods can be understood as building a distributed representation of image patches by modeling their co-occurrence statistics.
3 SELF-SUPERVISED IMAGE PATCH EMBEDDING AND CO-OCCURRENCE STATISTICS MODELING
To study the role of patch embeddings, we use fixed-scale crops instead of multi-scale crops to learn a representation for fixed-size image patches. We show in Section 4 that any SSL objective can be used. As an example, we present a general formulation of covariance-regularization-based techniques (Bardes et al., 2021; Zbontar et al., 2021; Li et al., 2022; HaoChen et al., 2021):

Definition 1. Intra-instance variance-invariance-covariance regularization (I$^2$ VICReg):
$$\min_{\theta} \; -\,\mathbb{E}_{p(x_1,x_2)}\left[z_1^T z_2\right] \quad \text{s.t.} \quad \mathbb{E}_{p(x)}\left[zz^T\right] = \frac{1}{d_{emb}}\,I \qquad (1)$$
where $z = g(h)$ and $h = f(x; \theta)$. We call $h$ the embedding and $z$ the projection of an image patch $x$; all patches $\{x\}$ have the same size. The function $f(\cdot\,; \theta)$ is a deep neural network with parameters $\theta$, and $g$ is typically a much simpler network with only one or a few fully connected layers. $d_{emb}$ is the dimension of the vector $z$. This general idea is shown in Figure 1. For an image, we extract fixed-size image patches, which are color-augmented before the embedding¹ $f$ and the projection $g$. Given an image patch $x_i$, the objective pushes its projection $z_i$ closer to the projections of the other image patches within the same instance. Further, the regularization decorrelates the different dimensions of $z$ while maintaining the variance of each dimension. Covariance regularization was first explicitly implemented in VICReg (Bardes et al., 2021); later, Li et al. (2022) realized a similar effect by maximizing the Total Coding Rate (TCR) (Ma et al., 2007).
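To make the objective concrete, here is a minimal PyTorch sketch, with the hard constraint of Eqn 1 relaxed into a Frobenius-norm penalty on the mini-batch second moment (the exact regularizer differs across VICReg, TCR, and related methods; the function and variable names here are ours, not from a released implementation):

```python
import torch

def i2_vicreg_loss(z1, z2, reg_weight=1.0):
    """Sketch of the I^2 VICReg objective for a batch of patch pairs.

    z1, z2: (N, d) projections of two augmented patches from the same image.
    The constraint E[z z^T] = I / d is relaxed into a quadratic penalty.
    """
    n, d = z1.shape
    invariance = -(z1 * z2).sum(dim=1).mean()        # -E[z1^T z2]
    z = torch.cat([z1, z2], dim=0)                   # pool both views
    second_moment = z.T @ z / z.shape[0]             # estimate of E[z z^T]
    target = torch.eye(d, device=z.device) / d
    covariance = (second_moment - target).pow(2).sum()
    return invariance + reg_weight * covariance
```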
Relationship of covariance-regularization-based methods to co-occurrence statistics modeling. Assume $x_1$ and $x_2$ are two color-augmented patches sampled from the same image. We denote their marginal distributions by $p(x_1)$ and $p(x_2)$, which include variation due to sampling different locations within an image, random color augmentation, and sampling images from the dataset. We denote their joint distribution by $p(x_1, x_2)$, which assumes $x_1$ and $x_2$ are sampled from the same image. We show that covariance-regularization-based contrastive learning can be understood through the following objective, which approximates the normalized co-occurrence statistics by the inner product of the two embeddings $z_1$ and $z_2$ generated from $x_1$ and $x_2$:
$$\min \int p(x_1)\,p(x_2)\left[w\,z_1^T z_2 - \frac{p(x_1,x_2)}{p(x_1)\,p(x_2)}\right]^2 dx_1\,dx_2 \qquad (2)$$

where $w$ is a fixed weight used to compensate for scale differences.

¹This is also called the representation in some of the related literature.
Proposition 3.1. Equation 2 can be rewritten in the following spectral contrastive form:

$$\min \; \mathbb{E}_{p(x_1,x_2)}\left[-z_1^T z_2\right] + \lambda\,\mathbb{E}_{p(x_1)p(x_2)}\left(z_1^T z_2\right)^2 \qquad (3)$$
where $\lambda = \frac{w}{2}$. The proof is straightforward and is presented in Appendix A. The first term resembles the similarity term in Eqn 1, while the second, spectral contrastive term (HaoChen et al., 2021) minimizes the inner product between two independent patch embeddings, which has the effect of orthogonalizing them. As mentioned earlier, there exists a duality between the spectral contrastive regularization and the covariance regularization term in Eqn 1; please refer to Appendix B for a more in-depth discussion.
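For comparison, Eqn 3 admits an equally short mini-batch sketch, where the independent pairs of the second expectation are approximated by the off-diagonal cross terms within the batch (one common estimator among several; names are ours):

```python
def spectral_contrastive_loss(z1, z2, lam=0.5):
    """Sketch of Eqn 3; `lam` plays the role of lambda = w / 2."""
    n = z1.shape[0]
    sim = z1 @ z2.T                                  # (N, N) inner products
    positive = -sim.diagonal().mean()                # aligned pairs ~ p(x1, x2)
    off_diag = sim.flatten()[:-1].view(n - 1, n + 1)[:, 1:]  # drop the diagonal
    negative = off_diag.pow(2).mean()                # cross pairs ~ p(x1)p(x2)
    return positive + lam * negative
```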
Bag-of-Features Model. After we have learned an embedding for fixed-scale image patches, we can embed all of the image patches $\{x_{11}, \dots, x_{HW}\}$ within an instance into the embedding space, obtaining $\{h_{11}, \dots, h_{HW}\}$. We can then obtain the representation for the whole image by linearly aggregating (averaging) all $h$ vectors, or by concatenating them. The details and results are presented in later sections.
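The following is a minimal PyTorch sketch of this bag-of-features aggregation; the grid extraction, helper name, and default sizes are our own illustration:

```python
import torch

def bag_of_patch_embedding(image, f, patch_size=14, stride=14):
    """Average patch embeddings into a whole-image representation (sketch).

    image: (C, H, W) tensor; f: the patch encoder, h = f(x).
    Patches are taken on a regular grid here; in the experiments they may
    also be sampled randomly and upsampled to the network input size first.
    """
    c = image.shape[0]
    patches = image.unfold(1, patch_size, stride).unfold(2, patch_size, stride)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch_size, patch_size)
    h = f(patches)              # (num_patches, d_h) patch embeddings
    return h.mean(dim=0)        # linear (bag-of-features) aggregation
```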
4 QUANTITATIVE EMPIRICAL RESULTS
Through experiments, we demonstrate that representations learned by self-supervised methods trained with fixed-size patches are nearly as strong as those learned with multi-scale crops. In several cases, pretraining with multi-scale crops and evaluating on the fixed central crop is equivalent in performance to pretraining with fixed-size small patches and evaluating by averaging the embedding across the image. We further show that, for a multi-scale pretrained model, the average embedding of fixed-scale small image patches converges to the embedding generated by the center-cropped image as the number of aggregated patches increases. Thus, for a network pretrained with multi-scale crops, passing the center crop through the network can be viewed as an efficient way to obtain the averaged patch embedding. Further, we show that patch-aggregated evaluation can improve the accuracy of baseline models by a significant margin. Our experiments use the CIFAR-10, CIFAR-100, and the more challenging ImageNet-100 datasets. We also provide a short-epoch ImageNet pretraining run to show that, with small image patches, training tends to have lower learning efficiency. In the last subsection, we dive into an invariance and equivariance analysis of the patch embedding. All implementation details can be found in Appendix C.
4.1 CIFAR
We first provide experimental results on the standard CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) using ResNet-34. The results are shown in Figure 2 and Tables 1 and 2. We report results obtained with both the linear evaluation protocol and the kNN evaluation protocol, and the two are consistent with each other. The standard evaluation method generates the embedding from the full image, both during training of the linear classifier and at final evaluation (Central). Alternatively, an image embedding is generated by feeding a certain number of patches (at the same scale as at training time, upsampled) into the network and averaging the patch embeddings; this is denoted by 1, 16, and 256 patches.
The main observation we make is that pretraining on small patches and evaluating with the averaged embedding performs on par with or better than pretraining with random-scale patches and evaluating with the full-image representation. On CIFAR-10 with the TCR method, 256-patch evaluation with a fixed pretraining scale of 0.2 outperforms full-image evaluation with a random pretraining scale between 0.08 and 1, the standard scale range. When averaging only 16 patches, the same model performs on par with full-image evaluation. Under kNN evaluation, pretraining with random-scale patches not spanning the full range 0.08 to 1.0 gives comparatively much worse performance than under linear evaluation. The aggregated embedding, however, does not suffer this comparative drop and can still outperform full-image evaluation. Using the results in Tables 1, 2, and 3, we can draw the same conclusion on other datasets and with other self-supervised methods (VICReg (Bardes et al., 2021) and BYOL (Grill et al., 2020)).
4.2 IMAGENET-100 AND IMAGENET
We provide experimental results on the ImageNet-100 and ImageNet datasets (Deng et al., 2009) with ResNet-50. We present our results under the linear evaluation protocol in Table 3 and Figure 3.
The behavior observed on CIFAR-10 generalizes to ImageNet-100. Averaging the embeddings of 16 small patches produced by the patch-based pretrained model performs almost as well as the "central" evaluation of the embedding produced by the baseline model on ImageNet-100, as shown in Table 3. In Figure 3(b), we show short-epoch pretrained models on ImageNet. As the patch-based pretrained model sees much less information than the baseline multi-scale pretraining, there is a 4.5% gap between the patch-based model and the baseline model.
4.3 PATCH-AGGREGATION-BASED EVALUATION OF MULTI-SCALE PRETRAINED MODELS
Our results in the last two sections show that the best performance is obtained when pretraining is done with patches of various sizes and evaluation is done with aggregated patch embeddings. It is therefore interesting to evaluate the embeddings of models pretrained with other self-supervised learning methods, to investigate whether this evaluation protocol provides a uniform performance boost. We do this evaluation on a VICReg model pretrained for 1000 epochs and a SwAV model pretrained for 800 epochs; all models are downloaded from their original repositories. Table 4 shows the linear evaluation performance on the ImageNet validation set using the full image and the aggregated embedding. On all models, the aggregated embedding outperforms full-image evaluation, often by more than 1%. Increasing the number of patches averaged in the aggregation process also increases the performance. We do not go beyond 48 patches because of memory and run-time constraints, but we hypothesize that a further increase in the number of patches would improve performance further, as demonstrated on CIFAR-10, where 256 patches significantly outperform 16 patches.
4.4 CONVERGENCE OF PATCH-BASED EMBEDDING TO WHOLE-INSTANCE EMBEDDING
In this experiment, we show that for a multi-scale pretrained SSL model, linearly aggregating the patch embeddings converges to the instance embedding. We take a multi-scale pretrained VICReg baseline model and 512 randomly selected images from ImageNet. For each image, we first compute the embedding of the 224 × 224 center crop. We then randomly aggregate N embeddings of different 100 × 100 image patches and calculate the cosine similarity between the patch-aggregated embedding and the center-crop embedding. Figure 3(a) shows that the aggregated representation converges to the instance embedding as N increases from 1 to 16 to all image patches, where "all" means extracting overlapping patches with stride 4 and aggregating roughly 1000 patch embeddings.
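A sketch of this measurement, assuming torchvision's functional transforms for cropping and resizing (the helper name and the manual random crop are ours):

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def aggregation_convergence(image, f, n_patches, patch_size=100, crop_size=224):
    """Cosine similarity between an n-patch aggregated embedding and the
    center-crop embedding of the same image (sketch of this measurement)."""
    h_center = f(TF.center_crop(image, crop_size).unsqueeze(0))[0]
    _, h, w = image.shape
    crops = []
    for _ in range(n_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patch = image[:, top:top + patch_size, left:left + patch_size]
        crops.append(TF.resize(patch, [crop_size, crop_size]))
    h_agg = f(torch.stack(crops)).mean(dim=0)
    return F.cosine_similarity(h_center, h_agg, dim=0)
```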
4.5 CONCATENATION AGGREGATION FURTHER IMPROVES SSL PERFORMANCE
An alternative way to aggregate embeddings is to concatenate them into a single larger vector. To test how this performs, we downloaded checkpoints of SOTA SSL models pretrained on CIFAR-10 from solo-learn (da Costa et al., 2022) and tested linear and kNN accuracy with concatenation aggregation. As shown in Table 5, concatenation aggregation further improves the performance of these SOTA SSL models. Even with only 25 patches, the k-nearest-neighbor (kNN) accuracy of the aggregated embedding outperforms the baseline linear evaluation accuracy by a large margin.
5 PATCH EMBEDDING VISUALIZATION: INVARIANCE OR EQUIVARIANCE?
The instance-augmentation-invariant SSL methods are primarily motivated from an invariance perspective. In this section, we provide CIFAR-10 nearest-neighbor and ImageNet cosine-similarity heatmap visualizations to further understand the learned representation. In the CIFAR-10 experiment, we take a model pretrained with 14 × 14 image patches on CIFAR-10 and calculate the projection and embedding vectors of all image patches from the training set. Then, for a given 14 × 14 image patch (e.g., the ones circled by red dashed boxes in Figure 4), we visualize its k nearest neighbors in terms of cosine similarity in both the projection and the embedding space. Figure 4 shows the results for two different image patches. The patches circled by green boxes are image patches from another instance of the same category, whereas the uncircled patches are from the same instance.
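The retrieval itself is a plain cosine-similarity top-k search; a sketch follows (names are ours; k = 119 matches the neighbor count used in Appendix E):

```python
import torch
import torch.nn.functional as F

def knn_patches(query, bank, k=119):
    """Top-k nearest patches by cosine similarity (sketch of this retrieval).

    query: (d,) vector of one patch; bank: (M, d) matrix of embedding or
    projection vectors of all training-set patches.
    """
    sims = F.normalize(bank, dim=1) @ F.normalize(query, dim=0)
    return torch.topk(sims, k)      # similarity values and patch indices
```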
In the ImageNet experiment, we take a multi-scale pretrained VICReg model; for a given image patch (e.g., circled by red dashed boxes in Figure 5), we visualize the cosine similarity between the embedding of this patch and those of the other patches from the same instance. In this experiment, we use two different image patch scales, 71 × 71 and 100 × 100. The heatmap visualization is normalized to the same scale.
Overall, we observe that the projection vectors are significantly more invariant than the embedding vectors. This is apparent from both Figure 4 and Figure 5. For the CIFAR kNN patches, nearest neighbors in the embedding space are visually much more similar than those in the projection space. In fact, in the embedding space, the nearest neighbors are mostly locally shifted patches carrying similar "part" information. In the projection space, however, many nearest neighbors are patches of different "part" information from the same class. For example, in Figure 4, a nearest neighbor of a "wheel" in the projection space may be a "door" or a "window," whereas the nearest neighbors in the embedding space all contain "wheel" information. In the second example, the nearest neighbors of a "horse legs" patch in the projection space show different horse body parts, whereas the nearest neighbors in the embedding space are all "horse legs."
The heatmap visualization on ImageNet illustrates the same phenomenon for a multi-scale pretrained VICReg model. The projection vector of a patch has high similarity to that of the query patch whenever the patch has enough information to infer the class of the image. For embedding vectors, the similarity region is much more localized to the query patch, or to other patches with similar features (the other leg of the dog in Figure 5). This general observation is consistent with the visualizations in Bordes et al. (2021). Slightly abusing terminology, we call this property of the embedding vectors equivariance, in contrast to the invariance possessed by the projection vectors. More thorough visualizations are provided in Appendix E.
6 DISCUSSION
In this paper, we seek to provide an understanding of the success of instance-augmentation-invariant SSL methods. We demonstrate that learning an embedding for fixed-size image patches (I$^2$ VICReg) and linearly aggregating them within an instance can achieve performance on par with or better than multi-scale pretraining. Conversely, for a multi-scale pretrained model, we show that the whole-image embedding is essentially an average of patch embeddings. Conceptually, we establish a close connection between I$^2$ VICReg and modeling the co-occurrence statistics of patches.
Through visualizing nearest neighbors and cosine-similarity heatmaps, we find that the projection vector is relatively invariant while the embedding vector is instead equivariant, which may explain the embedding's higher discriminative performance. This suggests that the SSL objective, which learns co-occurrence statistics, encourages an invariant solution, while the more favorable property of equivariance arises from the implicit bias introduced by the projector. In the future, it would be interesting to explore whether equivariance can be encouraged directly in the objective function in a more principled manner, instead of relying on the projector head. Prior work in NLP may provide useful guidance here: in Pennington et al. (2014), the word embedding is learned by fitting the log co-occurrence matrix, which avoids domination by large elements and allows the embedding to carry richer information. Similarly, an SSL objective that implicitly fits the log co-occurrence matrix may learn a more equivariant embedding, which is an interesting direction for future work.
Many open questions remain in the quest to understand image SSL. For example, it is still unclear why the projector $g$ makes the embedding $h$ more equivariant than the projection $z$. We hypothesize that the role of the projector can be understood as learning a feature representation for a kernel function on the embedding space: for any $h_1, h_2$, the dot product of $g(h_1)$ and $g(h_2)$ always represents a positive semi-definite kernel on the original space, $k(h_1, h_2) = g(h_1)^T g(h_2)$. It is possible that this flexible kernel function on the embedding alleviates the excess-invariance problem caused by imposing the objective on the projection vectors, allowing the embedding to be more equivariant and perform better. We leave further analysis of this hypothesis to future work.
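A small sketch (entirely our own illustration) of this induced-kernel view, checking that the Gram matrix of projector outputs is positive semi-definite by construction:

```python
import torch

# Any projector g induces a PSD kernel on the embedding space via
# k(h1, h2) = g(h1)^T g(h2); the architecture below is arbitrary.
g = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, 32))
H = torch.randn(16, 64)                 # a batch of embeddings h
G = g(H)
K = G @ G.T                             # Gram matrix k(h_i, h_j)
print(torch.linalg.eigvalsh(K).min())   # >= 0 up to numerical error
```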
A PROOF OF PROPOSITION 3.1
Proposition A.1. Equation 2 can be rewritten in the following contrastive form:

$$\mathbb{E}_{p(x_1,x_2)}\left[-z_1^T z_2\right] + \lambda\,\mathbb{E}_{p(x_1)p(x_2)}\left(z_1^T z_2\right)^2 \qquad (4)$$

where $\lambda = \frac{w}{2}$.
Proof. Since we are dealing with an objective, we may drop constants that do not depend on the embeddings $z_1$ and $z_2$ wherever they occur.
$$L = \int p(x_1)\,p(x_2)\left[w\,z_1^T z_2 - \frac{p(x_1,x_2)}{p(x_1)\,p(x_2)}\right]^2 dx_1\,dx_2 \qquad (5)$$

$$= \int p(x_1)\,p(x_2)\left[(w\,z_1^T z_2)^2 - 2w\,z_1^T z_2 \cdot \frac{p(x_1,x_2)}{p(x_1)\,p(x_2)}\right] dx_1\,dx_2 + \text{const} \qquad (6)$$

$$= \int p(x_1)\,p(x_2)\,(w\,z_1^T z_2)^2\, dx_1\,dx_2 - 2w \int p(x_1,x_2)\,(z_1^T z_2)\, dx_1\,dx_2 + \text{const} \qquad (7)$$

$$\propto \; \mathbb{E}_{p(x_1,x_2)}\left[-z_1^T z_2\right] + \lambda\,\mathbb{E}_{p(x_1)p(x_2)}\left(z_1^T z_2\right)^2 \qquad (8)$$

where $\lambda = \frac{w}{2}$; the last step divides by the positive constant $2w$ and drops the remaining constant, neither of which changes the minimizer.
B THE DUALITY BETWEEN SPECTRAL CONTRASTIVE REGULARIZATION AND COVARIANCE REGULARIZATION.
For Objective 3 and Objective 1, since the similarity term is the same, we can focus our discussion on the regularization term, particularly under an SGD optimizer. For simplicity, we assume that the projection $z$ is L2-normalized and that each embedding dimension has zero mean and normalized variance. Given a mini-batch of size $N$ with projection matrix $Z \in \mathbb{R}^{N \times d}$, the spectral regularization term $\mathbb{E}_{p(x_1)p(x_2)}\left(z_1^T z_2\right)^2$ reduces to $\left\|ZZ^T - I_N\right\|_F^2$. By Lemma 3.2 from Le et al. (2011), we have:

$$\left\|ZZ^T - I_N\right\|_F^2 = \left\|Z^T Z - I_d\right\|_F^2 + C_1 = \left\|Z^T Z - \frac{N}{d}\,I_d\right\|_F^2 + C \qquad (9)$$

where $C_1$ and $C$ are constants. The second equality follows because each embedding dimension is normalized. $\left\|Z^T Z - \frac{N}{d}\,I_d\right\|_F^2$ is the mini-batch version of the covariance regularization constraint $\mathbb{E}_{p(x)}\left[zz^T\right] = \frac{1}{d_{emb}}\,I$.
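This duality is easy to check numerically. The following sketch (our own; it only assumes L2-normalized rows) shows that the two regularizers differ by the constant $N^2/d - N$ and hence induce identical gradients:

```python
import torch
import torch.nn.functional as F

# With L2-normalized rows, the spectral and covariance regularizers
# differ only by a constant, independent of the particular Z drawn.
N, d = 512, 64
for _ in range(2):
    Z = F.normalize(torch.randn(N, d), dim=1)
    spectral = (Z @ Z.T - torch.eye(N)).pow(2).sum()
    covariance = (Z.T @ Z - (N / d) * torch.eye(d)).pow(2).sum()
    print((spectral - covariance).item())   # always N**2/d - N = 3584
```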
A thorough discussion is beyond the scope of this work; we refer curious readers to Garrido et al. (2022) for a more general treatment of the duality between contrastive and non-contrastive learning.
C IMPLEMENTATION DETAILS
C.1 CIFAR-10 AND CIFAR-100
For all experiments, we pretrain a ResNet-34 for 600 epochs. We use a batch size of 1024, the LARS optimizer, and a weight decay of 1e−04. The learning rate is set to 0.3 and follows a cosine decay schedule, with 10 epochs of warmup and a final value of 0. In the TCR loss, λ is set to 30.0 and ϵ to 0.2. The projector network consists of 2 linear layers with 4096 hidden units and 128 output units for the CIFAR-10 experiments (512 output units for the CIFAR-100 experiments). All layers are separated by a ReLU and a BatchNorm layer. The data augmentations are identical to those of BYOL.
C.2 IMAGENET-100 AND IMAGENET
For all experiments, we pretrain a ResNet-50 with the TCR loss for 400 epochs on ImageNet-100 and 100 epochs on ImageNet. We use a batch size of 1024, the LARS optimizer, and a weight decay of 1e−04. The learning rate is set to 0.1 and follows a cosine decay schedule, with 10 epochs of warmup and a final value of 0. In the TCR loss, λ is set to 1920.0 and ϵ to 0.2. The projector network consists of 3 linear layers with 8192 units each, separated by a ReLU and a BatchNorm layer. The data augmentations are identical to those of BYOL.
C.3 IMPLEMENTATION DETAILS FOR SECTION 4.5
For all experiments, we downloaded checkpoints of SOTA SSL models pretrained on CIFAR-10 from solo-learn. Each method is pretrained for 1000 epochs with the hyperparameters described in solo-learn. The backbone in all these checkpoints is ResNet-18, which outputs a 512 × 5 × 5 tensor for each image. We apply spatial average pooling (stride = 2, window size = 3) to this tensor and flatten the result to obtain a feature vector of dimension 2048.
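A sketch of this pooling step (shapes follow the numbers above; default zero padding is assumed, as the original settings are not fully specified):

```python
import torch
import torch.nn.functional as F

feat = torch.randn(1, 512, 5, 5)    # ResNet-18 feature map before pooling
pooled = F.avg_pool2d(feat, kernel_size=3, stride=2)  # -> (1, 512, 2, 2)
vector = pooled.flatten(start_dim=1)                  # -> (1, 2048) feature
```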
D IMAGENET INTRA-INSTANCE VISUALIZATION
In this section, we provide further visualizations of the multi-scale pretrained VICReg model; the results are shown in Figure 6. Here we use image patches of scale 0.1 to compute the cosine-similarity heatmaps; the query patch is marked by the red dashed boxes. The embedding space contains more localized information, whereas the projection space is relatively more invariant, especially when the patch has enough information to determine the category.
E CIFAR10 KNN VISUALIZATION
This section continues the visualization of the model pretrained with 14 × 14 patches. We primarily use kNN with cosine similarity to find the closest neighbors of the query patches, marked by red dashed boxes. Again, green boxes indicate patches from other instances of the same category; red boxes indicate patches from other instances of a different category. Patches without a colored box are from the same instance. In the following, we discuss several interesting aspects of the problem.
Additional Projection and Embedding Spaces Comparison. As shown in Figure 7, the embedding space exhibits a much lesser degree of collapse of semantic information. The projection space tends to collapse different "parts" of a class to similar vectors, whereas the embedding space preserves more information about the details of a patch. This is manifested by the higher visual similarity between neighboring patches.
Embedding Space with 256 kNN. In the previous CIFAR visualization, we only showed kNN with 119 neighbors. In Figures 8 and 9, we provide kNN with 255 neighbors; the same conclusions hold.
Different "Parts" in the Embedding Space. In Figure 10, we provide more typical "part" patches and show their embedding neighbors. While many parts are shared by different instances, we also find some less ideal cases, e.g., Figure 10(4a)(2d), where the closest neighbors are nearly all from the same instance.
As we discussed earlier, the objective essentially models the co-occurrence statistics of patches. If a patch is not "shared" by different instances, it is relatively uninformative. While the exact same patch might not be shared, the color augmentation and the deep image prior embedded in the network design may create approximate sharing. In Figures 11 and 12, we provide two examples of the compositional structure of instances.

1. What is the main contribution of the paper regarding Siamese-network-based self-supervised learning approaches?
2. What are the strengths and weaknesses of the paper's analysis and findings?
3. Do you have any concerns or questions regarding the paper's experiments and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The work aims to show that the key to the success of SOTA Siamese-network-based self-supervised learning approaches is learning a distributed representation of image patches. The authors demonstrate that the performance of an aggregated patch representation built from fixed-scale image patches can be comparable to that of using the whole image. In addition, they show the connection between the Siamese-network-based SSL objective and modeling the co-occurrence statistics of image patches. The authors further examine the properties of image patches in the embedding and projection spaces through visualization and find that the projection has more invariance, while the embedding space tends to preserve more equivariance and locality.
Strengths And Weaknesses
Strengths:
The paper shows the connection of the covariance-regularization-based SSL method to co-occurrence statistics modeling, supporting the argument that the success of instance-based SSL methods largely comes from learning a representation of image patches based on their co-occurrence statistics in images.

The work provides empirical evaluations showing that different Siamese-network-based approaches exhibit similar trends: the performance of the aggregated patch representation learned from fixed-scale patches is comparable to the whole-image representation learned from multi-scale patches.
Weaknesses:

The paper only shows the connection between patch co-occurrence statistics and covariance-regularization-based methods. For other Siamese-network-based methods, the paper mainly shows empirical results for three methods beyond VICReg (e.g., SimCLR, TCR, BYOL). It is not clear why the finding can be extended to all Siamese-network-based methods. What about MoCo, MoCoV2, and other new methods?

Why doesn't Table 4 use the same protocol as Tables 1, 2, and 3 to show the performance of patch-based training vs. standard training? Why not also compare SwAV in Table 1? There are some inconsistencies in the experimental results.
Clarity, Quality, Novelty And Reproducibility
Since the goal of the paper is to explore the key to the success of instance-based self-supervised learning, it has some novelty even though it does not invent any new self-supervised learning method. The paper is well written, and it provides various experimental results and implementation details in Appendix C. It should be possible to reproduce the results.
ICLR
Title
Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding Explains the Performance
Abstract
Recently, self-supervised learning (SSL) has achieved tremendous empirical advances in learning image representations. However, our understanding of the principles behind learning such representations is still limited. This work shows that the success of the SOTA Siamese-network-based SSL approaches is primarily based on learning a distributed representation of image patches. In particular, we show that when we learn a representation only for fixed-scale image patches and aggregate the different patch representations of an image (instance), the result is on par with or even better than baseline methods that use the whole image. Further, we show that patch-representation aggregation can also improve various SOTA baseline methods by a large margin. We also establish a formal connection between the Siamese-network-based SSL objective and modeling the co-occurrence statistics of image patches, which supplements the prevailing invariance perspective. By visualizing the nearest neighbors of different image patches in the embedding space and the projection space, we show that while the projection has more invariance, the embedding space tends to preserve more equivariance and locality. The evidence suggests that simplifying the SOTA methods to build better understanding is a promising direction.
1 INTRODUCTION
In many application domains, self-supervised representation learning has experienced tremendous advances in the past few years. In terms of the quality of the learned features, unsupervised learning has caught up with supervised learning, or even surpassed it, in many cases. This trend promises unparalleled scalability for data-driven machine learning. One of the most successful paradigms in image self-supervised representation learning is instance-augmentation-invariant contrastive learning (Wu et al., 2018; Chen et al., 2020a;b) using a Siamese network architecture (Bromley et al., 1993). This style of learning achieves two general goals: 1) it brings the representations of two different views (augmentations) of the same instance (image) closer; 2) it keeps the representation informative of the input, in other words, it avoids collapse. Several recent non-contrastive methods achieve competitive performance by explicitly pursuing these two goals (Bardes et al., 2021; Li et al., 2022). While we celebrate the empirical success of SSL on a wide range of benchmarks, our understanding of the principles of this learning process is still very limited. In this work, we seek the principle behind instance-based SSL methods and argue that their success largely comes from learning a representation of image patches based on their co-occurrence statistics in images. To demonstrate this, we simplify the current SSL method to use a single crop scale, learning a representation of image patches of fixed size, and we establish a formal connection between our formulation and co-occurrence statistics modeling. The patch representations can be linearly aggregated (bag-of-words) to form the representation of the image. The learned representation achieves similar or better performance than the baseline representation, which is based on the entire image. In particular, even a kNN classifier works surprisingly well with the aggregated patch features. These findings also resonate with recent works in supervised learning based on patch features (Brendel & Bethge, 2018; Dosovitskiy et al., 2020; Trockman & Kolter, 2022). We also show that for baseline SSL methods pretrained with multi-scale crops, the whole-image representation is essentially an aggregation of different patch representations from the same instance.
Further, given various SOTA baseline SSL models, we show that the same aggregation process can further improve representation quality. We then provide a cosine-similarity-based visualization of image patch representations on the ImageNet and CIFAR-10 datasets. In particular, we find that while the projection space achieves significant invariance, the embedding space, which is frequently used for representation evaluation, tends to preserve more locality and equivariance.
Our discoveries may provide useful explanations for the success of the instance-augmentation-invariant SSL methods. The co-occurrence statistics modeling formulation and the equivariance-preserving property of the embedding space both supplement the current prevailing invariance perspective. Finally, these results motivate an interesting discussion of several potential future directions.
1. What is the main contribution of the paper regarding Siamese-based Self-Supervised Learning methods?
2. What are the strengths and weaknesses of the proposed patch-based training and evaluation?
3. Do you have any concerns regarding the theoretical description and its connection to the proposal?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor errors or typos in the paper that need to be addressed?

Summary Of The Paper
The paper explores the use of patches to evaluate Siamese-based self-supervised learning methods. It uses patch-based training instead of the standard augmentation schemes and proposes a new patch-based evaluation.
The paper evaluates existing methods with the proposed patch-based techniques on ImageNet and CIFAR, and shows visualizations of the embedding spaces.
Strengths And Weaknesses
Strengths:
The idea of using patches is easy to understand and to include in other methods.
The paper builds on top of existing literature and tries to tie the proposal with them.
Weaknesses:
The theoretical description in Section 3 doesn't add to (nor is it clearly linked with) the patch-based proposal. It is not clear from the description what the intention of introducing these definitions was. At some points, the paper reads like it was rushed or is incomplete.
The description of the patch-based training is not clearly stated; I fail to understand how it differs from common augmentation techniques.
In particular, in Section 4, it is stated that the patch-based training happens with fixed-sized small patches, and Figure 1 seems to support this idea. However, in Fig. 2, it is mentioned that a random resize and crop is used.
Given that this is the main contribution of the paper, it must be completely clear what the training protocol is and how it was implemented.
The inclusion of a new downstream evaluation based on patches is interesting. However, the claim that it improves the performance (p.6) must be reviewed.
The evaluation metrics do not determine the expressiveness of the representation space that was learned. Instead, they serve as a proxy to understand it. Thus, while using a patch-based approach may reveal interesting characteristics, it doesn't mean that it provides better performance.
For instance, by swapping a linear classifier with a non-linear one, one could obtain better performance on the downstream tasks. However, this increment does not mean that the representation space is better.
Perhaps I misunderstood the point of the introduction of the patch-based evaluation. If that is the case, then again this highlights that the paper needs a major revision in its exposition of ideas.
The patch-based evaluation is not enough to thoroughly evaluate the patch-based training. While the "central" evaluation includes a comparison point, it would be more interesting to see other ensemble-type assessments w.r.t. the existing methods. In other words, since using the patches resembles an ensemble by smoothing the representations (over scale and position), other types of methods that extract similarly invariant or robust features should be used to fairly compare against existing methods.
The paper is poorly written. The description of the definitions (Section 3) are disconnected from the experimental proposal. Moreover, the descriptions do not explain nor tie directly where the ideas are coming from nor how they can be used.
In its current form, the descriptions complicate the paper unnecessarily. Reviewing them and clearly linking them with prior work and with the patch-based proposal is needed to fully understand the ideas that are proposed.
As minor errors or typos, do not start sentences with symbols; review the use of parenthetical and textual citations; and proofread the document (e.g., the 4th sentence of Section 2 seems incomplete).
Clarity, Quality, Novelty And Reproducibility
Clarity. The paper has a major clarity problem. As it stands it is difficult to understand, and parts of the proposal do not support each other. The paper reads disconnected, and lacks support among its parts.
Novelty. The paper introduces interesting ideas in Section 3. However, they are not clearly linked to prior works, and leaves open where they are coming from. By following the references, the reader cannot figure out if the definitions and propositions are new or are based on the previous work. I encourage the authors to err on the side of verbosity and help the readers to learn and follow the work, instead of obscure the references and assume that they will know and have read all the papers in the literature.
Reproducibility. The descriptions of the patch-based approaches are not clearly explained, and the experiments show different setups. It is difficult to understand, and it will require much work from the reader to piece together the authors' setup.
ICLR | Title
Equivariance-aware Architectural Optimization of Neural Networks
Abstract
Incorporating equivariance to symmetry groups as a constraint during neural network training can improve performance and generalization for tasks exhibiting those symmetries, but such symmetries are often not perfectly nor explicitly present. This motivates algorithmically optimizing the architectural constraints imposed by equivariance. We propose the equivariance relaxation morphism, which preserves functionality while reparametrizing a group equivariant layer to operate with equivariance constraints on a subgroup, as well as the [G]-mixed equivariant layer, which mixes layers constrained to different groups to enable within-layer equivariance optimization. We further present evolutionary and differentiable neural architecture search (NAS) algorithms that utilize these mechanisms respectively for equivariance-aware architectural optimization. Experiments across a variety of datasets show the benefit of dynamically constrained equivariance to find effective architectures with approximate equivariance.
1 INTRODUCTION
Constraining neural networks to be equivariant to symmetry groups present in the data can improve their task performance, efficiency, and generalization capabilities (Bronstein et al., 2021), as shown by translation-equivariant convolutional neural networks (Fukushima & Miyake, 1982; LeCun et al., 1989) for image-based tasks (LeCun et al., 1998). Seminal works have developed general theories and architectures for equivariance in neural networks, providing a blueprint for equivariant operations on complex structured data (Cohen & Welling, 2016; Ravanbakhsh et al., 2017; Kondor & Trivedi, 2018; Weiler et al., 2021). However, these works design model constraints based on an explicit equivariance property. Furthermore, their architectural assumption of full equivariance in every layer may be overly constraining; e.g., in handwritten digit recognition, full equivariance to 180◦ rotation may lead to misclassifying samples of “6” and “9”. Weiler & Cesa (2019) found that local equivariance from a final subgroup convolutional layer improves performance over full equivariance. If appropriate equivariance constraints are instead learned, the benefits of equivariance could extend to applications where the data may have unknown or imperfect symmetries.
Learning approximate equivariance has been recently approached via novel layer operations (Wang et al., 2022; Finzi et al., 2021; Zhou et al., 2020; Yeh et al., 2022; Basu et al., 2021). Separately, the field of neural architecture search (NAS) aims to optimize full neural network architectures (Zoph & Le, 2017; Real et al., 2017; Elsken et al., 2017; Liu et al., 2018; Lu et al., 2019). Existing NAS methods have not yet explicitly optimized equivariance, although partial or soft equivariant approaches like Romero & Lohit (2022) and van der Ouderaa et al. (2022) approach custom equivariant architectures. An important aspect of NAS is network morphisms: function-preserving architectural changes (Wei et al., 2016) which can be used during training to change the loss landscape and gradient descent trajectory while immediately maintaining the current functionality and loss value (Maile et al., 2022). Developing tools for searching over a space of architectural representations of equivariance would permit NAS algorithms to be applied towards architectural optimization of equivariance.
Contributions First, we present two mechanisms towards equivariance-aware architectural optimization. The equivariance relaxation morphism for group convolutional layers partially expands the representation and parameters of the layer to enable less constrained learning with a prior on symmetry. The [G]-mixed equivariant layer parametrizes a layer as a weighted sum of layers equivariant to different groups, permitting the learning of architectural weighting parameters.
Second, we implement these concepts within two algorithms for architectural optimization of partially-equivariant networks. Evolutionary Equivariance-Aware NAS (EquiNASE) utilizes the equivariance relaxation morphism in a greedy evolutionary algorithm, dynamically relaxing constraints throughout the training process. Differentiable Equivariance-Aware NAS (EquiNASD) implements [G]-mixed equivariant layers throughout a network to learn the appropriate approximate equivariance of each layer, in addition to their optimized weights, during training.
Finally, we analyze the proposed mechanisms via their respective NAS approaches in multiple image classification tasks, investigating how the dynamically learned approximate equivariance affects training and performance over baseline models and other approaches.
2 RELATED WORKS
Approximate equivariance Although no other works on approximate equivariance explicitly study architectural optimization, some approaches are architectural in nature. We compare our contributions with the most conceptually similar works to our knowledge.
The main contributions of Basu et al. (2021) and Agrawal & Ostrowski (2022) are similar to our proposed equivariant relaxation morphism. Basu et al. (2021) also utilizes subgroup decomposition but instead algorithmically builds up equivariances from smaller groups, while our work focuses on relaxing existing constraints. Agrawal & Ostrowski (2022) presents theoretical contributions towards network morphisms for group-invariant shallow neural networks: in comparison, our work focuses on deep group convolutional architectures and implements the morphism in a NAS algorithm.
The main contributions of Wang et al. (2022) and Finzi et al. (2021) are similar to our proposed [G]mixed equivariant layer. Wang et al. (2022) also uses a weighted sum of filters, but uses the same group for each filter and defines the weights over the domain of group elements. Finzi et al. (2021) uses an equivariant layer in parallel to a linear layer with weighted regularization, thus only using two layers in parallel and weighting them by regularization rather than parametrization. Mouli & Ribeiro (2021) also progressively relaxes equivariance constraints, but with regularized rather than parametrized constraints.
In more diverse approaches, Zhou et al. (2020) and Yeh et al. (2022) represent symmetry-inducing weight sharing via learnable matrices. Romero & Lohit (2022) and van der Ouderaa et al. (2022) learn partial or soft equivariances for each layer.
Neural architecture search Neural architecture search (NAS) aims to optimize both the architecture and its parameters for a given task. Liu et al. (2018) approaches this difficult bi-level optimization by creating a large super-network containing all possible elements and continuously relaxing the discrete architectural parameters to enable search by gradient descent. Other NAS approaches include evolutionary algorithms (Real et al., 2017; Lu et al., 2019; Elsken et al., 2017) and reinforcement learning (Zoph & Le, 2017), which search over discretely represented architectures.
3 BACKGROUND
We assume familiarity with group theory (see Appendix A.1). For a discrete group G, the l-th G-equivariant group convolutional layer (Cohen & Welling, 2016) of a group convolutional neural network (G-CNN) convolves¹ the feature map $f : G \to \mathbb{R}^{C_{l-1}}$ from the previous layer with a filter with kernel size k represented as learnable parameters $\psi : G \to \mathbb{R}^{C_l \times C_{l-1}}$. For each output channel $d \in [C_l]$, where $[C] := \{1, \dots, C\}$, and group element $g \in G$, the layer's output is defined as:

$$[f \star_G \psi]_d(g) = \sum_{h \in G} \sum_{c=1}^{C_{l-1}} f_c(h)\, \psi_{d,c}(g^{-1}h). \tag{1}$$

¹We identify the correlation and convolution operators, as they only differ in where the inverse group element is placed, and refer to both as "convolution" throughout this work.

[Figure 1: (A) The equivariance relaxation morphism, e.g. relaxing C4 equivariance to C2 equivariance. (B) The [G]-mixed equivariant layer.]
The first layer is a special case: the input to the network needs to be lifted via this operation such that the output feature map of this layer has a domain of G. In the case of image data, an image x with C channels may be interpreted as a function $x : \mathbb{Z}^2 \to \mathbb{R}^C$ mapping each pixel in coordinate space to a real number for each channel, where the c-th channel of x is referred to as $x_c$. The input is $x : \mathbb{Z}^2 \to \mathbb{R}^{C_0}$, so the layer is instead a lifting convolution:

$$[x \star_G \psi]_d(g) = \sum_{y \in \mathbb{Z}^2} \sum_{c=1}^{C_0} x_c(y)\, \psi_{d,c}(g^{-1}y). \tag{2}$$
We present our contributions in the group convolutional layer case, although similar claims apply for the lifting convolutional layer case.
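To make the lifting convolution concrete, a C4 lifting layer can be implemented by convolving the input with each rotated copy of the filter. The sketch below is a minimal illustration of Equation 2 for intuition, not the optimized implementation used in our experiments (see Appendix B).

```python
import torch
import torch.nn.functional as F

def c4_lifting_conv(x, psi):
    """x: (B, C0, H, W) input images; psi: (Cl, C0, k, k) learnable filter.
    Returns a feature map over C4 with shape (B, Cl, 4, H', W'): one spatial
    response per rotation group element, following Eq. 2."""
    outs = []
    for r in range(4):  # group elements: rotations by r * 90 degrees
        # Evaluating psi at g^{-1}y amounts to rotating the kernel by g.
        psi_r = torch.rot90(psi, r, dims=(-2, -1))
        outs.append(F.conv2d(x, psi_r))
    return torch.stack(outs, dim=2)
```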
4 TOWARDS ARCHITECTURAL OPTIMIZATION OVER SUBGROUPS
We propose two mechanisms to enable search over subgroups: the equivariance relaxation morphism and a [G]-mixed equivariant layer. The proposed morphism, depicted in Figure 1(A) and described in Section 4.1, changes the equivariance constraint from one group to another subgroup while preserving the learned weights of the initial group convolutional operator. The [G]-mixed equivariant layer, shown in Figure 1(B) and presented in Section 4.2, allows for a single layer to represent equivariance to multiple subgroups through a weighted sum.
4.1 EQUIVARIANCE RELAXATION MORPHISM
The equivariance relaxation morphism reparametrizes a G-equivariant group (or lifting) convolutional layer to operate over a subgroup of G, partially removing weight-sharing constraints from the parameter space while maintaining the functionality of the layer.
Let $G' \le G$ be a subgroup of G such that $G' \backslash G$ is finite. Let R be a system of representatives of the left quotient (including the identity element), so that $G' \backslash G = \{G'r \mid r \in R\}$, where $G'r := \{g'r \mid g' \in G'\}$. Given a G-equivariant group convolutional layer with feature map f and filter ψ, we define the relaxed feature map $\tilde{f} : G' \to \mathbb{R}^{C_{l-1} \times |R|}$ and relaxed filter $\tilde{\psi} : G' \to \mathbb{R}^{(C_l \times |R|) \times (C_{l-1} \times |R|)}$ as follows. For $c \in [C_{l-1}]$, $s, t \in R$, $d \in [C_l]$:

$$\tilde{f}_{(c,s)}(g') := f_c(g's), \tag{3}$$

$$\tilde{\psi}_{(d,t),(c,s)}(g') := \psi_{d,c}(t^{-1}g's). \tag{4}$$
We define the equivariance relaxation morphism from G to G' as the reparametrization of ψ as ψ̃ (Eq. 4) and reshaping of f as f̃ (Eq. 3). We will show that the new layer, $[\tilde{f} \star_{G'} \tilde{\psi}]_{(d,t)}(g')$, is equivalent to $[f \star_G \psi]_d(g't)$ down to reshaping. Since the mapping $G' \times R \to G$, $(g', t) \mapsto g't$, is bijective, every g can uniquely be written as $g = g't$ with $g' \in G'$ and $t \in R$. For $g \in G$, $G'g \in G' \backslash G$ has a unique representative $t \in R$ with $G'g = G't$, and $g' := gt^{-1} \in G'$. Similarly, $h \in G$ may be written as $h = h's$ with unique $h' \in G'$ and $s \in R$. With these preliminaries, we get:

$$[f \star_G \psi]_d(g't) = [f \star_G \psi]_d(g) \tag{5}$$
$$= \sum_{h \in G} \sum_{c=1}^{C_{l-1}} f_c(h)\, \psi_{d,c}(g^{-1}h) \tag{6}$$
$$= \sum_{h' \in G'} \sum_{s \in R} \sum_{c=1}^{C_{l-1}} f_c(h's)\, \psi_{d,c}(t^{-1}g'^{-1}h's) \tag{7}$$
$$= \sum_{h' \in G'} \sum_{c=1}^{C_{l-1}} \sum_{s \in R} \tilde{f}_{(c,s)}(h')\, \tilde{\psi}_{(d,t),(c,s)}(g'^{-1}h') \tag{8}$$
$$= \big[\tilde{f} \star_{G'} \tilde{\psi}\big]_{(d,t)}(g'), \tag{9}$$
which shows the claim. Thus, the convolution of f̃ with ψ̃ is equivariant to G but parametrized as a G′-equivariant group convolutional layer, where the representatives are expanded into independent channels. This morphism can be viewed as initializing a G′-equivariant layer with a pre-trained prior of equivariance to G, maintaining any previous training.
Standard convolutional layers are a special case of group-equivariant layers, where the group is translational symmetry over pixel space. Regular group convolutions are often implemented by relaxation to the translational symmetry group by expanding the filter via the appropriate group actions, allowing a standard convolution implementation from a deep learning library to be used. The equivariance relaxation morphism generalizes this concept to any subgroup. This, as well as how the equivariance relaxation morphism is implemented, is discussed further in Appendix B.
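As a sketch of how the morphism can be realized in code, consider relaxing a layer with C4-equivariant filters (stored with one plane per group element) to a plain translation-equivariant layer: each representative becomes an independent channel, initialized via Equation 4 so the layer computes exactly the same function before further training. The indexing conventions below are illustrative and omit the channel reordering discussed in Appendix B.

```python
import torch

def relax_c4_to_c1(psi):
    """psi: (Cl, Cin, 4, k, k) filter of a C4-equivariant group conv layer,
    with axis 2 indexing rotations r^s. Returns a (Cl*4, Cin*4, k, k)
    standard-conv filter computing the same function, with the C4
    weight sharing removed so all entries can now train independently."""
    Cl, Cin, S, k, _ = psi.shape
    out = torch.empty(Cl, S, Cin, S, k, k)
    for t in range(S):          # representative t indexes output copies
        for s in range(S):      # representative s indexes input copies
            # psi~_{(d,t),(c,s)} = psi_{d,c}(t^{-1} g' s): take the filter
            # plane for rotation r^{s-t} and rotate it spatially by t * 90 deg.
            out[:, t, :, s] = torch.rot90(psi[:, :, (s - t) % S], t, dims=(-2, -1))
    return out.reshape(Cl * S, Cin * S, k, k)
```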
4.2 [G]-MIXED EQUIVARIANT LAYER
Towards learning equivariance, we additionally propose partial equivariance via a mixture of layers, each constrained to equivariance to a different group, applied in parallel to the same input and then combined via a weighted sum. The equivariance relaxation morphism provides a mapping of group elements between a group and a subgroup. For a set of groups [G], such as a subgroup lattice of some group G, we define a [G]-mixed equivariant layer as:

$$\big[f \,\hat{\star}_{[G]}\, [\psi]\big]_{(d,t)}(g) = \sum_{G \in [G]} z_G \big[f \star_{G'} \tilde{\psi}_G\big]_{(d,t)}(g) \tag{10}$$

$$= \Big[f \star_{G'} \sum_{G \in [G]} z_G \tilde{\psi}_G\Big]_{(d,t)}(g), \tag{11}$$
where each element $z_G$ of $[z] := \{z_G \mid G \in [G]\}$ is an architectural weighting parameter such that $\sum_{G \in [G]} z_G = 1$, G' is a subgroup of all groups in [G], each element $\psi_G$ of [ψ] is a filter with a domain of G, and $\tilde{\psi}_G$ is the transformation of $\psi_G$ from a domain of G to G' as defined in Equation 4. Thus, the layer is parametrized by [ψ] and [z], computing a weighted sum of operations that are equivariant to different groups of [G]. The layer may be equivalently computed by convolution of the input with the weighted sum of transformed filters, shown in Equation 11. We provide further implementation details in Appendix B.
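A minimal sketch of Equation 11: each group's filter is expanded to the common subgroup G' (here a standard convolution), the expanded filters are summed with the architecture weights, and a single convolution is performed. The expansion helpers and the softmax parametrization of [z] are illustrative assumptions, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def mixed_equivariant_conv(x, filters, z_logits, expand_fns):
    """x: (B, C', H, W) feature map over the common subgroup G'.
    filters: list of per-group filters psi_G; expand_fns: matching list of
    functions mapping each psi_G to its G'-parametrized form (Eq. 4).
    z_logits: raw architecture parameters; softmax enforces sum_G z_G = 1."""
    z = torch.softmax(z_logits, dim=0)
    mixed = sum(zG * expand(psi)                 # weighted filter sum (Eq. 11)
                for zG, psi, expand in zip(z, filters, expand_fns))
    return F.conv2d(x, mixed, padding=mixed.shape[-1] // 2)
```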
5 EQUIVARIANCE-AWARE NEURAL ARCHITECTURE ALGORITHMS
We present two NAS methods that utilize the presented mechanisms for discovering appropriate equivariance during training: Evolutionary Equivariance-Aware NAS (EquiNASE) and Differentiable Equivariance-Aware NAS (EquiNASD).
Algorithm 1 Evolutionary equivariance-aware neural architecture search.
procedure EQUINASE(initial symmetry group G)
    Initialize population with a G-equivariant group convolutional network.
    for each generation do
        for each network in population do
            Add children of network with relaxed equivariance constraints into population.
        for each network in population do
            Partially train network on dataset.
        Select Pareto-efficient and high-accuracy networks as new population.
    return population
Both methods optimize an architecture while learning weights, yielding a final trained network adapted to equivariances present in the training data. However, they differ in NAS paradigm and approximate equivariance representation: EquiNASE, in Section 5.1, searches for networks composed of layers each fully equivariant to possibly different groups, while EquiNASD, in Section 5.2, searches for smooth mixtures of equivariant layers.
5.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
Towards finding the optimal full equivariance per layer, the equivariance relaxation morphism presented in Section 4.1 is applied as the genetic operator in an evolutionary hill-climbing algorithm. The Evolutionary Equivariance-Aware NAS (EquiNASE) algorithm, given in Algorithm 1, is similar to other evolutionary NAS methods such as Elsken et al. (2017) with pareto selection as in Falanti et al. (2022). A population of networks, which starts with an individual with all layers equivariant to the largest possible group, undergoes mutation via equivariance relaxation and selection based on accuracy and parameter count to optimize neural architecture while learning network parameters. See Appendix A.2 for further background on evolutionary NAS.
In each generation, candidate networks are evaluated based on maximizing validation accuracy and minimizing parameter count: the pareto-dominant individuals with highest accuracy are kept, then additional high-accuracy individuals are added if necessary until the desired parent population size is reached. Offspring are generated from each parent separately by mutation using the relaxation morphism. This preserves the weights of the parametrized equivariance during mutation, allowing for the continuous training of networks over evolution by inheritance from parent individuals. Specifically, mutation reduces a single layer’s parametrized equivariance to a subgroup within the constraint that each layer has parametrized equivariance to a subgroup of all preceding layers. This constraint yields local equivariance properties for the network, as shown in Weiler & Cesa (2019) and Elsayed et al. (2020) to be empirically favorable in image classification tasks. The resulting individuals are each trained independently for a given training time, and then this process repeats.
The second objective of minimizing parameter count is intended to advance efficient networks, such as those with large symmetry groups. Accuracy-based selection alone would necessarily prefer larger networks: until further training, mutation via the equivariance relaxation morphism yields two networks with identical performance but different sizes, the relaxed network having more parameters, and potentially short-term increases in validation accuracy after training would then favor the individuals with more parameters. Thus, the proposed strategy of selecting both Pareto-dominant and high-accuracy individuals is intended to maintain a diverse yet efficient population without succumbing to overly greedy selections too early.
5.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
In a contrasting paradigm, the [G]-mixed equivariant layer presented in Section 4.2 allows for smoothly searching across a spectrum of equivariance for each layer via a differentiable NAS algorithm. Our Differentiable Equivariance-Aware NAS (EquiNASD) algorithm, defined in Algorithm 2, is inspired by DARTS (Liu et al., 2018) with significant changes detailed in the following paragraphs. EquiNASD simplifies the bilevel optimization of the architecture weighting parameters Z and filter weights Ψ into alternating independent updates, computing the gradient update for Z with the current, rather than optimal, Ψ for the current architecture encoded by Z, to boost search efficiency with minimal performance loss compared to higher order approximations (Liu et al., 2018).
Algorithm 2 Differentiable equivariance-aware neural architecture search.
procedure EQUINASD(set of groups [G])
    Initialize network with [G]-mixed equivariant layers, parametrized by Ψ and Z.
    while not converged do
        Update Z by ∇_Z L(Ψ, Z).
        Update Ψ by ∇_Ψ L(Ψ, Z).
    return trained network
In most differentiable NAS search spaces, the desired output architecture is discretized to select a subset of architectural options within constraints, then the weights are re-initialized and trained within the static architecture. In our formulation, this is not necessary as any mixed operation can be equivalently expressed as a single layer equivariant to any group G′ that is a common subgroup to all groups of the mixed operation (Eq. 11): in our experimental case, this is a standard translation-equivariant convolutional layer, so the final model can be equivalently expressed as a standard convolutional model with encoded partial equivariance. Thus, the final optimized architecture and trained weights are output from the single search process. We explore the standard NAS paradigm, where weights are reinitialized and trained in the final static architecture, in Appendix D.
In order to enforce that the scaling of each filter does not confound the architecture weighting parameters, we use the weight normalizing reparametrization (Salimans & Kingma, 2016) and do not update the scalar norm parameter of each filter after initialization.
We do not use disjoint datasets for updating Ψ and Z, but rather draw one batch for Ψ and another for Z independently and randomly from the same training split. This allows for a standard dataset split and to use the validation set for hyperparameter tuning.
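Concretely, the alternating updates with independent batches can be sketched as follows; the optimizer settings follow Appendix C, while the model and data-loader interfaces are placeholders rather than our exact code.

```python
import torch

# Psi: filter weights (excluding the frozen weight-norm scale parameters);
# Z: architecture weighting parameters.
opt_psi = torch.optim.Adam(filter_params, lr=0.01)
opt_z = torch.optim.Adam(arch_params, lr=0.01)

for epoch in range(num_epochs):
    for (xz, yz), (xp, yp) in zip(loader_z, loader_psi):  # two independent batches
        # Architecture step: update Z with the current (not re-optimized) Psi.
        opt_z.zero_grad()
        loss_fn(model(xz), yz).backward()
        opt_z.step()
        # Weight step: update Psi under the updated architecture.
        opt_psi.zero_grad()
        loss_fn(model(xp), yp).backward()
        opt_psi.step()
```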
These two NAS approaches present adaptations of two standard types of NAS, evolutionary and differentiable, to the search for optimal partial equivariance. We next study empirically the two EquiNAS methods on three datasets, one with known rotational symmetry and two with unknown but visually significant rotational and reflectional symmetry.
6 EXPERIMENTS
We focus on the regular representation of groups and show experiments with reflectional and up to 4-fold rotational symmetry groups applied to image classification tasks. Examples of symmetry groups acting on pixel space, which corresponds to Z², include T(2), which consists of discrete translations in both dimensions; the cyclic groups Cn, which consist of n-fold rotations; and the dihedral groups Dn, which consist of reflections with n-fold rotations, where n ∈ {1, 2, 4} for exact symmetry without interpolation. The p4 group consists of discrete translations and multiples of 90° rotations and may be represented as T(2) ⋊ C4. The p4m group consists of discrete translations, reflections, and multiples of 90° rotations and may be represented as T(2) ⋊ D4. As standard convolutional layers are already equivariant to T(2), we refer to layers also equivariant to n-fold rotations with or without reflections as Dn- or Cn-equivariant, respectively. So, a C1-equivariant convolutional layer is a standard translation-equivariant convolutional layer. We use {C1, D1, C2, D2, C4, D4} as the set of potential groups for mutation in EquiNASE and as [G] in EquiNASD.
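For intuition, the actions of these point groups on pixel space reduce to rotations and flips of the image tensor; for example, the eight elements of D4 can be enumerated as below (a small illustrative helper, not part of the training code).

```python
import torch

def d4_orbit(x):
    """x: (..., H, W). Returns the 8 images obtained by acting with D4:
    four 90-degree rotations, each optionally composed with a flip."""
    orbit = []
    for flip in (False, True):
        xi = torch.flip(x, dims=(-1,)) if flip else x
        for r in range(4):
            orbit.append(torch.rot90(xi, r, dims=(-2, -1)))
    return orbit
```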
We present experiments on image classification for a variety of datasets. The Rotated MNIST dataset (Larochelle et al., 2007, RotMNIST) is a version of the MNIST handwritten digit dataset but with the images rotated by any angle. This task serves as a simple investigational study with known symmetry, while the following two tasks are more realistic and complex. The Galaxy10 DECals dataset (Leung & Bovy, 2019, Galaxy10) contains galaxy images in 10 broad categories. The ISIC 2019 dataset (Codella et al., 2018; Tschandl et al., 2018; Combalia et al., 2019, ISIC) contains dermascopic images of 8 types of skin cancer plus a null class. For Galaxy10 and ISIC, we downsample the images to 64 × 64 due to computational constraints, which adds notable difficulty to the tasks. These tasks exhibit varying levels of rotational and reflectional symmetry, motivating architectural optimization to determine the most effective application of equivariance constraints.
Across all experiments, the architectures are designed to have consistent channel dimensions once expanded to a standard translation-equivariant convolutional layer for each layer across models. Thus, constrained equivariance to a larger symmetry group results in fewer learnable parameters. A layer constrained to C4 equivariance has $|C_4 \backslash D_4| = 2$ times as many independent channels and as many parameters as a layer constrained to D4 equivariance. This is a notably different paradigm than other works that equate parameter counts across architectures with different equivariance properties.
As baseline comparisons, we train and test G-CNNs with static architectures. In addition to the static baselines, we re-implement the residual pathway priors (RPP) approach by Finzi et al. (2021) as a C1 equivariant layer with regularization in parallel with a D4 equivariant convolutional layer.
Further experiment details such as architecture details and other hyperparameters are in Appendix C. For each paradigm of experiments, we present results in the following subsections, with general discussion in Section 7. Additional ablation and random search baselines are in Appendix D.
6.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 1. The advantages of equivariance search methods are most apparent on the Galaxy10 benchmark. While EquiNASE outperforms most baselines on RotMNIST and all baselines on ISIC, it performs similarly to the D4 baseline on both tasks, and some of its final architectures match that baseline. However, the D4 baseline fails at the Galaxy10 task, demonstrating that the same equivariant architecture cannot always be naively applied. Both search methods, EquiNASE and RPP, outperform all baseline models on Galaxy10, by a large margin for EquiNASE.
The evolutionary progress on RotMNIST is shown in Figure 2: the selected population maintains a fully equivariant network in every generation. The final selected population originates from two main lineages, one staying fully equivariant until the last generations and the other diverging from
the fully equivariant network midway through, showing that training with dynamically constrained parametrizations can produce performant models.
In addition to the normally initialized static baselines, we also train and test baselines that are initialized with priors to larger symmetry groups. These are implemented by initializing all layers to be constrained to the prior symmetry group, then using the equivariance relaxation morphism on each layer. EquiNASE searches for relaxation schedules that yield trained priors on equivariance, while these additional baselines yield untrained priors. The results in Table 1 show that the C1-equivariant networks generally improve with either equivariance prior, while the C4-equivariant networks perform better with D4 equivariance initialization only when the D4-constrained baselines also work well. The untrained prior methods do not perform as well as EquiNASE on RotMNIST, showing the benefit of investing some training time in the constrained equivariance. For the other tasks, the baselines with priors perform better than their constrained baseline counterparts.
6.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 2. EquiNASD achieves better test accuracy than the other comparable methods on RotMNIST and Galaxy10. Due to differences in training protocol, only comparisons of relative rankings with Table 1 are possible: the baseline methods' accuracies follow similar ranking patterns, suggesting the benefit of general C4 equivariance for RotMNIST and Galaxy10 and general D4 equivariance, including RPP, for ISIC. In this training protocol, notably with adaptive optimizers, the results are more consistent across methods and trials.
The dynamics of architecture weighting parameters for one exemplary trial per task are shown in Figure 3. The general trend of less constrained layers toward the output supports the conjecture of local equivariance being beneficial. However, this effect is less consistent for ISIC, the only task where EquiNASD did not exceed baselines, possibly indicating less inherent symmetry. As seen in
Appendix D, the final mixing of architectures for ISIC included a high level of C1, indicating that feature analysis outside of these symmetry groups is important for this benchmark.
Previous differentiable NAS works often used regularization of network size or even architecture weighting parameters themselves to encourage efficient architectures with a single highly weighted choice for each layer. However, our algorithm shows strong preference for a single, more equivariant and thus more expressive layer, notably to D4 or C4 equivariance, without such regularization. This may be due to the bilevel optimization dynamics: more constrained layers may be able to make more effective updates and thus become favorable compared to the lagging larger layers.
7 DISCUSSION
To our knowledge, this is the first work which proposes search methods for networks with dynamically constrained equivariance. Many NAS approaches separately search for an architecture and then reinitialize and retrain the weights, while our two proposed approaches find an optimal architecture with trained weights in a single process, notably with dynamically constrained weights. Gradientbased tuning (Maclaurin et al., 2015) has shown the benefit not only of optimizing hyperparameters but also of dynamically adjusting them during training (Lichtarge et al., 2022). Dynamically constrained weights can reap the benefits of specialization and generalization over the course of training.
Our two equivariance-aware NAS approaches have distinct approaches: EquiNASE searches for architectures composed of discretely equivariant layers, while EquiNASD searches for continuous mixtures of equivariance within each layer. The EquiNASD algorithm avoids many known problems in differentiable NAS such as the discretization gap that occurs when searching over a continuous relaxation of a discrete architectural search space (Xie et al., 2021), such as that of EquiNASE . Towards searching for discretely equivariant layers using the [G]-mixed equivariant layer, proximal NAS algorithms use techniques such as projection (Yao et al., 2020) and straight-through estimation (Li et al., 2022) to avoid the discretization gap and thus may be effective for this application.
EquiNASE is innately greedy: at each selection step, the population is evaluated by known current performance rather than unknown final performance, biased to architectures that train quickly. Networks with more equivariance constraints tend to learn faster, but equivariance relaxation may yield large gradients for newly unconstrained parameters and thus fast increases in performance. Further work could utilize metrics for final performance, such as proxies (White et al., 2022).
The theoretical and algorithmic contributions of this work are applicable beyond the image classification experiments presented to architectures with parametrized equivariance to any discrete group. We leave the extension to other group representations and domains as future work, such as the continuous case via careful analysis of the regular representation, still given G′ \G is finite. Our proposed equivariance-aware NAS problems can be practically applied to find effective models or architectures for datasets with hypothesizable symmetry. EquiNASE may particularly work well on tasks that benefit from local equivariance, determined by analyzing the architecture weighting parameters from first applying the more efficient EquiNASD, as well as for finding good discrete architectures within which to retrain weights, based on the ablation and random comparisons. We thus recommend EquiNASD for practical applications if the final model is not restricted to discrete equivariance, in which case it can be used to inform design decisions for applying EquiNASE .
Beyond NAS, the equivariance relaxation morphism could be used in other applications such as fine-tuning and distillation. Layers of a pre-trained equivariant network could be expanded via equivariance relaxation before fine-tuning on the same or a new task. Similarly, a network could be distilled to a wider architecture for additional performance benefits.
Conclusion We present two mechanisms towards equivariance-aware architectural optimization, the equivariance relaxation morphism and the [G]-mixed equivariant layer, and two NAS algorithms that respectively implement these mechanisms evolutionarily as EquiNASE and differentiably as EquiNASD. We investigate how dynamic equivariance achieved by these algorithms affects the training and performance of models across multiple image classification tasks of varying complexity and assumed symmetry, demonstrating that these techniques can search for performant architectures and weights even on noisy tasks. The proposed mechanisms and algorithms are extendable beyond vision tasks to any architecture with parametrized equivariance to any discrete group.
A ADDITIONAL BACKGROUND
A.1 SYMMETRIES IN NEURAL NETWORKS
A symmetry of an object is a mapping of the object onto itself such that structure is preserved. A symmetry group G is a set of such mappings along with a binary operation $\cdot : G \times G \to G$, known as the group product, that satisfies axioms for closure, associativity, the identity, and the inverse (Herstein, 2006). A group G acts on a set X via the group action $. : G \times X \to X$, $(g, x) \mapsto g.x$, that satisfies axioms for identity and compatibility: X is called a G-space. Equivariance is the property of a mapping such that transformation of the input results in equivalent transformation of the output. Formally, a mapping $h : X \to Z$ between two G-spaces is G-equivariant if for all $g \in G$ and $x \in X$ we have: $h(g.x) = g.h(x)$. For example, an image segmentation neural network should be T(2)-equivariant: shifting the input should result in the same shift in the output.
Invariance is a special case of equivariance, where the output of the function is completely independent of transformation of the input. Formally, a mapping h : X → Z is G-invariant if for all g ∈ G and x ∈ X we have: h(g.x) = h(x). For example, an image classification network should be T (2)-invariant: shifting the input should not change the output. Symmetries leave objects invariant.
For two groups G and H with group products $\cdot_G$ and $\cdot_H$ respectively, where H acts on G with group action $.$, the (outer) semi-direct product $G \rtimes H$ of H acting on G is a group composed of the set of elements $G \times H$ with group product $(g, h) \cdot (g', h') = (g \cdot_G (h.g'), h \cdot_H h')$ and inverse $(g, h)^{-1} = (h^{-1}.g^{-1}, h^{-1})$.
A subgroup H of G is a nonempty subset with the same group product that also fulfills the group axioms. Then, gH = {g · h|h ∈ H} and Hg = {h · g|h ∈ H} denote the left coset and right coset, respectively, of H with representative g.
A.2 NEURAL ARCHITECTURE SEARCH
Evolutionary algorithms are optimization methods inspired by evolution in biology, where individuals in a population compete with their phenotypic traits in order to pass on their genotypic traits to offspring. The population is the current collection of individuals. Each individual is an instance of the object to be optimized and has a genotype that is decoded into a phenotype. In this case, each individual is a neural network, with a genotype that encodes the parametrized equivariance group of each convolutional layer, represented as a vector of integers. The individual continues training on the task before competing against other individuals to be selected as a parent to mutate to generate the next population. Each parent itself is kept for the next population, as well as each valid child that is generated via the equivariance relaxation morphism, such that they are functionally equivalent to their parent at initialization (although with a different architecture) and thus have the same fitness before training. Each individual in this population is partially trained, such that these children diverge from their siblings and parent, so that the next set of parents may be selected and this process repeats.
Pareto dominance can be used in multi-objective optimization to select the next parent population. For a population of individuals each scored on n objectives, an individual is Pareto-optimal if no other individual has a strictly better score on some objective without being strictly worse on another. The Pareto front is the set formed by all Pareto-optimal individuals.
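For reference, the Pareto-front computation used in selection is simple to state in code; this sketch assumes each individual is scored by (validation accuracy, parameter count), maximizing the former and minimizing the latter.

```python
def pareto_front(scores):
    """scores: list of (accuracy, n_params) tuples, one per individual.
    Returns indices of Pareto-optimal individuals: those not dominated by
    any individual that is at least as accurate and at least as small,
    with at least one of the comparisons strict."""
    front = []
    for i, (acc_i, size_i) in enumerate(scores):
        dominated = any(
            acc_j >= acc_i and size_j <= size_i
            and (acc_j > acc_i or size_j < size_i)
            for j, (acc_j, size_j) in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```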
B IMPLEMENTATION DETAILS
Group convolutional layers The implementation of regular group convolutional layers can be viewed as a special case of our proposed equivariance relaxation morphism. With the preliminaries given in Section 4.1 and the case of $G' = T(2)$, f̃ and ψ̃ are computed such that $\tilde{f}_{(c,s)}(g') := f_c(g's)$ and $\tilde{\psi}_{(d,t),(c,s)}(g') := \psi_{d,c}(t^{-1}g's)$ for each $g' \in T(2)$, $c \in [C_{l-1}]$, $s, t \in R$, and $d \in [C_l]$.

Let $S_G := |R|$. The learnable parameters of the $G_l$-equivariant l-th layer with $C_l$ output channels, corresponding to ψ, are stored as a tensor of size $C_l \times C_{l-1} \times S_{G_l} \times K_l \times K_l$. The filter transformation expands this filter tensor by performing the action of each $r \in R$ on another copy of the tensor to expand its shape along a new dimension, resulting in a tensor of size $C_l \times S_{G_l} \times C_{l-1} \times S_{G_l} \times K_l \times K_l$, which is reshaped to $C_l S_{G_l} \times C_{l-1} S_{G_l} \times K_l \times K_l$. The input tensor to the l-th layer, corresponding to f, has the shape $B \times C_{l-1} \times S_{G_l} \times H_{l-1} \times W_{l-1}$, which is reshaped to $B \times C_{l-1} S_{G_l} \times H_{l-1} \times W_{l-1}$ and convolved with the expanded filter. The output of shape $B \times C_l S_{G_l} \times H_l \times W_l$ is reshaped to $B \times C_l \times S_{G_l} \times H_l \times W_l$.
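In outline, the forward pass described above reads as follows; `expand_filter` performs the per-representative group action (e.g. the C4 expansion sketched in Section 4.1) and is an assumed helper, not a library call.

```python
import torch.nn.functional as F

def group_conv_forward(f, psi, expand_filter, S):
    """f: (B, C_in, S, H, W) feature map; psi: (C_out, C_in, S, k, k) filter.
    expand_filter returns the (C_out*S, C_in*S, k, k) expanded filter."""
    B, C_in, _, H, W = f.shape
    w = expand_filter(psi)                  # apply each r in R to psi
    out = F.conv2d(f.reshape(B, C_in * S, H, W), w, padding=w.shape[-1] // 2)
    C_out = psi.shape[0]
    return out.reshape(B, C_out, S, *out.shape[-2:])
```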
Equivariance relaxation morphism To implement the equivariance relaxation morphism, the new filter tensor is initialized by applying Equation 4 such that the result of applying the preceding filter transformation is equivalent. Our implementation of group actions relies on group channel indexing to represent the order of group elements: to ensure this is consistent before and after the morphism, the appropriate reorderings of the output and input channels of the expanded filter are applied upon expansion. The new filter tensor has a shape of $C_l|R| \times C_{l-1}|R| \times S_{G_l}/|R| \times K_l \times K_l$. The [G]-mixed equivariant layer is built on top of this implementation, also using proper input and output channel reordering between layers to ensure correct mixing of group channels.
C EXPERIMENTAL DETAILS
Architecture backbone For both EquiNASE and EquiNASD experiments, we use the same backbone architecture, such that the static baselines have the same architecture across experiments. The architectures have a lifting layer followed by 7 group convolutional layers, for a total of 8 convolutional layers. After 4 layers, the channel count doubles, from 16 to 32 for a D4-equivariant layer, scaling up accordingly for smaller symmetry group constraints. An average pooling layer is placed after every other layer for all architectures, and additionally after the fifth and seventh convolutional layers for Galaxy10 and ISIC. After the final group convolutional layer is a group-dimension average pooling followed by two linear layers to the output dimension. Every convolutional and linear layer except the output layer is immediately followed by a batchnorm and then a ReLU.
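In outline, the backbone can be summarized by the following layer specification; this is an illustrative sketch, with channel counts given for the D4-constrained case as described above.

```python
# (layer type, output channels for a D4-equivariant layer)
backbone = (
    [("lifting_conv", 16)] + [("group_conv", 16)] * 3   # layers 1-4
    + [("group_conv", 32)] * 4                          # layers 5-8 (doubled)
)
# Average pooling follows every other conv layer (plus two extra pools for
# Galaxy10/ISIC); then group-dimension average pooling and two linear layers.
# Each conv/linear layer except the output is followed by batchnorm + ReLU.
```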
Hyperparameters The hyperparameters for each algorithm are selected such that baselines only differ by training time and optimizers. The learning rates were selected by grid search over baselines on RotMNIST. For all experiments in Sections 6.1, we use a simple SGD optimizer with learning rate 0.1 to avoid confounding effects such as momentum during the morphism. For EquiNASE , the parent selection size is 5, the training time per generation is 0.5 epochs, and the number of generations is 50 for all tasks. Baselines were trained for the equivalent number of epochs. For all experiments in Section 6.2, we use separate Adam optimizers for Ψ and Z, each with a learning rate of 0.01 and otherwise default settings. The total training time is 100 epochs for RotMNIST and 50 epochs for Galaxy10 and ISIC. For RPP, we use a C1-equivariant layer with an L2 regularization parameter of 1e−6 in parallel with a D4-equivariant layer without regularization. For RotMNIST and MNIST, we use the standard training and test splits with a batch size of 64, reserving 10% of the training data as the validation set. For Galaxy10, we set aside 10% of the dataset as the test set, reserving 10% of the remaining training data as the validation set. For ISIC, we set aside 10% of the available training dataset as the test set, reserving 10% of the remaining data as the validation set and the rest as training data. For the latter two datasets, we resize the
images to 64 × 64 due to computational constraints and use a batch size of 32. The validation sets were previously used for hyperparameter tuning: for the experimental results, they are only used in the experiments of Section 6.1, as required by the EquiNASE algorithm. No data augmentation is performed, although the datasets are normalized.
D ADDITIONAL EXPERIMENTS AND FIGURES
To explore the symmetry discovery of EquiNASD, we apply it to six augmentations of the MNIST dataset (LeCun et al., 1998), where each augmentation applies the group actions of each group in {C1, C2, C4, D1, D2, D4} respectively. The resulting architecture dynamics of this experiment are shown in Figure 4, showing that less augmented versions still have some inherent symmetry, while more augmented versions induce stronger architectural changes towards layers that are equivariant to larger groups. Across all augmentations, earlier layers tended towards more constraints to equivariance.
As ablation studies and comparisons, we implement two kinds of random search for each NAS method. The first ablates smart architecture search: EquiNASE Random Select works as described in Algorithm 1 but with random parent selection (instead of Pareto-front selection), and EquiNASD Random Z works as described in Algorithm 2 but with random Z updates (instead of gradient descent) obtained by shuffling Z gradients. The second is more akin to standard NAS random search: for the evolutionary paradigm, we train 30 randomly selected static architectures in the discrete architecture search space for the same training time, selecting the top 5 by validation accuracy, and for the differentiable paradigm, we train 25 randomly selected static architectures in the continuous architecture search space, again selecting the top 5 by validation accuracy. 30 and 25 were respectively calculated to be approximately the same compute cost as the trials of EquiNASE and EquiNASD. These are labeled as "Random Static" for both the evolutionary and differentiable paradigms. Since Random Static trains static architectures while EquiNASE and EquiNASD dynamically search for both architectures and parameters, we take the best 5 architectures for each and retrain their parameters from scratch as in the standard NAS paradigm, labeled as EquiNASE Retrain and EquiNASD Retrain, respectively, for fair comparison to Random Static.
The results of these additional experiments are compared against those of our main algorithms and baselines in Figures 5 and 8.
Shown in Figure 5, EquiNASE outperforms EquiNASE Random Select, showing the benefit of using informed selection to guide the relaxation of equivariance constraints over training. Additionally, EquiNASE Retrain outperforms the Random Static baseline, showing that using compute in an informed search is more beneficial than just randomly searching the space of static architecture constraints.
EquiNASD finds competitive architectures on average and can find architectures which outperform baseline choices like architectures fully equivariant to C1 or D4. From comparing EquiNASD Retrain to EquiNASD results in Figure 8, retraining a resulting architecture is not consistently better or worse than using the final weights from EquiNASD, showing that there isn’t a disadvantage to training weights during search and avoiding the additional cost of a post-search training step.
The use of randomized loss information in Random Static in Figure 8 shows that an informed search for architecture hyperparameters is generally useful. However, the experiments on the ISIC benchmark demonstrate that the architecture search can be deceptive and that random loss information can outperform informed loss. This motivates exploration into the use of noise during the search process for architectural parameters.
A sampling of random continuous architectures in Random Static in Figure 8 shows that random architectures can perform well on problems where fully equivariant architectures like the D4 baseline already perform well. However, on the Galaxy10 problem, the D4 and C4 baseline have high variance, suggesting that a fully equivariant architecture is sub-optimal. On this baseline, EquiNASD greatly outperforms a search of random architectures, demonstrating that EquiNASD can discover the appropriate equivariance for a specific dataset over fixed or randomly selected architectures.
The search space for EquiNASD is already well-formed for random architectures, compared to the discrete search space of EquiNASE . This is enabled by the [G]-mixed equivariant layer, which is a contribution of this work. Random non-mixed equivariant architectures do worse on all three benchmarks compared to random architectures which use the [G]-mixed equivariant layer. This can explain why the EquiNASD results are closer to random baselines than the EquiNASE results, as the search space permits easily finding the appropriate mix of equivariances compared to a discretized search space.
Our methods enable searching for both architecture and parameters concurrently in a single training process. This approach is more efficient than NAS approaches that only search for architectures, requiring a retraining process within the resulting architecture for evaluation. However, these ablation and random search comparisons show that our algorithms may get performance gains from adding a retraining phase with a tradeoff of further compute cost. | 1. What is the focus of the paper regarding leveraging equivariances to subgroups?
2. What are the strengths of the proposed approach, particularly in its formulation and experimental results?
3. What are the weaknesses of the paper, such as the lack of comparison with other approaches and limitations in extending to non-compact groups?
4. How can the paper be improved regarding its clarity and reproducibility? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this work, the authors explore techniques for leveraging equivariances to subgroups rather than the entire group. First, they propose the equivariance relaxation morphism, which partially removes weight-sharing constraints on the entire group (without losing functionality), and the [G]-mixed equivariant layer, which allows learning partial equivariance as a weighted sum. The authors then propose two equivariance-aware NAS algorithms - one evolutionary and the other differentiable - and show good experimental results in both cases.
Strengths And Weaknesses
Strengths:
The formulation of partial/approximate equivariances/invariances as a mixture over subgroups is interesting and an important problem to tackle.
Strong experimental results for both their learning paradigms.
For a person with sufficient group theory background - the paper is easy to understand.
Weaknesses and correspnding questions/ suggestions:
Lack of comparison of the evolutionary paradigm with a similar approach [1], which searches over the subgroup lattice to identify the groups to be equivariant/invariant to.
It is hard to see how this extends to non-finite groups which are not compact. Would suggest that the authors comment on this.
While the paper is understandable for someone who has the necessary group theory background, it is almost impossible for a general audience - would recommend the authors add the necessary background and preliminaries to the appendix.
Studies in out-of-distribution settings are missing (e.g., the training set contains only images rotated by 0-15 degrees, but the test set has those between 60-180 degrees) - this will give us more insight into the weights learned.
Minor:
Please make sure the title on openreview matches the title on the paper
References
Mouli, S. Chandra, and Bruno Ribeiro. "Neural Networks for Learning Counterfactual G-Invariances from Single Environments." ICLR 2021
Clarity, Quality, Novelty And Reproducibility
Quality and Novelty - The paper tackles an important problem and has a reasonable amount of novelty as described in previous section.
Clarity - While the paper is understandable for someone who has the necessary group theory background, it is almost impossible for a general audience - would recommend the authors add the necessary background and preliminaries to the appendix.
Reproducibility - Code is currently not available.
ICLR | Title
Equivariance-aware Architectural Optimization of Neural Networks
Abstract
Incorporating equivariance to symmetry groups as a constraint during neural network training can improve performance and generalization for tasks exhibiting those symmetries, but such symmetries are often not perfectly nor explicitly present. This motivates algorithmically optimizing the architectural constraints imposed by equivariance. We propose the equivariance relaxation morphism, which preserves functionality while reparametrizing a group equivariant layer to operate with equivariance constraints on a subgroup, as well as the [G]-mixed equivariant layer, which mixes layers constrained to different groups to enable within-layer equivariance optimization. We further present evolutionary and differentiable neural architecture search (NAS) algorithms that utilize these mechanisms respectively for equivariance-aware architectural optimization. Experiments across a variety of datasets show the benefit of dynamically constrained equivariance to find effective architectures with approximate equivariance.
1 INTRODUCTION
Constraining neural networks to be equivariant to symmetry groups present in the data can improve their task performance, efficiency, and generalization capabilities (Bronstein et al., 2021), as shown by translation-equivariant convolutional neural networks (Fukushima & Miyake, 1982; LeCun et al., 1989) for image-based tasks (LeCun et al., 1998). Seminal works have developed general theories and architectures for equivariance in neural networks, providing a blueprint for equivariant operations on complex structured data (Cohen & Welling, 2016; Ravanbakhsh et al., 2017; Kondor & Trivedi, 2018; Weiler et al., 2021). However, these works design model constraints based on an explicit equivariance property. Furthermore, their architectural assumption of full equivariance in every layer may be overly constraining; e.g., in handwritten digit recognition, full equivariance to 180◦ rotation may lead to misclassifying samples of “6” and “9”. Weiler & Cesa (2019) found that local equivariance from a final subgroup convolutional layer improves performance over full equivariance. If appropriate equivariance constraints are instead learned, the benefits of equivariance could extend to applications where the data may have unknown or imperfect symmetries.
Learning approximate equivariance has been recently approached via novel layer operations (Wang et al., 2022; Finzi et al., 2021; Zhou et al., 2020; Yeh et al., 2022; Basu et al., 2021). Separately, the field of neural architecture search (NAS) aims to optimize full neural network architectures (Zoph & Le, 2017; Real et al., 2017; Elsken et al., 2017; Liu et al., 2018; Lu et al., 2019). Existing NAS methods have not yet explicitly optimized equivariance, although partial or soft equivariant approaches like Romero & Lohit (2022) and van der Ouderaa et al. (2022) approach custom equivariant architectures. An important aspect of NAS is network morphisms: function-preserving architectural changes (Wei et al., 2016) which can be used during training to change the loss landscape and gradient descent trajectory while immediately maintaining the current functionality and loss value (Maile et al., 2022). Developing tools for searching over a space of architectural representations of equivariance would permit NAS algorithms to be applied towards architectural optimization of equivariance.
Contributions First, we present two mechanisms towards equivariance-aware architectural optimization. The equivariance relaxation morphism for group convolutional layers partially expands the representation and parameters of the layer to enable less constrained learning with a prior on symmetry. The [G]-mixed equivariant layer parametrizes a layer as a weighted sum of layers equivariant to different groups, permitting the learning of architectural weighting parameters.
Second, we implement these concepts within two algorithms for architectural optimization of partially-equivariant networks. Evolutionary Equivariance-Aware NAS (EquiNASE) utilizes the equivariance relaxation morphism in a greedy evolutionary algorithm, dynamically relaxing constraints throughout the training process. Differentiable Equivariance-Aware NAS (EquiNASD) implements [G]-mixed equivariant layers throughout a network to learn the appropriate approximate equivariance of each layer, in addition to their optimized weights, during training.
Finally, we analyze the proposed mechanisms via their respective NAS approaches in multiple image classification tasks, investigating how the dynamically learned approximate equivariance affects training and performance over baseline models and other approaches.
2 RELATED WORKS
Approximate equivariance Although no other works on approximate equivariance explicitly study architectural optimization, some approaches are architectural in nature. We compare our contributions with the most conceptually similar works to our knowledge.
The main contributions of Basu et al. (2021) and Agrawal & Ostrowski (2022) are similar to our proposed equivariant relaxation morphism. Basu et al. (2021) also utilizes subgroup decomposition but instead algorithmically builds up equivariances from smaller groups, while our work focuses on relaxing existing constraints. Agrawal & Ostrowski (2022) presents theoretical contributions towards network morphisms for group-invariant shallow neural networks: in comparison, our work focuses on deep group convolutional architectures and implements the morphism in a NAS algorithm.
The main contributions of Wang et al. (2022) and Finzi et al. (2021) are similar to our proposed [G]mixed equivariant layer. Wang et al. (2022) also uses a weighted sum of filters, but uses the same group for each filter and defines the weights over the domain of group elements. Finzi et al. (2021) uses an equivariant layer in parallel to a linear layer with weighted regularization, thus only using two layers in parallel and weighting them by regularization rather than parametrization. Mouli & Ribeiro (2021) also progressively relaxes equivariance constraints, but with regularized rather than parametrized constraints.
In more diverse approaches, Zhou et al. (2020) and Yeh et al. (2022) represent symmetry-inducing weight sharing via learnable matrices. Romero & Lohit (2022) and van der Ouderaa et al. (2022) learn partial or soft equivariances for each layer.
Neural architecture search Neural architecture search (NAS) aims to optimize both the architecture and its parameters for a given task. Liu et al. (2018) approaches this difficult bi-level optimization by creating a large super-network containing all possible elements and continuously relaxing the discrete architectural parameters to enable search by gradient descent. Other NAS approaches include evolutionary algorithms (Real et al., 2017; Lu et al., 2019; Elsken et al., 2017) and reinforcement learning (Zoph & Le, 2017), which search over discretely represented architectures.
3 BACKGROUND
We assume familiarity with group theory (see Appendix A.1). For discrete group $G$, the $l$th $G$-equivariant group convolutional layer (Cohen & Welling, 2016) of a group convolutional neural network (G-CNN) convolves¹ the feature map $f : G \to \mathbb{R}^{C_{l-1}}$ from the previous layer with a filter of kernel size $k$ represented as learnable parameters $\psi : G \to \mathbb{R}^{C_l \times C_{l-1}}$. For each output channel $d \in [C_l]$, where $[C] := \{1, \dots, C\}$, and group element $g \in G$, the layer's output is defined as:

$$[f \star_G \psi]_d(g) = \sum_{h \in G} \sum_{c=1}^{C_{l-1}} f_c(h)\, \psi_{d,c}(g^{-1}h). \tag{1}$$

¹We identify the correlation and convolution operators, as they only differ in where the inverse group element is placed, and refer to both as "convolution" throughout this work.

[Figure 1: (A) the equivariance relaxation morphism, relaxing C4 equivariance to C2 equivariance; (B) the [G]-mixed equivariant layer.]
The first layer is a special case: the input to the network needs to be lifted via this operation such that the output feature map of this layer has a domain of $G$. In the case of image data, an image $x$ with $C$ channels may be interpreted as a function $x : \mathbb{Z}^2 \to \mathbb{R}^C$ mapping each pixel in coordinate space to a real number for each channel, where the $c$th channel of $x$ is referred to as $x_c$. The input is $x : \mathbb{Z}^2 \to \mathbb{R}^{C_0}$, so the layer is instead a lifting convolution:

$$[x \star_G \psi]_d(g) = \sum_{y \in \mathbb{Z}^2} \sum_{c=1}^{C_0} x_c(y)\, \psi_{d,c}(g^{-1}y). \tag{2}$$
We present our contributions in the group convolutional layer case, although similar claims apply for the lifting convolutional layer case.
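To make this concrete, the following minimal NumPy sketch (illustrative only, not the paper's implementation) computes Eq. 1 for the cyclic group C4 in additive notation, ignoring the spatial domain, and numerically checks the layer's equivariance; the feature map f and filter psi are random toy tensors.

```python
import numpy as np

# Toy group convolution (Eq. 1) over C4 = {0, 1, 2, 3} under addition mod 4,
# ignoring the spatial domain: f has shape (C_in, |G|), psi (C_out, C_in, |G|).
def group_conv_c4(f, psi):
    C_out, C_in, n = psi.shape
    out = np.zeros((C_out, n))
    for d in range(C_out):            # output channel d
        for g in range(n):            # group element g
            for h in range(n):        # sum over h in G
                out[d, g] += f[:, h] @ psi[d, :, (h - g) % n]  # g^{-1}h = h - g
    return out

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 4))       # C_{l-1} = 3 channels over C4
psi = rng.standard_normal((5, 3, 4))  # C_l = 5 output channels
out = group_conv_c4(f, psi)

# Equivariance check: cyclically shifting the input's group axis by r
# shifts the output's group axis by r as well.
r = 1
assert np.allclose(group_conv_c4(np.roll(f, r, axis=1), psi),
                   np.roll(out, r, axis=1))
```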
4 TOWARDS ARCHITECTURAL OPTIMIZATION OVER SUBGROUPS
We propose two mechanisms to enable search over subgroups: the equivariance relaxation morphism and a [G]-mixed equivariant layer. The proposed morphism, depicted in Figure 1(A) and described in Section 4.1, changes the equivariance constraint from one group to another subgroup while preserving the learned weights of the initial group convolutional operator. The [G]-mixed equivariant layer, shown in Figure 1(B) and presented in Section 4.2, allows for a single layer to represent equivariance to multiple subgroups through a weighted sum.
4.1 EQUIVARIANCE RELAXATION MORPHISM
The equivariance relaxation morphism reparametrizes a G-equivariant group (or lifting) convolutional layer to operate over a subgroup of G, partially removing weight-sharing constraints from the parameter space while maintaining the functionality of the layer.
Let $G' \leq G$ be a subgroup of $G$ such that $G' \backslash G$ is finite. Let $R$ be a system of representatives of the left quotient (including the identity element), so that $G' \backslash G = \{G'r \mid r \in R\}$, where $G'r := \{g'r \mid g' \in G'\}$. Given a $G$-equivariant group convolutional layer with feature map $f$ and filter $\psi$, we define the relaxed feature map $\tilde{f} : G' \to \mathbb{R}^{C_{l-1} \times |R|}$ and relaxed filter $\tilde{\psi} : G' \to \mathbb{R}^{(C_l \times |R|) \times (C_{l-1} \times |R|)}$ as follows. For $c \in [C_{l-1}]$, $s, t \in R$, $d \in [C_l]$:
$$\tilde{f}_{(c,s)}(g') := f_c(g's), \tag{3}$$
$$\tilde{\psi}_{(d,t),(c,s)}(g') := \psi_{d,c}(t^{-1}g's). \tag{4}$$
We define the equivariance relaxation morphism from $G$ to $G'$ as the reparametrization of $\psi$ as $\tilde{\psi}$ (Eq. 4) and reshaping of $f$ as $\tilde{f}$ (Eq. 3). We will show that the new layer, $[\tilde{f} \star_{G'} \tilde{\psi}]_{(d,t)}(g')$, is equivalent to $[f \star_G \psi]_d(g't)$ down to reshaping. Since the mapping $G' \times R \to G$, $(g', t) \mapsto g't$, is bijective, every $g$ can uniquely be written as $g = g't$ with $g' \in G'$ and $t \in R$. For $g \in G$, $G'g \in G' \backslash G$ has a unique representative $t \in R$ with $G'g = G't$, and $g' := gt^{-1} \in G'$. Similarly, $h \in G$ may be written as $h = h's$ with unique $h' \in G'$ and $s \in R$. With these preliminaries, we get:
$$[f \star_G \psi]_d(g't) = [f \star_G \psi]_d(g) \tag{5}$$
$$= \sum_{h \in G} \sum_{c=1}^{C_{l-1}} f_c(h)\, \psi_{d,c}(g^{-1}h) \tag{6}$$
$$= \sum_{h' \in G'} \sum_{s \in R} \sum_{c=1}^{C_{l-1}} f_c(h's)\, \psi_{d,c}(t^{-1}g'^{-1}h's) \tag{7}$$
$$= \sum_{h' \in G'} \sum_{c=1}^{C_{l-1}} \sum_{s \in R} \tilde{f}_{(c,s)}(h')\, \tilde{\psi}_{(d,t),(c,s)}(g'^{-1}h') \tag{8}$$
$$= \big[\tilde{f} \star_{G'} \tilde{\psi}\big]_{(d,t)}(g'), \tag{9}$$
which shows the claim. Thus, the convolution of f̃ with ψ̃ is equivariant to G but parametrized as a G′-equivariant group convolutional layer, where the representatives are expanded into independent channels. This morphism can be viewed as initializing a G′-equivariant layer with a pre-trained prior of equivariance to G, maintaining any previous training.
Standard convolutional layers are a special case of group-equivariant layers, where the group is translational symmetry over pixel space. Regular group convolutions are often implemented by relaxation to the translational symmetry group by expanding the filter via the appropriate group actions, allowing a standard convolution implementation from a deep learning library to be used. The equivariance relaxation morphism generalizes this concept to any subgroup. This, as well as how the equivariance relaxation morphism is implemented, is discussed further in Appendix B.
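Continuing the toy C4 sketch above (and reusing its f, psi, and out), the following hypothetical NumPy sketch applies Eqs. 3 and 4 to relax the layer from C4 to its subgroup C2 = {0, 2} with representatives R = {0, 1}, then verifies Eqs. 5-9 numerically.

```python
Gp = [0, 2]   # subgroup C2 inside C4, additive notation
R = [0, 1]    # coset representatives, identity first

def relax(f, psi):
    C_out, C_in, _ = psi.shape
    f_t = np.zeros((C_in * len(R), len(Gp)))
    psi_t = np.zeros((C_out * len(R), C_in * len(R), len(Gp)))
    for c in range(C_in):
        for si, s in enumerate(R):
            for gi, gp in enumerate(Gp):
                f_t[c * len(R) + si, gi] = f[c, (gp + s) % 4]          # Eq. 3
    for d in range(C_out):
        for ti, t in enumerate(R):
            for c in range(C_in):
                for si, s in enumerate(R):
                    for gi, gp in enumerate(Gp):
                        psi_t[d * len(R) + ti, c * len(R) + si, gi] = \
                            psi[d, c, (gp + s - t) % 4]                # Eq. 4
    return f_t, psi_t

def group_conv_c2(f_t, psi_t):
    C_out2, _, n = psi_t.shape
    out2 = np.zeros((C_out2, n))
    for d in range(C_out2):
        for gi, g in enumerate(Gp):
            for hi, h in enumerate(Gp):
                idx = ((h - g) % 4) // 2   # index of g^{-1}h within C2
                out2[d, gi] += f_t[:, hi] @ psi_t[d, :, idx]
    return out2

f_t, psi_t = relax(f, psi)
out_t = group_conv_c2(f_t, psi_t)
# Relaxed layer matches the original up to reshaping (Eqs. 5-9):
for d in range(psi.shape[0]):
    for ti, t in enumerate(R):
        for gi, gp in enumerate(Gp):
            assert np.isclose(out_t[d * len(R) + ti, gi], out[d, (gp + t) % 4])
```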
4.2 [G]-MIXED EQUIVARIANT LAYER
Towards learning equivariance, we additionally propose partial equivariance via a mixture of layers, each constrained to equivariance to different groups, applied in parallel to the same input then combined via a weighted sum. The equivariance relaxation morphism provides a mapping of group elements between a group and a subgroup. For a set of groups $[G]$, such as a subgroup lattice of some group $G$, we define a [G]-mixed equivariant layer as:

$$\big[f\, \hat{\star}_{[G]}\, [\psi]\big]_{(d,t)}(g) = \sum_{G \in [G]} z_G \big[f \star_{G'} \tilde{\psi}_G\big]_{(d,t)}(g) \tag{10}$$
$$= \Big[f \star_{G'} \sum_{G \in [G]} z_G \tilde{\psi}_G\Big]_{(d,t)}(g), \tag{11}$$
where each element $z_G$ of $[z] := \{z_G \mid G \in [G]\}$ is an architectural weighting parameter such that $\sum_{G \in [G]} z_G = 1$, $G'$ is a subgroup of all groups in $[G]$, each element $\psi_G$ of $[\psi]$ is a filter with a domain of $G$, and $\tilde{\psi}_G$ is the transformation of $\psi_G$ from a domain of $G$ to $G'$ as defined in Equation 4. Thus, the layer is parametrized by $[\psi]$ and $[z]$, computing a weighted sum of operations that are equivariant to different groups of $[G]$. The layer may be equivalently computed by convolution of the input with the weighted sum of transformed filters, as shown in Equation 11. We provide further implementation details in Appendix B.
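In the same toy setting (reusing f, psi, relax, and group_conv_c2 from the sketches above), a [G]-mixed layer for [G] = {C4, C2} with common subgroup G' = C2 reduces to Eq. 11 as follows; the weights z are hypothetical values that would be learned in practice.

```python
# [G]-mixed layer for [G] = {C4, C2} with G' = C2 (Eq. 11): convolve once
# with the z-weighted sum of filters transformed to the common subgroup.
z = {'C4': 0.7, 'C2': 0.3}                       # architectural weights, sum to 1
psi_c4 = psi                                     # filter with domain C4
psi_c2 = np.random.default_rng(1).standard_normal((5 * 2, 3 * 2, 2))  # domain C2
f_t, psi_c4_on_c2 = relax(f, psi_c4)             # Eqs. 3-4: map input and filter to C2
mixed = z['C4'] * psi_c4_on_c2 + z['C2'] * psi_c2
out_mixed = group_conv_c2(f_t, mixed)            # a single G'-convolution
```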
5 EQUIVARIANCE-AWARE NEURAL ARCHITECTURE ALGORITHMS
We present two NAS methods that utilize the presented mechanisms for discovering appropriate equivariance during training: Evolutionary Equivariance-Aware NAS (EquiNASE) and Differentiable Equivariance-Aware NAS (EquiNASD).
Algorithm 1 Evolutionary equivariance-aware neural architecture search.
procedure EquiNASE(initial symmetry group G)
    Initialize population with a G-equivariant group convolutional network.
    for each generation do
        for each network in population do
            Add children of network with relaxed equivariance constraints into population.
        for each network in population do
            Partially train network on dataset.
        Select Pareto-efficient and high-accuracy networks as new population.
    return population
Both methods optimize an architecture while learning weights, yielding a final trained network adapted to equivariances present in the training data. However, they differ in NAS paradigm and approximate equivariance representation: EquiNASE, in Section 5.1, searches for networks composed of layers each fully equivariant to possibly different groups, while EquiNASD, in Section 5.2, searches for smooth mixtures of equivariant layers.
5.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
Towards finding the optimal full equivariance per layer, the equivariance relaxation morphism presented in Section 4.1 is applied as the genetic operator in an evolutionary hill-climbing algorithm. The Evolutionary Equivariance-Aware NAS (EquiNASE) algorithm, given in Algorithm 1, is similar to other evolutionary NAS methods such as that of Elsken et al. (2017), with Pareto selection as in Falanti et al. (2022). A population of networks, starting from an individual with all layers equivariant to the largest possible group, undergoes mutation via equivariance relaxation and selection based on accuracy and parameter count, optimizing the neural architecture while learning network parameters. See Appendix A.2 for further background on evolutionary NAS.
In each generation, candidate networks are evaluated based on maximizing validation accuracy and minimizing parameter count: the Pareto-dominant individuals with the highest accuracy are kept, then additional high-accuracy individuals are added if necessary until the desired parent population size is reached. Offspring are generated from each parent separately by mutation using the relaxation morphism, as sketched below. This preserves the weights of the parametrized equivariance during mutation, allowing for the continuous training of networks over evolution by inheritance from parent individuals. Specifically, mutation reduces a single layer's parametrized equivariance to a subgroup, within the constraint that each layer has parametrized equivariance to a subgroup of all preceding layers. This constraint yields local equivariance properties for the network, which Weiler & Cesa (2019) and Elsayed et al. (2020) showed to be empirically favorable in image classification tasks. The resulting individuals are each trained independently for a given training time, and then this process repeats.
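For illustration, a hypothetical encoding of this mutation constraint over the candidate groups used in Section 6 could look as follows (an assumption about the implementation, not the released code; D1 ≤ D2 ≤ D4 holds up to the choice of reflection axis):

```python
# Subgroup relation among {C1, D1, C2, D2, C4, D4}: subgroups[g] lists the
# groups h with h <= g (up to the choice of reflection axis for D1, D2).
subgroups = {
    'D4': ['D4', 'C4', 'D2', 'C2', 'D1', 'C1'],
    'C4': ['C4', 'C2', 'C1'],
    'D2': ['D2', 'C2', 'D1', 'C1'],
    'C2': ['C2', 'C1'],
    'D1': ['D1', 'C1'],
    'C1': ['C1'],
}

def children(genotype):
    """Relax one layer to a proper subgroup, keeping each layer's group a
    subgroup of all preceding layers' groups (non-increasing along depth)."""
    kids = []
    for i, g in enumerate(genotype):
        successor = genotype[i + 1] if i + 1 < len(genotype) else 'C1'
        for h in subgroups[g]:
            if h != g and successor in subgroups[h]:
                kids.append(genotype[:i] + [h] + genotype[i + 1:])
    return kids

print(children(['D4', 'D4', 'C4']))  # all valid one-step relaxations
```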
The second objective of minimizing parameter count is intended to advance efficient networks, such as those with large symmetry groups. Accuracy-based selection alone would necessarily prefer larger networks: mutation via the equivariance relaxation morphism yields two networks with identical performance but different sizes (the relaxed network having more parameters) until training, and potentially short-term increases in validation accuracy after training would then result in the selection of individuals with more parameters. Thus, the proposed strategy of selecting both Pareto-dominant and high-accuracy individuals is intended to maintain a diverse yet efficient population without succumbing to overly greedy selection too early.
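The selection step could be sketched as follows (again an assumption about the implementation), with each candidate scored by validation accuracy and parameter count:

```python
# population: list of dicts with keys 'acc' (validation accuracy),
# 'params' (parameter count), and e.g. 'net' (the network itself).
def dominates(a, b):
    """a Pareto-dominates b under (maximize accuracy, minimize #params)."""
    return (a['acc'] >= b['acc'] and a['params'] <= b['params']
            and (a['acc'] > b['acc'] or a['params'] < b['params']))

def select(population, size):
    pareto = [p for p in population
              if not any(dominates(q, p) for q in population)]
    pareto.sort(key=lambda p: -p['acc'])          # keep highest accuracy first
    rest = sorted((p for p in population if p not in pareto),
                  key=lambda p: -p['acc'])        # pad with high-accuracy rest
    return (pareto + rest)[:size]
```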
5.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
In a contrasting paradigm, the [G]-mixed equivariant layer presented in Section 4.2 allows for smoothly searching across a spectrum of equivariance for each layer via a differentiable NAS algorithm. Our Differentiable Equivariance-Aware NAS (EquiNASD) algorithm, defined in Algorithm 2, is inspired by DARTS (Liu et al., 2018) with significant changes detailed in the following paragraphs. EquiNASD simplifies the bilevel optimization of the architecture weighting parameters Z and filter weights Ψ into alternating independent updates, computing the gradient update for Z with the current, rather than optimal, Ψ for the current architecture encoded by Z, to boost search efficiency with minimal performance loss compared to higher order approximations (Liu et al., 2018).
Algorithm 2 Differentiable equivariance-aware neural architecture search.
procedure EquiNASD(set of groups [G])
    Initialize network with [G]-mixed equivariant layers, parametrized by Ψ and Z.
    while not converged do
        Update Z by ∇_Z L(Ψ, Z).
        Update Ψ by ∇_Ψ L(Ψ, Z).
    return trained network
In most differentiable NAS search spaces, the desired output architecture is discretized to select a subset of architectural options within constraints, then the weights are re-initialized and trained within the static architecture. In our formulation, this is not necessary as any mixed operation can be equivalently expressed as a single layer equivariant to any group G′ that is a common subgroup to all groups of the mixed operation (Eq. 11): in our experimental case, this is a standard translation-equivariant convolutional layer, so the final model can be equivalently expressed as a standard convolutional model with encoded partial equivariance. Thus, the final optimized architecture and trained weights are output from the single search process. We explore the standard NAS paradigm, where weights are reinitialized and trained in the final static architecture, in Appendix D.
In order to enforce that the scaling of each filter does not confound the architecture weighting parameters, we use the weight normalizing reparametrization (Salimans & Kingma, 2016) and do not update the scalar norm parameter of each filter after initialization.
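In PyTorch, one way to realize this (a sketch under our assumptions, not necessarily the paper's code) is:

```python
import torch
from torch.nn.utils import weight_norm

# Weight-normalize a filter and freeze its scalar norm so that filter
# magnitude cannot compensate for (and thus confound) the z_G weights.
conv = weight_norm(torch.nn.Conv2d(16, 32, kernel_size=3, padding=1))
conv.weight_g.requires_grad_(False)  # norm stays at its initialized value
```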
We do not use disjoint datasets for updating Ψ and Z, but rather draw one batch for Ψ and another for Z independently and randomly from the same training split. This allows for a standard dataset split and to use the validation set for hyperparameter tuning.
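Putting these choices together, the alternating updates of Algorithm 2 might look like the following sketch, where model, train_set, and the accessors model.psi_parameters() and model.z_parameters() are hypothetical stand-ins:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

# Two optimizers, two independent batch streams from the same training split.
opt_psi = torch.optim.Adam(model.psi_parameters(), lr=0.01)
opt_z = torch.optim.Adam(model.z_parameters(), lr=0.01)
loader_psi = DataLoader(train_set, batch_size=64, shuffle=True)
loader_z = DataLoader(train_set, batch_size=64, shuffle=True)

for (x_z, y_z), (x_p, y_p) in zip(loader_z, loader_psi):
    opt_z.zero_grad()
    F.cross_entropy(model(x_z), y_z).backward()
    opt_z.step()                                  # update Z with current Psi
    opt_psi.zero_grad()
    F.cross_entropy(model(x_p), y_p).backward()
    opt_psi.step()                                # update Psi with current Z
```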
These two NAS approaches present adaptations of two standard types of NAS, evolutionary and differentiable, to the search for optimal partial equivariance. We next study empirically the two EquiNAS methods on three datasets, one with known rotational symmetry and two with unknown but visually significant rotational and reflectional symmetry.
6 EXPERIMENTS
We focus on the regular representation of groups and show experiments with reflectional and up to 4-fold rotational symmetry groups applied to image classification tasks. Examples of symmetry groups acting on pixel space, which corresponds to $\mathbb{Z}^2$, include T(2), which consists of discrete translations in both dimensions; the cyclic groups Cn, which consist of n-fold rotations; and the dihedral groups Dn, which consist of reflections with n-fold rotations, where n ∈ {1, 2, 4} for exact symmetry without interpolation. The p4 group consists of discrete translations and multiples of 90◦ rotations and may be represented as T(2) ⋊ C4. The p4m group consists of discrete translations, reflections, and multiples of 90◦ rotations and may be represented as T(2) ⋊ D4. As standard convolutional layers are already equivariant to T(2), we refer to layers also equivariant to n-fold rotations with or without reflections as Dn- or Cn-equivariant, respectively. So, a C1-equivariant convolutional layer is a standard translation-equivariant convolutional layer. We use {C1, D1, C2, D2, C4, D4} as the set of potential groups for mutation in EquiNASE and as [G] in EquiNASD.
We present experiments on image classification for a variety of datasets. The Rotated MNIST dataset (Larochelle et al., 2007, RotMNIST) is a version of the MNIST handwritten digit dataset but with the images rotated by any angle. This task serves as a simple investigational study with known symmetry, while the following two tasks are more realistic and complex. The Galaxy10 DECals dataset (Leung & Bovy, 2019, Galaxy10) contains galaxy images in 10 broad categories. The ISIC 2019 dataset (Codella et al., 2018; Tschandl et al., 2018; Combalia et al., 2019, ISIC) contains dermoscopic images of 8 types of skin cancer plus a null class. For Galaxy10 and ISIC, we downsample the images to 64 × 64 due to computational constraints, which adds notable difficulty to the tasks. These tasks exhibit varying levels of rotational and reflectional symmetry, motivating architectural optimization to determine the most effective application of equivariance constraints.
Across all experiments, the architectures are designed to have consistent channel dimensions once expanded to a standard translation-equivariant convolutional layer for each layer across models. Thus, constrained equivariance to a larger symmetry group results in fewer learnable parameters. A layer constrained to C4 equivariance has $|C_4 \backslash D_4| = 2$ times as many independent channels and parameters as a layer constrained to D4 equivariance. This is a notably different paradigm from other works that equate parameter counts across architectures with different equivariance properties.
As baseline comparisons, we train and test G-CNNs with static architectures. In addition to the static baselines, we re-implement the residual pathway priors (RPP) approach by Finzi et al. (2021) as a C1 equivariant layer with regularization in parallel with a D4 equivariant convolutional layer.
Further experiment details such as architecture details and other hyperparameters are in Appendix C. For each paradigm of experiments, we present results in the following subsections, with general discussion in Section 7. Additional ablation and random search baselines are in Appendix D.
6.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 1. The advantages of equivariance search methods are most apparent in the Galaxy10 benchmark. While EquiNASE outperforms most baselines on RotMNIST and all baselines on ISIC, its performance is similar to that of the D4 baseline on both tasks, and some of its final architectures match it. However, the D4 baseline fails at the Galaxy10 task, demonstrating that the same equivariant architecture cannot always be naively applied. Both search methods, EquiNASE and RPP, outperform all baseline models on Galaxy10, and by a large margin for EquiNASE.
The evolutionary progress on RotMNIST is shown in Figure 2: the selected population maintains a fully equivariant network in every generation. The final selected population originates from two main lineages, one staying fully equivariant until the last generations and the other diverging from
the fully equivariant network midway through, showing that training with dynamically constrained parametrizations can produce performant models.
In addition to the normally initialized static baselines, we also train and test baselines that are initialized with priors to larger symmetry groups. These are implemented by initializing all layers to be constrained to the prior symmetry group, then using the equivariance relaxation morphism on each layer. EquiNASE searches for relaxation schedules that yield trained priors on equivariance, while these additional baselines yield untrained priors. The results in Table 1 show that the C1-equivariant networks generally improve with either equivariance prior, while the C4-equivariant networks perform better with D4 equivariance initialization only when the D4-constrained baselines also work well. The untrained prior methods do not perform as well as EquiNASE on RotMNIST, showing the benefit of investing some training time in the constrained equivariance. For the other tasks, the baselines with priors perform better than their constrained baseline counterparts.
6.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 2. EquiNASD achieves better test accuracy than the other comparable methods on RotMNIST and Galaxy10. Due to differences in training protocol, only comparisons of relative rankings with Table 1 are possible: baseline methods' accuracies follow similar ranking patterns, suggesting the benefit of general C4 equivariance for RotMNIST and Galaxy10 and general D4 equivariance, including RPP, for ISIC. In this training protocol, notably with adaptive optimizers, the results are more consistent across methods and trials.
The dynamics of architecture weighting parameters for one exemplary trial per task are shown in Figure 3. The general trend of less constrained layers toward the output supports the conjecture of local equivariance being beneficial. However, this effect is less consistent for ISIC, the only task where EquiNASD did not exceed baselines, possibly indicating less inherent symmetry. As seen in
Appendix D, the final mixing of architectures for ISIC included a high level of C1, indicating that feature analysis outside of these symmetry groups is important for this benchmark.
Previous differentiable NAS works often used regularization of network size or even architecture weighting parameters themselves to encourage efficient architectures with a single highly weighted choice for each layer. However, our algorithm shows strong preference for a single, more equivariant and thus more expressive layer, notably to D4 or C4 equivariance, without such regularization. This may be due to the bilevel optimization dynamics: more constrained layers may be able to make more effective updates and thus become favorable compared to the lagging larger layers.
7 DISCUSSION
To our knowledge, this is the first work that proposes search methods for networks with dynamically constrained equivariance. Many NAS approaches separately search for an architecture and then reinitialize and retrain the weights, while our two proposed approaches find an optimal architecture with trained weights in a single process, notably with dynamically constrained weights. Gradient-based tuning (Maclaurin et al., 2015) has shown the benefit not only of optimizing hyperparameters but also of dynamically adjusting them during training (Lichtarge et al., 2022). Dynamically constrained weights can reap the benefits of specialization and generalization over the course of training.
Our two equivariance-aware NAS approaches differ in strategy: EquiNASE searches for architectures composed of discretely equivariant layers, while EquiNASD searches for continuous mixtures of equivariance within each layer. The EquiNASD algorithm avoids many known problems in differentiable NAS, such as the discretization gap that occurs when searching over a continuous relaxation of a discrete architectural search space (Xie et al., 2021), like that of EquiNASE. Towards searching for discretely equivariant layers using the [G]-mixed equivariant layer, proximal NAS algorithms use techniques such as projection (Yao et al., 2020) and straight-through estimation (Li et al., 2022) to avoid the discretization gap and thus may be effective for this application.
EquiNASE is innately greedy: at each selection step, the population is evaluated by known current performance rather than unknown final performance, which biases selection toward architectures that train quickly. Networks with more equivariance constraints tend to learn faster, but equivariance relaxation may yield large gradients for newly unconstrained parameters and thus fast increases in performance. Further work could utilize metrics for final performance, such as proxies (White et al., 2022).
The theoretical and algorithmic contributions of this work are applicable beyond the image classification experiments presented, to architectures with parametrized equivariance to any discrete group. We leave the extension to other group representations and domains as future work, such as the continuous case via careful analysis of the regular representation, provided $G' \backslash G$ remains finite. Our proposed equivariance-aware NAS problems can be practically applied to find effective models or architectures for datasets with hypothesizable symmetry. EquiNASE may particularly work well on tasks that benefit from local equivariance, determined by analyzing the architecture weighting parameters from first applying the more efficient EquiNASD, as well as for finding good discrete architectures within which to retrain weights, based on the ablation and random comparisons. We thus recommend EquiNASD for practical applications if the final model is not restricted to discrete equivariance, in which case it can be used to inform design decisions for applying EquiNASE.
Beyond NAS, the equivariance relaxation morphism could be used in other applications such as fine-tuning and distillation. Layers of a pre-trained equivariant network could be expanded via equivariance relaxation before fine-tuning on the same or a new task. Similarly, a network could be distilled to a wider architecture for additional performance benefits.
Conclusion We present two mechanisms towards equivariance-aware architectural optimization, the equivariance relaxation morphism and the [G]-mixed equivariant layer, and two NAS algorithms that respectively implement these mechanisms evolutionarily as EquiNASE and differentiably as EquiNASD. We investigate how dynamic equivariance achieved by these algorithms affects the training and performance of models across multiple image classification tasks of varying complexity and assumed symmetry, demonstrating that these techniques can search for performant architectures and weights even on noisy tasks. The proposed mechanisms and algorithms are extendable beyond vision tasks to any architecture with parametrized equivariance to any discrete group.
A ADDITIONAL BACKGROUND
A.1 SYMMETRIES IN NEURAL NETWORKS
A symmetry of an object is a mapping of the object onto itself such that structure is preserved. A symmetry group $G$ is a set of such mappings along with a binary operation $\cdot : G \times G \to G$, known as the group product, that satisfies axioms for closure, associativity, the identity, and the inverse (Herstein, 2006). A group $G$ acts on a set $X$ via the group action $. : G \times X \to X$, $(g, x) \mapsto g.x$, which satisfies axioms for identity and compatibility; $X$ is called a $G$-space. Equivariance is the property of a mapping such that transformation of the input results in equivalent transformation of the output. Formally, a mapping $h : X \to Z$ between two $G$-spaces is $G$-equivariant if for all $g \in G$ and $x \in X$ we have $h(g.x) = g.h(x)$. For example, an image segmentation neural network should be T(2)-equivariant: shifting the input should result in the same shift in the output.
Invariance is a special case of equivariance, where the output of the function is completely independent of transformation of the input. Formally, a mapping h : X → Z is G-invariant if for all g ∈ G and x ∈ X we have: h(g.x) = h(x). For example, an image classification network should be T (2)-invariant: shifting the input should not change the output. Symmetries leave objects invariant.
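As a quick numerical illustration of these two definitions (a toy check, not from the paper): a convolution with circular padding is T(2)-equivariant on the torus, and composing it with global average pooling yields T(2)-invariance.

```python
import torch

conv = torch.nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode='circular')
x = torch.randn(1, 1, 8, 8)
shift = lambda t: torch.roll(t, shifts=(2, 3), dims=(-2, -1))   # a g in T(2)

# Equivariance: h(g.x) == g.h(x)
assert torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5)

# Invariance: global average pooling removes the dependence on the shift
pool = lambda t: t.mean(dim=(-2, -1))
assert torch.allclose(pool(conv(shift(x))), pool(conv(x)), atol=1e-5)
```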
For two groups $G$ and $H$ with group products $\cdot_G$ and $\cdot_H$ respectively, where $H$ acts on $G$ with group action $.$, the (outer) semi-direct product $G \rtimes H$ of $H$ acting on $G$ is the group formed on the set $G \times H$ with group product $(g, h) \cdot (g', h') = (g \cdot_G (h.g'),\ h \cdot_H h')$ and inverse $(g, h)^{-1} = (h^{-1}.g^{-1},\ h^{-1})$.
A subgroup H of G is a nonempty subset with the same group product that also fulfills the group axioms. Then, gH = {g · h|h ∈ H} and Hg = {h · g|h ∈ H} denote the left coset and right coset, respectively, of H with representative g.
A.2 NEURAL ARCHITECTURE SEARCH
Evolutionary algorithms are optimization methods inspired by evolution in biology, where individuals in a population compete with their phenotypic traits in order to pass on their genotypic traits to offspring. The population is the current collection of individuals. Each individual is an instance of the object to be optimized and has a genotype that is decoded into a phenotype. In this case, each individual is a neural network, with a genotype that encodes the parametrized equivariance group of each convolutional layer, represented as a vector of integers. The individual continues training on the task before competing against other individuals to be selected as a parent to mutate to generate the next population. Each parent itself is kept for the next population, as well as each valid child that is generated via the equivariance relaxation morphism, such that they are functionally equivalent to their parent at initialization (although with a different architecture) and thus have the same fitness before training. Each individual in this population is partially trained, such that these children diverge from their siblings and parent, so that the next set of parents may be selected and this process repeats.
Pareto dominance can be used in multi-objective optimization to select the next parent population. For a population of individuals each scored on $n$ objectives, an individual is Pareto-optimal if no other individual scores at least as well on every objective and strictly better on at least one. The Pareto front is the set of all Pareto-optimal individuals.
B IMPLEMENTATION DETAILS
Group convolutional layers The implementation of regular group convolutional layers can be viewed as a special case of our proposed equivariance relaxation morphism. With the preliminaries given in Section 4.1 and the case of $G' = T(2)$, $\tilde{f}$ and $\tilde{\psi}$ are computed such that $\tilde{f}_{(c,s)}(g') := f_c(g's)$ and $\tilde{\psi}_{(d,t),(c,s)}(g') := \psi_{d,c}(t^{-1}g's)$ for each $g' \in T(2)$, $c \in [C_{l-1}]$, $s, t \in R$, and $d \in [C_l]$.
Let $S_G := |R|$. The learnable parameters of the $G_l$-equivariant $l$th layer with $C_l$ output channels, corresponding to $\psi$, are stored as a tensor of size $C_l \times C_{l-1} \times S_{G_l} \times K_l \times K_l$. The filter transformation expands this filter tensor by performing the action of each $r \in R$ on another copy of the tensor to expand its shape along a new dimension, resulting in a tensor of size $C_l \times S_{G_l} \times C_{l-1} \times S_{G_l} \times K_l \times K_l$, which is reshaped to $C_l S_{G_l} \times C_{l-1} S_{G_l} \times K_l \times K_l$. The input tensor to the $l$th layer, corresponding to $f$, has the shape $B \times C_{l-1} \times S_{G_l} \times H_{l-1} \times W_{l-1}$, which is reshaped to $B \times C_{l-1} S_{G_l} \times H_{l-1} \times W_{l-1}$ and convolved with the expanded filter. The output of shape $B \times C_l S_{G_l} \times H_l \times W_l$ is reshaped to $B \times C_l \times S_{G_l} \times H_l \times W_l$.
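For the rotational part, this transformation could be sketched as follows (a toy PyTorch version for a C4 group convolutional layer relaxed to T(2); the orientation conventions of rot90 and the direction of the channel permutation are assumptions):

```python
import torch

# Expand a C4 group-convolution filter of shape (C_out, C_in, S, K, K),
# S = |C4| = 4, into a standard conv filter of shape (C_out*S, C_in*S, K, K).
def expand_c4_filter(psi):
    C_out, C_in, S, K, _ = psi.shape
    copies = []
    for r in range(S):
        rotated = torch.rot90(psi, k=r, dims=(-2, -1))    # spatial action of r
        permuted = torch.roll(rotated, shifts=r, dims=2)  # regular action on
        copies.append(permuted)                           # the group channels
    stacked = torch.stack(copies, dim=1)          # (C_out, S, C_in, S, K, K)
    return stacked.reshape(C_out * S, C_in * S, K, K)

w = expand_c4_filter(torch.randn(8, 4, 4, 3, 3))  # ready for F.conv2d
```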
Equivariance relaxation morphism To implement the equivariance relaxation morphism, the new filter tensor is initialized by applying Equation 4 such that the result of applying the preceding filter transformation is equivalent. Our implementation of group actions relies on group channel indexing to represent the order of group elements: to ensure this is consistent before and after the morphism, the appropriate reordering of the output and input channels of the expanded filter is applied upon expansion. The new filter tensor has a shape of $C_l|R| \times C_{l-1}|R| \times S_{G_l}/|R| \times K_l \times K_l$. The [G]-mixed equivariant layer is built on top of this implementation, also using proper input and output channel reordering between layers to ensure correct mixing of group channels.
C EXPERIMENTAL DETAILS
Architecture backbone For both EquiNASE and EquiNASD experiments, we use the same backbone architecture, such that the static baselines have the same architecture across experiments. The architectures have a lifting layer followed by 7 group convolutional layers, for a total of 8 convolutional layers. After 4 layers, the channel count doubles, from 16 to 32 for a D4 equivariant layer and scaling up for smaller symmetry group equivariance constraints. An average pooling layer is placed after every other layer for all architectures and additionally after the fifth and seventh convolutional layers for Galaxy10 and ISIC. After the final group convolutional layer is a group-dimension average pooling followed by two linear layers to the output dimension. Every convolutional and linear layer except the output layer is immediately followed by a batchnorm then a ReLU.
Hyperparameters The hyperparameters for each algorithm are selected such that baselines only differ by training time and optimizers. The learning rates were selected by grid search over baselines on RotMNIST. For all experiments in Sections 6.1, we use a simple SGD optimizer with learning rate 0.1 to avoid confounding effects such as momentum during the morphism. For EquiNASE , the parent selection size is 5, the training time per generation is 0.5 epochs, and the number of generations is 50 for all tasks. Baselines were trained for the equivalent number of epochs. For all experiments in Section 6.2, we use separate Adam optimizers for Ψ and Z, each with a learning rate of 0.01 and otherwise default settings. The total training time is 100 epochs for RotMNIST and 50 epochs for Galaxy10 and ISIC. For RPP, we use a C1-equivariant layer with an L2 regularization parameter of 1e−6 in parallel with a D4-equivariant layer without regularization. For RotMNIST and MNIST, we use the standard training and test splits with a batch size of 64, reserving 10% of the training data as the validation set. For Galaxy10, we set aside 10% of the dataset as the test set, reserving 10% of the remaining training data as the validation set. For ISIC, we set aside 10% of the available training dataset as the test set, reserving 10% of the remaining data as the validation set and the rest as training data. For the latter two datasets, we resize the
images to 64 × 64 due to computational constraints and use a batch size of 32. The validation sets were previously used for hyperparameter tuning: for experimental results, they are only used for the experiments in Section 6.1 as necessary for the EquiNASE algorithm. No data augmentation is performed, although the datasets are normalized.
D ADDITIONAL EXPERIMENTS AND FIGURES
To explore the symmetry discovery of EquiNASD, we apply it to six augmentations of the MNIST dataset (LeCun et al., 1998), where each augmentation applies the group actions of each group in {C1, C2, C4, D1, D2, D4} respectively. The resulting architecture dynamics of this experiment are shown in Figure 4, showing that less augmented versions still have some inherent symmetry, while more augmented versions induce stronger architectural changes towards layers that are equivariant to larger groups. Across all augmentations, earlier layers tended towards more constraints to equivariance.
As ablation studies and comparisons, we implement two kinds of random search for each NAS method. The first ablates smart architecture search: EquiNASE Random Select works as described in Algorithm 1 but with random parent selection (instead of Pareto-front selection), and EquiNASD Random Z works as described in Algorithm 2 but with random Z updates (instead of gradient descent) by shuffling Z gradients. The second is more akin to standard NAS random search: for the evolutionary paradigm, we train 30 randomly selected static architectures in the discrete architecture search space for the same training time and select the top 5 by validation accuracy, and for the differentiable paradigm, we train 25 randomly selected static architectures in the continuous architecture search space and select the top 5 by validation accuracy. 30 and 25 were respectively calculated to be approximately the same compute cost as the trials of EquiNASE and EquiNASD. These are labeled as "Random Static" for both the evolutionary and differentiable paradigms. Since Random Static trains static architectures while EquiNASE and EquiNASD dynamically search for both architectures and parameters, we take the best 5 architectures for each and retrain their parameters from scratch as in the standard NAS paradigm, labeled as EquiNASE Retrain and EquiNASD Retrain, respectively, for fair comparison to Random Static.
The results of these additional experiments are compared against those of our main algorithms and baselines in Figures 5 and 8.
Shown in Figure 5, EquiNASE outperforms EquiNASE Random Select, showing the benefit of using informed selection to guide the relaxation of equivariance constraints over training. Additionally, EquiNASE Retrain outperforms the Random Static baseline, showing that using compute in an informed search is more beneficial than just randomly searching the space of static architecture constraints.
EquiNASD finds competitive architectures on average and can find architectures which outperform baseline choices like architectures fully equivariant to C1 or D4. From comparing EquiNASD Retrain to EquiNASD results in Figure 8, retraining a resulting architecture is not consistently better or worse than using the final weights from EquiNASD, showing that there isn’t a disadvantage to training weights during search and avoiding the additional cost of a post-search training step.
The use of randomized loss information in Random Static in Figure 8 shows that an informed search for architecture hyperparameters is generally useful. However, experiments on the ISIC benchmark demonstrate that the architecture search can be deceptive and that random loss information can outperform informed loss. This motivates exploration into the use of noise during the search process for architectural parameters.
A sampling of random continuous architectures in Random Static in Figure 8 shows that random architectures can perform well on problems where fully equivariant architectures like the D4 baseline already perform well. However, on the Galaxy10 problem, the D4 and C4 baselines have high variance, suggesting that a fully equivariant architecture is sub-optimal. On this benchmark, EquiNASD greatly outperforms a search of random architectures, demonstrating that EquiNASD can discover the appropriate equivariance for a specific dataset over fixed or randomly selected architectures.
The search space for EquiNASD is already well-formed for random architectures, compared to the discrete search space of EquiNASE . This is enabled by the [G]-mixed equivariant layer, which is a contribution of this work. Random non-mixed equivariant architectures do worse on all three benchmarks compared to random architectures which use the [G]-mixed equivariant layer. This can explain why the EquiNASD results are closer to random baselines than the EquiNASE results, as the search space permits easily finding the appropriate mix of equivariances compared to a discretized search space.
Our methods enable searching for both architecture and parameters concurrently in a single training process. This approach is more efficient than NAS approaches that only search for architectures, which require a retraining process within the resulting architecture for evaluation. However, these ablation and random search comparisons show that our algorithms may gain performance from adding a retraining phase, at the cost of further compute.

1. What is the main contribution of the paper regarding neural architectures and symmetries?
2. What are the strengths and weaknesses of the proposed approaches, particularly in terms of experimental evaluation and comparison with prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some suggestions for improving the experimental depth and analysis of the proposed methods?
5. How does the reviewer think the paper could be improved by following the NAS reproducibility checklist?
Summary Of The Paper
This paper proposes to search for neural architectures whose layers have equivariance that can exploit symmetries in the data. They propose two methods—network morphisms combined with evolutionary search and mixture relaxations combined with differentiable NAS—to search the space of symmetries. The methods are evaluated on some image classification tasks.
Strengths And Weaknesses
Strengths:
The problem of finding the correct equivariance that can exploit symmetries in the data is important for the development of general-purpose NAS methods.
The proposed approaches are simple and natural procedures.
The writing is fairly clear and the experimental results are interesting.
Code will be released upon acceptance.
Weaknesses:
The experimental evaluation is somewhat limited. Given the fairly direct extension of prior work (morphisms and weight-sharing) to actually developing the search algorithms, I would expect more emphasis to be placed on experimental depth. This could either come in the form of evaluation on interesting benchmarks where we might expect finding symmetries to outperform regular CNNs (e.g. Tu et al. (2022)) and/or more detailed analysis on more involved search spaces of whether the proposed methods are recovering the “right” symmetry for the dataset, even if synthetic.
The authors do not consider the random search baseline for their method. More generally, it would be useful to follow the NAS reproducibility checklist here (Lindauer & Hutter, 2020).
References:
Lindauer, Hutter. Best practices for scientific research on neural architecture search. JMLR 2020.
Tu, Roberts, Khodak, Shen, Sala, Talwalkar. NAS-Bench-360: Benchmarking neural architecture search on diverse tasks. NeurIPS 2022.
Clarity, Quality, Novelty And Reproducibility
Clarity: the writing is quite clear.
Quality: the empirical evaluation is somewhat limited in scope.
Novelty: the idea of searching over equivariant groups is new and interesting.
Reproducibility: code is promised upon publication. |
ICLR | Title
Equivariance-aware Architectural Optimization of Neural Networks
Abstract
Incorporating equivariance to symmetry groups as a constraint during neural network training can improve performance and generalization for tasks exhibiting those symmetries, but such symmetries are often not perfectly nor explicitly present. This motivates algorithmically optimizing the architectural constraints imposed by equivariance. We propose the equivariance relaxation morphism, which preserves functionality while reparametrizing a group equivariant layer to operate with equivariance constraints on a subgroup, as well as the [G]-mixed equivariant layer, which mixes layers constrained to different groups to enable within-layer equivariance optimization. We further present evolutionary and differentiable neural architecture search (NAS) algorithms that utilize these mechanisms respectively for equivariance-aware architectural optimization. Experiments across a variety of datasets show the benefit of dynamically constrained equivariance to find effective architectures with approximate equivariance.
1 INTRODUCTION
Constraining neural networks to be equivariant to symmetry groups present in the data can improve their task performance, efficiency, and generalization capabilities (Bronstein et al., 2021), as shown by translation-equivariant convolutional neural networks (Fukushima & Miyake, 1982; LeCun et al., 1989) for image-based tasks (LeCun et al., 1998). Seminal works have developed general theories and architectures for equivariance in neural networks, providing a blueprint for equivariant operations on complex structured data (Cohen & Welling, 2016; Ravanbakhsh et al., 2017; Kondor & Trivedi, 2018; Weiler et al., 2021). However, these works design model constraints based on an explicit equivariance property. Furthermore, their architectural assumption of full equivariance in every layer may be overly constraining; e.g., in handwritten digit recognition, full equivariance to 180◦ rotation may lead to misclassifying samples of “6” and “9”. Weiler & Cesa (2019) found that local equivariance from a final subgroup convolutional layer improves performance over full equivariance. If appropriate equivariance constraints are instead learned, the benefits of equivariance could extend to applications where the data may have unknown or imperfect symmetries.
Learning approximate equivariance has been recently approached via novel layer operations (Wang et al., 2022; Finzi et al., 2021; Zhou et al., 2020; Yeh et al., 2022; Basu et al., 2021). Separately, the field of neural architecture search (NAS) aims to optimize full neural network architectures (Zoph & Le, 2017; Real et al., 2017; Elsken et al., 2017; Liu et al., 2018; Lu et al., 2019). Existing NAS methods have not yet explicitly optimized equivariance, although partial or soft equivariant approaches like Romero & Lohit (2022) and van der Ouderaa et al. (2022) approach custom equivariant architectures. An important aspect of NAS is network morphisms: function-preserving architectural changes (Wei et al., 2016) which can be used during training to change the loss landscape and gradient descent trajectory while immediately maintaining the current functionality and loss value (Maile et al., 2022). Developing tools for searching over a space of architectural representations of equivariance would permit NAS algorithms to be applied towards architectural optimization of equivariance.
Contributions First, we present two mechanisms towards equivariance-aware architectural optimization. The equivariance relaxation morphism for group convolutional layers partially expands the representation and parameters of the layer to enable less constrained learning with a prior on symmetry. The [G]-mixed equivariant layer parametrizes a layer as a weighted sum of layers equivariant to different groups, permitting the learning of architectural weighting parameters.
Second, we implement these concepts within two algorithms for architectural optimization of partially-equivariant networks. Evolutionary Equivariance-Aware NAS (EquiNASE) utilizes the equivariance relaxation morphism in a greedy evolutionary algorithm, dynamically relaxing constraints throughout the training process. Differentiable Equivariance-Aware NAS (EquiNASD) implements [G]-mixed equivariant layers throughout a network to learn the appropriate approximate equivariance of each layer, in addition to their optimized weights, during training.
Finally, we analyze the proposed mechanisms via their respective NAS approaches in multiple image classification tasks, investigating how the dynamically learned approximate equivariance affects training and performance over baseline models and other approaches.
2 RELATED WORKS
Approximate equivariance Although no other works on approximate equivariance explicitly study architectural optimization, some approaches are architectural in nature. We compare our contributions with the most conceptually similar works to our knowledge.
The main contributions of Basu et al. (2021) and Agrawal & Ostrowski (2022) are similar to our proposed equivariant relaxation morphism. Basu et al. (2021) also utilizes subgroup decomposition but instead algorithmically builds up equivariances from smaller groups, while our work focuses on relaxing existing constraints. Agrawal & Ostrowski (2022) presents theoretical contributions towards network morphisms for group-invariant shallow neural networks: in comparison, our work focuses on deep group convolutional architectures and implements the morphism in a NAS algorithm.
The main contributions of Wang et al. (2022) and Finzi et al. (2021) are similar to our proposed [G]mixed equivariant layer. Wang et al. (2022) also uses a weighted sum of filters, but uses the same group for each filter and defines the weights over the domain of group elements. Finzi et al. (2021) uses an equivariant layer in parallel to a linear layer with weighted regularization, thus only using two layers in parallel and weighting them by regularization rather than parametrization. Mouli & Ribeiro (2021) also progressively relaxes equivariance constraints, but with regularized rather than parametrized constraints.
In more diverse approaches, Zhou et al. (2020) and Yeh et al. (2022) represent symmetry-inducing weight sharing via learnable matrices. Romero & Lohit (2022) and van der Ouderaa et al. (2022) learn partial or soft equivariances for each layer.
Neural architecture search Neural architecture search (NAS) aims to optimize both the architecture and its parameters for a given task. Liu et al. (2018) approaches this difficult bi-level optimization by creating a large super-network containing all possible elements and continuously relaxing the discrete architectural parameters to enable search by gradient descent. Other NAS approaches include evolutionary algorithms (Real et al., 2017; Lu et al., 2019; Elsken et al., 2017) and reinforcement learning (Zoph & Le, 2017), which search over discretely represented architectures.
3 BACKGROUND
We assume familiarity with group theory (see Appendix A.1). For discrete group G, the lth Gequivariant group convolutional layer (Cohen & Welling, 2016) of a group convolutional neural network (G-CNN) convolves1 the feature map f : G → RCl−1 from the previous layer with a filter with kernel size k represented as learnable parameters ψ : G→ RCl×Cl−1 . For each output channel
1We identify the correlation and convolution operators as they only differ where the inverse group element is placed and refer to both as ”convolution” throughout this work.
[G]-mixed equivariant layer
C4 equivariance C2 equivariance
A B
d ∈ [Cl], where [C] := {1, . . . , C}, and group element g ∈ G, the layer’s output is defined as:
[f ⋆G ψ]d(g) = ∑ h∈G Cl−1∑ c=1 fc(h)ψd,c(g −1h). (1)
The first layer is a special case: the input to the network needs to be lifted via this operation such that the output feature map of this layer has a domain of G. In the case of image data, an image x with C channels may be interpreted as a function x : Z2 → RC mapping each pixel in coordinate space to a real number for each channel, where the cth channel of x is referred to as xc. The input is x : Z2 → RC0 , so the layer is instead a lifting convolution:
[x ⋆G ψ]d(g) = ∑ y∈Z2 C0∑ c=1 xc(y)ψd,c(g −1y). (2)
We present our contributions in the group convolutional layer case, although similar claims apply for the lifting convolutional layer case.
4 TOWARDS ARCHITECTURAL OPTIMIZATION OVER SUBGROUPS
We propose two mechanisms to enable search over subgroups: the equivariance relaxation morphism and a [G]-mixed equivariant layer. The proposed morphism, depicted in Figure 1(A) and described in Section 4.1, changes the equivariance constraint from one group to another subgroup while preserving the learned weights of the initial group convolutional operator. The [G]-mixed equivariant layer, shown in Figure 1(B) and presented in Section 4.2, allows for a single layer to represent equivariance to multiple subgroups through a weighted sum.
4.1 EQUIVARIANCE RELAXATION MORPHISM
The equivariance relaxation morphism reparametrizes a G-equivariant group (or lifting) convolutional layer to operate over a subgroup of G, partially removing weight-sharing constraints from the parameter space while maintaining the functionality of the layer.
Let G′ ≤ G be a subgroup of G such that G′ \ G is finite. Let R be a system of representatives of the left quotient (including the identity element), so that G′ \ G = {G′r | r ∈ R} , where G′r := {g′r | g′ ∈ G′} . Given a G-equivariant group convolutional layer with feature map f and filter ψ, we define the relaxed feature map f̃ : G′ → R(Cl−1×|R|) and relaxed filter ψ̃ : G′ → R(Cl×|R|)×(Cl−1×|R|) as follows. For c ∈ [Cl−1], s, t ∈ R, d ∈ [Cl]:
f̃(c,s)(g ′) := fc(g ′s), (3)
ψ̃(d,t),(c,s)(g ′) := ψd,c(t −1g′s). (4)
We define the equivariance relaxation morphism from G to G′ as the reparametrization of ψ as ψ̃ (Eq. 4) and reshaping of f as f̃ (Eq. 3). We will show that the new layer, [f̃ ⋆G′ ψ̃](d,t)(g′), is equivalent to [f ⋆G ψ]d(g′t) down to reshaping. Since the mapping G′ × R → G, (g′, t) 7→ g′t, is bijective, every g can uniquely be written as g = g′t with g′ ∈ G′ and t ∈ R. For g ∈ G, G′g ∈ G′ \G has a unique representative t ∈ R with G′g = G′t, and g′ := gt−1 ∈ G′. Similarly, h ∈ G may be written as h = h′s with unique h′ ∈ G′ and s ∈ R. With these preliminaries, we get:
[f ⋆G ψ]d(g ′t) = [f ⋆G ψ]d(g) (5)
= ∑ h∈G Cl−1∑ c=1 fc(h)ψd,c(g −1h), (6) = ∑
h′∈G′ ∑ s∈R Cl−1∑ c=1 fc(h ′s)ψd,c(t −1g′−1h′s), (7)
= ∑
h′∈G′ Cl−1∑ c=1 ∑ s∈R f̃(c,s)(h ′)ψ̃(d,t),(c,s)(g ′−1h′), (8)
= [ f̃ ⋆G′ ψ̃ ] (d,t) (g′), (9)
which shows the claim. Thus, the convolution of f̃ with ψ̃ is equivariant to G but parametrized as a G′-equivariant group convolutional layer, where the representatives are expanded into independent channels. This morphism can be viewed as initializing a G′-equivariant layer with a pre-trained prior of equivariance to G, maintaining any previous training.
Standard convolutional layers are a special case of group-equivariant layers, where the group is translational symmetry over pixel space. Regular group convolutions are often implemented by relaxation to the translational symmetry group by expanding the filter via the appropriate group actions, allowing a standard convolution implementation from a deep learning library to be used. The equivariance relaxation morphism generalizes this concept to any subgroup. This, as well as how the equivariance relaxation morphism is implemented, is discussed further in Appendix B.
4.2 [G]-MIXED EQUIVARIANT LAYER
Towards learning equivariance, we additionally propose partial equivariance via a mixture of layers, each constrained to equivariance to different groups, applied in parallel to the same input then combined via a weighted sum. The equivariance relaxation morphism provides a mapping of group elements between a group and a subgroup. For a set of groups [G], such as a subgroup lattice of some group G, we define a [G]-mixed equivariant layer as:[
f⋆̂[G][ψ] ] (d,t) (g) = ∑
G∈[G]
zG [ f ⋆G′ ψ̃G ] (d,t) (g) (10)
= f ⋆G′ ∑ G∈[G] zGψ̃G (d,t) (g), (11)
where each element zG of [z] := {zG|G ∈ [G]} is an architectural weighting parameter such that∑ G∈[G] zG = 1, G ′ is a subgroup of all groups in [G], each element ψG of [ψ] is a filter with a domain of G, and ψ̃G is the transformation of ψG from a domain of G to G′ as defined in Equation 4. Thus, the layer is parametrized by [ψ] and [z], computing a weighted sum of operations that are equivariant to different groups of [G]. The layer may be equivalently computed by convolution of the input with the weighted sum of transformed filters, shown in Equation 11. We provide further implementation details in Appendix B.
5 EQUIVARIANCE-AWARE NEURAL ARCHITECTURE ALGORITHMS
We present two NAS methods that utilize the presented mechanisms for discovering appropriate equivariance during training: Evolutionary Equivariance-Aware NAS (EquiNASE) and Differen-
Algorithm 1 Evolutionary equivariance-aware neural architecture search. procedure EQUINASE(Initial symmetry group G)
Initialize population with a G-equivariant group convolutional network. for each generation do
for each network in population do Add children of network with relaxed equivariance constraints into population.
for each network in population do Partially train network on dataset.
Select Pareto-efficient and high accuracy networks as new population. return population
tiable Equivariance-Aware NAS (EquiNASD). Both methods optimize an architecture while learning weights, yielding a final trained network adapted to equivariances present in the training data. However, they differ in NAS paradigm and approximate equivariance representation: EquiNASE , in Section 5.1, searches for networks composed of layers each fully equivariant to possibly different groups, while EquiNASD, in Section 5.2, searches for smooth mixtures of equivariant layers.
5.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
Towards finding the optimal full equivariance per layer, the equivariance relaxation morphism presented in Section 4.1 is applied as the genetic operator in an evolutionary hill-climbing algorithm. The Evolutionary Equivariance-Aware NAS (EquiNASE) algorithm, given in Algorithm 1, is similar to other evolutionary NAS methods such as Elsken et al. (2017) with pareto selection as in Falanti et al. (2022). A population of networks, which starts with an individual with all layers equivariant to the largest possible group, undergoes mutation via equivariance relaxation and selection based on accuracy and parameter count to optimize neural architecture while learning network parameters. See Appendix A.2 for further background on evolutionary NAS.
In each generation, candidate networks are evaluated based on maximizing validation accuracy and minimizing parameter count: the pareto-dominant individuals with highest accuracy are kept, then additional high-accuracy individuals are added if necessary until the desired parent population size is reached. Offspring are generated from each parent separately by mutation using the relaxation morphism. This preserves the weights of the parametrized equivariance during mutation, allowing for the continuous training of networks over evolution by inheritance from parent individuals. Specifically, mutation reduces a single layer’s parametrized equivariance to a subgroup within the constraint that each layer has parametrized equivariance to a subgroup of all preceding layers. This constraint yields local equivariance properties for the network, as shown in Weiler & Cesa (2019) and Elsayed et al. (2020) to be empirically favorable in image classification tasks. The resulting individuals are each trained independently for a given training time, and then this process repeats.
The second objective of minimizing parameter count is intended to advance efficient networks, such as those with large symmetry groups. Accuracy-based selection alone would necessarily prefer larger networks as mutation via the equivariance relaxation morphism results in two networks with identical performance but different size, the relaxed network having more parameters, until training; potentially short-term increases in validation accuracy after training would then result in the selection of individuals with more parameters. Thus, the proposed strategy of selecting both paretodominant and high-accuracy individuals is intended to maintain a diverse yet efficient population without succumbing to overly greedy selections too early.
5.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
In a contrasting paradigm, the [G]-mixed equivariant layer presented in Section 4.2 allows for smoothly searching across a spectrum of equivariance for each layer via a differentiable NAS algorithm. Our Differentiable Equivariance-Aware NAS (EquiNASD) algorithm, defined in Algorithm 2, is inspired by DARTS (Liu et al., 2018) with significant changes detailed in the following paragraphs. EquiNASD simplifies the bilevel optimization of the architecture weighting parameters Z and filter weights Ψ into alternating independent updates, computing the gradient update for Z with the current, rather than optimal, Ψ for the current architecture encoded by Z, to boost search efficiency with minimal performance loss compared to higher order approximations (Liu et al., 2018).
Algorithm 2 Differentiable equivariance-aware neural architecture search.
procedure EQUINASD(Set of groups [G])
    Initialize network with [G]-mixed equivariant layers, parametrized by Ψ and Z.
    while not converged do
        Update Z by ∇ZL(Ψ, Z).
        Update Ψ by ∇ΨL(Ψ, Z).
    return trained network
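A minimal PyTorch-style sketch of one alternating update follows, assuming (as in Appendix C) separate optimizers over the architecture weights Z and the filter weights Ψ; batches is any iterator yielding (input, label) pairs drawn from the training split.

def equinas_d_step(model, arch_opt, weight_opt, batches, loss_fn):
    """One alternating EquiNASD update (a sketch). The Z step uses the
    current, not optimal, Psi: a first-order approximation as in DARTS."""
    # Architecture step: update Z on one batch.
    for opt in (arch_opt, weight_opt):
        opt.zero_grad()
    x, y = next(batches)
    loss_fn(model(x), y).backward()
    arch_opt.step()
    # Weight step: update Psi on a second, independently drawn batch.
    for opt in (arch_opt, weight_opt):
        opt.zero_grad()
    x, y = next(batches)
    loss_fn(model(x), y).backward()
    weight_opt.step()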
In most differentiable NAS search spaces, the desired output architecture is discretized to select a subset of architectural options within constraints, then the weights are re-initialized and trained within the static architecture. In our formulation, this is not necessary as any mixed operation can be equivalently expressed as a single layer equivariant to any group G′ that is a common subgroup to all groups of the mixed operation (Eq. 11): in our experimental case, this is a standard translation-equivariant convolutional layer, so the final model can be equivalently expressed as a standard convolutional model with encoded partial equivariance. Thus, the final optimized architecture and trained weights are output from the single search process. We explore the standard NAS paradigm, where weights are reinitialized and trained in the final static architecture, in Appendix D.
In order to enforce that the scaling of each filter does not confound the architecture weighting parameters, we use the weight normalizing reparametrization (Salimans & Kingma, 2016) and do not update the scalar norm parameter of each filter after initialization.
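A minimal sketch of this choice, assuming PyTorch's classic weight normalization utility (channel sizes here are illustrative):

import torch.nn as nn
from torch.nn.utils import weight_norm

# Reparametrize each filter as w = g * v / ||v|| and freeze the scalar
# norm g at its initial value, so only the direction v is trained and
# filter magnitudes cannot confound the architecture weights Z.
conv = weight_norm(nn.Conv2d(16, 32, kernel_size=3, padding=1))
conv.weight_g.requires_grad_(False)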
We do not use disjoint datasets for updating Ψ and Z, but rather draw one batch for Ψ and another for Z independently and randomly from the same training split. This allows for a standard dataset split and to use the validation set for hyperparameter tuning.
These two NAS approaches present adaptations of two standard types of NAS, evolutionary and differentiable, to the search for optimal partial equivariance. We next study empirically the two EquiNAS methods on three datasets, one with known rotational symmetry and two with unknown but visually significant rotational and reflectional symmetry.
6 EXPERIMENTS
We focus on the regular representation of groups and show experiments with reflectional and up to 4-fold rotational symmetry groups applied to image classification tasks. Examples of symmetry groups acting on pixel space, which corresponds to Z2, include T (2), which consists of discrete translations in both dimensions; the cyclical groups Cn, which consist of n-fold rotations; and the dihedral groupsDn, which consist of reflections with n-fold rotations, where n ∈ {1, 2, 4} for exact symmetry without interpolation. The p4 group consists of discrete translations and multiples of 90◦ rotations and may be represented as T (2) ⋊ C4. The p4m group consists of discrete translations, reflections, and multiples of 90◦ rotations and may be represented as T (2)⋊D4. As standard convolutional layers are already equivariant to T (2), we refer to layers also equivariant to n-fold rotations with or without reflections asDn orCn-equivariant, respectively. So, aC1 equivariant convolutional layer is a standard translation-equivariant convolutional layer. We use {C1, D1, C2, D2, C4, D4} as the set of potential groups for mutation in EquiNASE and as [G] in EquiNASD.
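For intuition, the discrete actions of these groups on square filters can be realized with 90◦ rotations and flips; a minimal illustrative sketch (the actual implementation also permutes the group channel axis, see Appendix B):

import torch

def d4_orbit(w):
    """The 8 transforms of a square filter under D4: four 90-degree
    rotations, each with and without a horizontal flip. The first four
    entries form the C4 orbit."""
    rots = [torch.rot90(w, k, dims=(-2, -1)) for k in range(4)]
    return rots + [torch.flip(r, dims=(-1,)) for r in rots]

w = torch.randn(3, 3)
orbit = d4_orbit(w)
assert len(orbit) == 8  # |D4| = 8, |C4| = 4
# The group action is closed: rotating the filter lands back in its orbit.
assert any(torch.equal(torch.rot90(w, 1, dims=(-2, -1)), o) for o in orbit)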
We present experiments on image classification for a variety of datasets. The Rotated MNIST dataset (Larochelle et al., 2007, RotMNIST) is a version of the MNIST handwritten digit dataset but with the images rotated by any angle. This task serves as a simple investigational study with known symmetry, while the following two tasks are more realistic and complex. The Galaxy10 DECals dataset (Leung & Bovy, 2019, Galaxy10) contains galaxy images in 10 broad categories. The ISIC 2019 dataset (Codella et al., 2018; Tschandl et al., 2018; Combalia et al., 2019, ISIC) contains dermoscopic images of 8 types of skin cancer plus a null class. For Galaxy10 and ISIC, we downsample the images to 64 × 64 due to computational constraints, which adds notable difficulty to the tasks. These tasks exhibit varying levels of rotational and reflectional symmetry, motivating architectural optimization to determine the most effective application of equivariance constraints.
Across all experiments, the architectures are designed to have consistent channel dimensions once expanded to a standard translation-equivariant convolutional layer for each layer across models. Thus, constrained equivariance to a larger symmetry group results in fewer learnable parameters. A layer constrained to C4 equivariance has |C4 \ D4| = 2 times as many independent channels, and twice as many parameters, as a layer constrained to D4 equivariance. This is a notably different paradigm than other works that equate parameter counts across architectures with different equivariance properties.
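A small bookkeeping sketch of this convention; the concrete widths are assumptions taken from the backbone in Appendix C (a D4 layer with 16 channels expands to 16 · |D4| = 128 standard channels):

# Every layer expands to the same number of standard C1 channels, so a
# larger symmetry group means fewer independent channels and parameters.
group_size = {"C1": 1, "D1": 2, "C2": 2, "D2": 4, "C4": 4, "D4": 8}
expanded_channels = 128
for g, s in group_size.items():
    print(f"{g}: {expanded_channels // s} independent channels")
# D4 -> 16 and C4 -> 32: the C4 layer has |C4 \ D4| = 2 times as many.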
As baseline comparisons, we train and test G-CNNs with static architectures. In addition to the static baselines, we re-implement the residual pathway priors (RPP) approach by Finzi et al. (2021) as a C1 equivariant layer with regularization in parallel with a D4 equivariant convolutional layer.
Further experiment details such as architecture details and other hyperparameters are in Appendix C. For each paradigm of experiments, we present results in the following subsections, with general discussion in Section 7. Additional ablation and random search baselines are in Appendix D.
6.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 1. The advantages of equivariance search methods are most apparent in the Galaxy10 benchmark. While EquiNASE outperforms most baselines on RotMNIST and all baselines on ISIC, its performance and some of its final architectures are similar to those of the D4 baseline for both tasks. However, the D4 baseline fails at the Galaxy10 task, demonstrating that the same equivariant architecture cannot always be naively applied. Both search methods, EquiNASE and RPP, outperform all baseline models on Galaxy10, and by a large margin for EquiNASE.
The evolutionary progress on RotMNIST is shown in Figure 2: the selected population maintains a fully equivariant network in every generation. The final selected population originates from two main lineages, one staying fully equivariant until the last generations and the other diverging from
the fully equivariant network midway through, showing that training with dynamically constrained parametrizations can produce performant models.
In addition to the normally initialized static baselines, we also train and test baselines that are initialized with priors to larger symmetry groups. These are implemented by initializing all layers to be constrained to the prior symmetry group, then using the equivariance relaxation morphism on each layer. EquiNASE searches for relaxation schedules that yield trained priors on equivariance, while these additional baselines yield untrained priors. The results in Table 1 show that the C1-equivariant networks generally improve with either equivariance prior, while the C4-equivariant networks perform better with D4 equivariance initialization only when the D4-constrained baselines also work well. The untrained prior methods do not perform as well as EquiNASE on RotMNIST, showing the benefit of investing some training time to the constrained equivariance. For the other tasks, the baselines with priors have better performances than their constrained baseline counterparts.
6.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 2. EquiNASD achieves better test accuracy than the other comparable methods on RotMNIST and Galaxy10. Due to differences in training protocol, only comparisons of relative rankings with Table 1 are possible: baseline methods’ accuracies follow similar ranking patterns, suggesting the benefit of general C4 equivariance for RotMNIST and Galaxy10 and general D4 equivariance, including RPP, for ISIC. In this training protocol, notably with adaptive optimizers, the results are more consistent across methods and trials.
The dynamics of architecture weighting parameters for one exemplary trial per task are shown in Figure 3. The general trend of less constrained layers toward the output supports the conjecture of local equivariance being beneficial. However, this effect is less consistent for ISIC, the only task where EquiNASD did not exceed baselines, possibly indicating less inherent symmetry. As seen in
Appendix D, the final mixing of architectures for ISIC included a high level of C1, indicating that feature analysis outside of these symmetry groups is important for this benchmark.
Previous differentiable NAS works often used regularization of network size or even architecture weighting parameters themselves to encourage efficient architectures with a single highly weighted choice for each layer. However, our algorithm shows strong preference for a single, more equivariant and thus more expressive layer, notably to D4 or C4 equivariance, without such regularization. This may be due to the bilevel optimization dynamics: more constrained layers may be able to make more effective updates and thus become favorable compared to the lagging larger layers.
7 DISCUSSION
To our knowledge, this is the first work that proposes search methods for networks with dynamically constrained equivariance. Many NAS approaches separately search for an architecture and then reinitialize and retrain the weights, while our two proposed approaches find an optimal architecture with trained weights in a single process, notably with dynamically constrained weights. Gradient-based tuning (Maclaurin et al., 2015) has shown the benefit not only of optimizing hyperparameters but also of dynamically adjusting them during training (Lichtarge et al., 2022). Dynamically constrained weights can reap the benefits of specialization and generalization over the course of training.
Our two equivariance-aware NAS algorithms take distinct approaches: EquiNASE searches for architectures composed of discretely equivariant layers, while EquiNASD searches for continuous mixtures of equivariance within each layer. The EquiNASD algorithm avoids many known problems in differentiable NAS, such as the discretization gap that occurs when searching over a continuous relaxation of a discrete architectural search space (Xie et al., 2021), like that of EquiNASE. Towards searching for discretely equivariant layers using the [G]-mixed equivariant layer, proximal NAS algorithms use techniques such as projection (Yao et al., 2020) and straight-through estimation (Li et al., 2022) to avoid the discretization gap and thus may be effective for this application.
EquiNASE is innately greedy: at each selection step, the population is evaluated by known current performance rather than unknown final performance, which biases selection toward architectures that train quickly. Networks with more equivariance constraints tend to learn faster, but equivariance relaxation may yield large gradients for newly unconstrained parameters and thus fast increases in performance. Further work could utilize metrics for final performance, such as proxies (White et al., 2022).
The theoretical and algorithmic contributions of this work are applicable beyond the image classification experiments presented to architectures with parametrized equivariance to any discrete group. We leave the extension to other group representations and domains as future work, for example the continuous case via careful analysis of the regular representation, provided G′ \ G remains finite. Our proposed equivariance-aware NAS problems can be practically applied to find effective models or architectures for datasets with hypothesizable symmetry. EquiNASE may particularly work well on tasks that benefit from local equivariance, determined by analyzing the architecture weighting parameters from first applying the more efficient EquiNASD, as well as for finding good discrete architectures within which to retrain weights, based on the ablation and random comparisons. We thus recommend EquiNASD for practical applications where the final model need not be restricted to discrete equivariance; when it must be, EquiNASD can still be used to inform design decisions for applying EquiNASE.
Beyond NAS, the equivariance relaxation morphism could be used in other applications such as fine-tuning and distillation. Layers of a pre-trained equivariant network could be expanded via equivariance relaxation before fine-tuning on the same or a new task. Similarly, a network could be distilled to a wider architecture for additional performance benefits.
Conclusion We present two mechanisms towards equivariance-aware architectural optimization, the equivariance relaxation morphism and the [G]-mixed equivariant layer, and two NAS algorithms that respectively implement these mechanisms evolutionarily as EquiNASE and differentiably as EquiNASD. We investigate how dynamic equivariance achieved by these algorithms affects the training and performance of models across multiple image classification tasks of varying complexity and assumed symmetry, demonstrating that these techniques can search for performant architectures and weights even on noisy tasks. The proposed mechanisms and algorithms are extendable beyond vision tasks to any architecture with parametrized equivariance to any discrete group.
A ADDITIONAL BACKGROUND
A.1 SYMMETRIES IN NEURAL NETWORKS
A symmetry of an object is a mapping of the object onto itself such that structure is preserved. A symmetry group G is a set of such mappings along with a binary operation · : G × G → G, known as the group product, that satisfies axioms for closure, associativity, the identity, and the inverse (Herstein, 2006). A group G acts on a set X via the group action . : G × X → X, (g, x) ↦ g.x, that satisfies axioms for identity and compatibility: X is called a G-space. Equivariance is the property of a mapping such that transformation of the input results in equivalent transformation of the output. Formally, a mapping h : X → Z between two G-spaces is G-equivariant if for all g ∈ G and x ∈ X we have: h(g.x) = g.h(x). For example, an image segmentation neural network should be T (2)-equivariant: shifting the input should result in the same shift in the output.
Invariance is a special case of equivariance, where the output of the function is completely independent of transformation of the input. Formally, a mapping h : X → Z is G-invariant if for all g ∈ G and x ∈ X we have: h(g.x) = h(x). For example, an image classification network should be T (2)-invariant: shifting the input should not change the output. Symmetries leave objects invariant.
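These definitions can be checked numerically. The following is a small sketch verifying T(2)-equivariance of an ordinary convolution, where circular padding makes the translation action exact on the finite pixel grid:

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 16, 16)      # a random 3-channel "image"
w = torch.randn(8, 3, 3, 3)        # a random bank of 3x3 filters

def conv(t):
    # Circular padding so translations wrap around the grid exactly.
    return F.conv2d(F.pad(t, (1, 1, 1, 1), mode="circular"), w)

def shift(t):                      # a group element g of T(2)
    return torch.roll(t, shifts=(2, 5), dims=(-2, -1))

# h(g.x) == g.h(x): convolving the shifted input equals shifting the output.
assert torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5)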
For two groups G and H with group products ·G and ·H respectively where H acts on G with group action ., the (outer) semi-direct product G ⋊ H of H acting on G is a group composed of the set of elements G ×H with group product (g, h) · (g′, h′) = (g ·G (h.g′), h ·H h′) and inverse (g, h)−1 = (h−1.g−1, h−1).
A subgroup H of G is a nonempty subset with the same group product that also fulfills the group axioms. Then, gH = {g · h|h ∈ H} and Hg = {h · g|h ∈ H} denote the left coset and right coset, respectively, of H with representative g.
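For example, taking G = D4 and H = C4 with any fixed reflection m, every element of D4 is either a rotation or a rotation composed with m, so the right cosets are C4 and C4m; hence R = {e, m} is a system of representatives and |R| = 2, the case behind the channel-count factor in Section 6.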
A.2 NEURAL ARCHITECTURE SEARCH
Evolutionary algorithms are optimization methods inspired by evolution in biology, where individuals in a population compete with their phenotypic traits in order to pass on their genotypic traits to offspring. The population is the current collection of individuals. Each individual is an instance of the object to be optimized and has a genotype that is decoded into a phenotype. In this case, each individual is a neural network, with a genotype that encodes the parametrized equivariance group of each convolutional layer, represented as a vector of integers. The individual continues training on the task before competing against other individuals to be selected as a parent to mutate to generate the next population. Each parent itself is kept for the next population, as well as each valid child that is generated via the equivariance relaxation morphism, such that they are functionally equivalent to their parent at initialization (although with a different architecture) and thus have the same fitness before training. Each individual in this population is partially trained, such that these children diverge from their siblings and parent, so that the next set of parents may be selected and this process repeats.
Pareto dominance can be used in multi-objective optimization to select the next parent population. For a population of individuals each scored on n objectives, an individual is Pareto-optimal if no other individual is at least as good on every objective and strictly better on at least one. The Pareto front is the set formed by all Pareto-optimal individuals.
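A minimal sketch of this selection for the two objectives used in EquiNASE (maximize validation accuracy, minimize parameter count):

def pareto_front(scored):
    """Indices of Pareto-optimal individuals given (accuracy, param_count)
    pairs: an individual is dominated if another has accuracy >= and
    parameter count <=, with at least one strict inequality."""
    front = []
    for i, (acc_i, par_i) in enumerate(scored):
        dominated = any(
            acc_j >= acc_i and par_j <= par_i and (acc_j > acc_i or par_j < par_i)
            for j, (acc_j, par_j) in enumerate(scored) if j != i
        )
        if not dominated:
            front.append(i)
    return front

print(pareto_front([(0.98, 5e4), (0.97, 1e4), (0.95, 2e4)]))  # -> [0, 1]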
B IMPLEMENTATION DETAILS
Group convolutional layers The implementation of regular group convolutional layers can be viewed as a special case of our proposed equivariance relaxation morphism. With the preliminaries given in Section 4.1 and the case of G′ = T (2), f̃ and ψ̃ are computed such that f̃(c,s)(g′) := fc(g′s) and ψ̃(d,t),(c,s)(g′) := ψd,c(t−1g′s) for each g′ ∈ T (2), c ∈ [Cl−1], s, t ∈ R, and d ∈ [Cl].
Let SG := |R|. The learnable parameters of the Gl-equivariant lth layer with Cl output channels, corresponding to ψ, are stored as a tensor of size Cl × Cl−1 × SGl × Kl × Kl. The filter transformation expands this filter tensor by performing the action of each r ∈ R on another copy of the tensor to expand its shape along a new dimension, resulting in a tensor of size Cl × SGl × Cl−1 × SGl × Kl × Kl, which is reshaped to ClSGl × Cl−1SGl × Kl × Kl. The input tensor to the lth layer, corresponding to f, is in the shape of B × Cl−1 × SGl × Hl−1 × Wl−1, which is reshaped to B × Cl−1SGl × Hl−1 × Wl−1 and convolved with the expanded filter. The output of shape B × ClSGl × Hl × Wl is reshaped to B × Cl × SGl × Hl × Wl.
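A sketch of this forward pass in PyTorch shapes; actions is a hypothetical list of callables, one per representative r ∈ R, applying that group element to the filter tensor (spatial dims and group axis), and the channel-reordering details described below are elided:

import torch
import torch.nn.functional as F

def group_conv(f, psi, actions):
    """f: (B, C_in, S, H, W); psi: (C_out, C_in, S, k, k); len(actions) == S."""
    C_out, C_in, S, k, _ = psi.shape
    # Expand: one transformed copy of the filter bank per group element.
    expanded = torch.stack([a(psi) for a in actions], dim=1)  # (C_out, S, C_in, S, k, k)
    expanded = expanded.reshape(C_out * S, C_in * S, k, k)
    B, _, _, H, W = f.shape
    out = F.conv2d(f.reshape(B, C_in * S, H, W), expanded, padding=k // 2)
    return out.reshape(B, C_out, S, out.shape[-2], out.shape[-1])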
Equivariance relaxation morphism To implement the equivariance relaxation morphism, the new filter tensor is initialized by applying Equation 4 such that the result of applying the preceding filter transformation is equivalent. Our implementation of group actions relies on group channel indexing to represent the order of group elements: to ensure this is consistent before and after the morphism, the appropriate reordering of the output and input channels of the expanded filter are applied upon expansion. The new filter tensor has a shape of Cl|R| × Cl−1|R| × SGl/|R| × Kl × Kl. The [G]-mixed equivariant layer is built on top of this implementation, also using proper input and output channel reordering between layers to ensure correct mixing of group channels.
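For the extreme case G′ = T(2) the morphism is particularly simple: the expanded filter itself becomes the free parameter of a plain convolution. A hedged sketch (intermediate subgroups work analogously, folding |R| group copies into the channel dimensions plus the reordering noted above):

import torch
import torch.nn as nn

def relax_to_translation(expanded_filter):
    """Reinterpret an expanded filter of shape (C_out*S, C_in*S, k, k) as
    the weights of an unconstrained Conv2d: the layer computes the same
    function at initialization but its weights are no longer shared."""
    out_ch, in_ch, k, _ = expanded_filter.shape
    conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
    with torch.no_grad():
        conv.weight.copy_(expanded_filter)
    return conv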
C EXPERIMENTAL DETAILS
Architecture backbone For both EquiNASE and EquiNASD experiments, we use the same backbone architecture, such that the static baselines have the same architecture across experiments. The architectures have a lifting layer followed by 7 group convolutional layers, for a total of 8 convolutional layers. After 4 layers, the channel count doubles, from 16 to 32 for a D4 equivariant layer and scaling up for smaller symmetry group equivariance constraints. An average pooling layer is placed after every other layer for all architectures and additionally after the fifth and seventh convolutional layers for Galaxy10 and ISIC. After the final group convolutional layer is a group-dimension average pooling followed by two linear layers to the output dimension. Every convolutional and linear layer except the output layer is immediately followed by a batchnorm then a ReLU.
Hyperparameters The hyperparameters for each algorithm are selected such that baselines only differ by training time and optimizers. The learning rates were selected by grid search over baselines on RotMNIST. For all experiments in Section 6.1, we use a simple SGD optimizer with learning rate 0.1 to avoid confounding effects such as momentum during the morphism. For EquiNASE, the parent selection size is 5, the training time per generation is 0.5 epochs, and the number of generations is 50 for all tasks. Baselines were trained for the equivalent number of epochs. For all experiments in Section 6.2, we use separate Adam optimizers for Ψ and Z, each with a learning rate of 0.01 and otherwise default settings. The total training time is 100 epochs for RotMNIST and 50 epochs for Galaxy10 and ISIC. For RPP, we use a C1-equivariant layer with an L2 regularization parameter of 1e−6 in parallel with a D4-equivariant layer without regularization. For RotMNIST and MNIST, we use the standard training and test splits with a batch size of 64, reserving 10% of the training data as the validation set. For Galaxy10, we set aside 10% of the dataset as the test set, reserving 10% of the remaining training data as the validation set. For ISIC, we set aside 10% of the available training dataset as the test set, reserving 10% of the remaining data as the validation set and the rest as training data. For the latter two datasets, we resize the images to 64 × 64 due to computational constraints and use a batch size of 32. The validation sets were previously used for hyperparameter tuning: for experimental results, they are only used for the experiments in Section 6.1 as necessary for the EquiNASE algorithm. No data augmentation is performed, although the datasets are normalized.
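For concreteness, a sketch of the split scheme described above using PyTorch utilities; the fractions are as stated, while the fixed seed is an assumption added for reproducibility:

import torch
from torch.utils.data import random_split

def split_with_val(dataset, frac=0.1, seed=0):
    """Reserve a fraction of a dataset (10% here) with a fixed generator."""
    n_held = int(len(dataset) * frac)
    gen = torch.Generator().manual_seed(seed)
    rest, held = random_split(dataset, [len(dataset) - n_held, n_held], generator=gen)
    return rest, held

# Galaxy10/ISIC: first carve out the test set, then the validation set.
# train_val, test = split_with_val(full_dataset)
# train, val = split_with_val(train_val)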
D ADDITIONAL EXPERIMENTS AND FIGURES
To explore the symmetry discovery of EquiNASD, we apply it to six augmentations of the MNIST dataset (LeCun et al., 1998), where each augmentation applies the group actions of one group in {C1, C2, C4, D1, D2, D4}, respectively. The resulting architecture dynamics of this experiment are shown in Figure 4: less augmented versions retain some inherent symmetry, while more augmented versions induce stronger architectural changes towards layers that are equivariant to larger groups. Across all augmentations, earlier layers tended towards more constraints to equivariance.
As ablation studies and comparisons, we implement two kinds of random search for each NAS method. The first ablates smart architecture search: EquiNASE Random Select works as described in Algorithm 1 but with random parent selection (instead of Pareto-front selection), and EquiNASD Random Z works as described in Algorithm 2 but with random Z updates (instead of gradient descent) obtained by shuffling Z gradients. The second is more akin to standard NAS random search: for the evolutionary paradigm, we train 30 randomly selected static architectures in the discrete architecture search space for the same training time and select the top 5 by validation accuracy; for the differentiable paradigm, we train 25 randomly selected static architectures in the continuous architecture search space and select the top 5 by validation accuracy. 30 and 25 were respectively calculated to be approximately the same compute cost as the trials of EquiNASE and EquiNASD. These are labeled as “Random Static” for both the evolutionary and differentiable paradigms. Since Random Static trains static architectures while EquiNASE and EquiNASD dynamically search for both architectures and parameters, we take the best 5 architectures for each and retrain their parameters from scratch as in the standard NAS paradigm, labeled as EquiNASE Retrain and EquiNASD Retrain, respectively, for fair comparison to Random Static.
The results of these additional experiments are compared against those of our main algorithms and baselines in Figures 5 and 8.
Shown in Figure 5, EquiNASE outperforms EquiNASE Random Select, showing the benefit of using informed selection to guide the relaxation of equivariance constraints over training. Additionally, EquiNASE Retrain outperforms the Random Static baseline, showing that using compute in an informed search is more beneficial than just randomly searching the space of static architecture constraints.
EquiNASD finds competitive architectures on average and can find architectures that outperform baseline choices such as architectures fully equivariant to C1 or D4. Comparing EquiNASD Retrain to EquiNASD results in Figure 8, retraining a resulting architecture is not consistently better or worse than using the final weights from EquiNASD, showing that there is no disadvantage to training weights during search, which avoids the additional cost of a post-search training step.
The use of randomized loss information in Random Static in Figure 8 shows that an informed search for architecture hyperparameters is generally useful. However, the experiments on the ISIC benchmark demonstrate that the architecture search can be deceptive and that random loss information can outperform informed loss. This motivates exploration into the use of noise during the search process for architectural parameters.
A sampling of random continuous architectures in Random Static in Figure 8 shows that random architectures can perform well on problems where fully equivariant architectures like the D4 baseline already perform well. However, on the Galaxy10 problem, the D4 and C4 baselines have high variance, suggesting that a fully equivariant architecture is sub-optimal. On this task, EquiNASD greatly outperforms a search of random architectures, demonstrating that EquiNASD can discover the appropriate equivariance for a specific dataset over fixed or randomly selected architectures.
The search space for EquiNASD is already well-formed for random architectures, compared to the discrete search space of EquiNASE . This is enabled by the [G]-mixed equivariant layer, which is a contribution of this work. Random non-mixed equivariant architectures do worse on all three benchmarks compared to random architectures which use the [G]-mixed equivariant layer. This can explain why the EquiNASD results are closer to random baselines than the EquiNASE results, as the search space permits easily finding the appropriate mix of equivariances compared to a discretized search space.
Our methods enable searching for both architecture and parameters concurrently in a single training process. This approach is more efficient than NAS approaches that only search for architectures, requiring a retraining process within the resulting architecture for evaluation. However, these ablation and random search comparisons show that our algorithms may get performance gains from adding a retraining phase with a tradeoff of further compute cost. | 1. What is the focus and contribution of the paper regarding semantic correspondence?
2. What are the strengths of the proposed approach, particularly in terms of neural representation?
3. What are the weaknesses of the paper, especially for the experiment section?
4. Do you have any concerns about the semantic correspondence representation?
5. What are the limitations regarding the NeMF approach?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes two approaches to optimizing the weights and architecture of networks with approximate equivariance. The first approach EquiNAS_E uses an evolutionary algorithm to progressively weaken the symmetry constraints in the layers of an equivariant NN as it trains. The second EquiNAS_D uses differentiable weights, optimized alternatingly with the normal network weights, which combine different kernels with different levels of symmetry constraints. Both methods are tested on image classification tasks.
Strengths And Weaknesses
Strengths
Equivariant networks have more hyperparameters to tune due to the choice of symmetry group. For example, in E(2)-CNN (https://arxiv.org/abs/1911.08251), the authors experiment with symmetry groups Cn and Dn for many values of n before finding n=12 to work best. This paper proposes a method for automatically determining the correct level of symmetry.
Moreover, on many problems end-to-end strict symmetry is not desirable if the symmetry in the domain is only approximate or if symmetry constraints interfere with optimization. The proposed method adds to the relatively few methods which learn relaxed symmetry constraints and adapt to the level of symmetry in the data.
I found the equivariance relaxation morphism to be a simple and effective idea for relaxing symmetry constraints in networks on a per-layer basis.
The [G]-mixed equivariant layer also seems like a reasonable idea for selecting the degree of equivariance by differentiably choosing weights over kernels with different levels of equivariance. In theory this allows the model to learn a specific equivariance or an approximate equivariance which is in-between. It also allows the network to reduce the symmetry constraint in later layers, which is what is done effectively by CNNs in practice by downsampling with stride.
Experimental results on the learned subgroup weights (Fig. 2) reveal interesting trends showing that the networks do select for higher equivariance in early layers and lower equivariance in later layers which matches what has found to be effective in practice for equivariant networks.
Weaknesses / Questions
I am not sure of the framing of the method. I am not an expert on NAS and am open to correction, but it seems to me both proposed methods find a trained network, not an architecture, since they are training the network weights and selecting an architecture at the same time. The final architecture may not have great performance if trained from scratch. In this sense, it seems the method is more a special training procedure than NAS method. EquiNAS_D, in particular, is simply an architecture in which some weights are optimized alternatingly.
Given the evolutionary algorithm starts with a single network, how do we know to what extent its evolution is guided by architecture versus the specific weights in the mutated networks? In experiments, I think both population level variance and initialization variance should be accounted for.
One of the biggest issues is the fact that the relaxation morphism increases the number of parameters, making it difficult to distinguish whether performance increases come from relaxing constraints or from increasing parameter count. The authors do discuss this issue and suggest their optimization strategy over both parameter count and accuracy helps. It would be better, however, if the number of parameters could be preserved by the relaxation.
The experimental results are not completely convincing of when this method would add significant value over simply doing hyperparameter search over equivariant networks. In Table 1, an equivariant baseline is best on rotMNIST and ISIC, and a non-rotationally equivariant network with equivariant initialization is best for Galaxy10. In table 2, EquiNAS_D does do best for rotMNIST, but the score is much lower than the best values in E(2)-CNN after full tuning of the best equivariant method. In Galaxy10, EquiNAS_D does outperform, but with very high variance.
I have some issues with the clarity of the description of 4.2. The condition that all groups in [G] be subgroups or supergroups of all other groups in [G] does not appear to be met by the experimental example since D2 is not a subgroup or supergroup of C4. It's also not clear to me why this condition is necessary. It is a fairly strict condition, although I think it would include useful cases. I am also a bit unclear on the form of the input signal f. In order to be convolved with Ψ̃ and have G-equivariance for each G in [G], f should be defined over a supergroup of all G ∈ [G]. Is this the case? Am I correct that if zG is a one-hot vector on the group G, then the operation is G-equivariant?
Questions
If the evolution algorithm can only mutate in the direction of symmetry constraint relaxation, doesn't this bias the algorithm towards relaxing all the symmetry constraints?
Does this work have a relation to diffstride (https://arxiv.org/abs/2202.01653) in that stride is typically a hyperparameter that corresponds to a choice of the subgroup of the translation group and can be shown to be differentiably optimized?
After equivariance is broken in a given layer, there isn't much point in principle to preserving it downstream. Are mutations automatically applied to all following layers or did your method discover this? If so, I think that is interesting enough to be considered a strength.
Minor
In Eqn 1, if you reverse the order of f and Ψ and drop the sum over c and indices c, d, the equation can be simplified using matrix multiplication.
4.1, para 2, "neutral element" -> "identity element"
4.1 may benefit from writing as a proposition
4.1, Para 3, Line 5, h ∈ G′ -> h′ ∈ G′
Sec 5., Line 4, typo "optimizffe"
Clarity, Quality, Novelty And Reproducibility
The paper is fairly clear with some weak spots corresponding to my questions above. It could use illustrations and examples to help explain the method. The quality is okay. The experiments could be more thorough but do provide a basic trial for the proposed method. The method is tested on 3 different datasets. The method seems novel to me. There is no code provided but it is promised. The description seems clear enough to repeat similar experiments. |
ICLR | Title
Equivariance-aware Architectural Optimization of Neural Networks
Abstract
Incorporating equivariance to symmetry groups as a constraint during neural network training can improve performance and generalization for tasks exhibiting those symmetries, but such symmetries are often not perfectly nor explicitly present. This motivates algorithmically optimizing the architectural constraints imposed by equivariance. We propose the equivariance relaxation morphism, which preserves functionality while reparametrizing a group equivariant layer to operate with equivariance constraints on a subgroup, as well as the [G]-mixed equivariant layer, which mixes layers constrained to different groups to enable within-layer equivariance optimization. We further present evolutionary and differentiable neural architecture search (NAS) algorithms that utilize these mechanisms respectively for equivariance-aware architectural optimization. Experiments across a variety of datasets show the benefit of dynamically constrained equivariance to find effective architectures with approximate equivariance.
1 INTRODUCTION
Constraining neural networks to be equivariant to symmetry groups present in the data can improve their task performance, efficiency, and generalization capabilities (Bronstein et al., 2021), as shown by translation-equivariant convolutional neural networks (Fukushima & Miyake, 1982; LeCun et al., 1989) for image-based tasks (LeCun et al., 1998). Seminal works have developed general theories and architectures for equivariance in neural networks, providing a blueprint for equivariant operations on complex structured data (Cohen & Welling, 2016; Ravanbakhsh et al., 2017; Kondor & Trivedi, 2018; Weiler et al., 2021). However, these works design model constraints based on an explicit equivariance property. Furthermore, their architectural assumption of full equivariance in every layer may be overly constraining; e.g., in handwritten digit recognition, full equivariance to 180◦ rotation may lead to misclassifying samples of “6” and “9”. Weiler & Cesa (2019) found that local equivariance from a final subgroup convolutional layer improves performance over full equivariance. If appropriate equivariance constraints are instead learned, the benefits of equivariance could extend to applications where the data may have unknown or imperfect symmetries.
Learning approximate equivariance has been recently approached via novel layer operations (Wang et al., 2022; Finzi et al., 2021; Zhou et al., 2020; Yeh et al., 2022; Basu et al., 2021). Separately, the field of neural architecture search (NAS) aims to optimize full neural network architectures (Zoph & Le, 2017; Real et al., 2017; Elsken et al., 2017; Liu et al., 2018; Lu et al., 2019). Existing NAS methods have not yet explicitly optimized equivariance, although partial or soft equivariant approaches like Romero & Lohit (2022) and van der Ouderaa et al. (2022) approach custom equivariant architectures. An important aspect of NAS is network morphisms: function-preserving architectural changes (Wei et al., 2016) which can be used during training to change the loss landscape and gradient descent trajectory while immediately maintaining the current functionality and loss value (Maile et al., 2022). Developing tools for searching over a space of architectural representations of equivariance would permit NAS algorithms to be applied towards architectural optimization of equivariance.
Contributions First, we present two mechanisms towards equivariance-aware architectural optimization. The equivariance relaxation morphism for group convolutional layers partially expands the representation and parameters of the layer to enable less constrained learning with a prior on symmetry. The [G]-mixed equivariant layer parametrizes a layer as a weighted sum of layers equivariant to different groups, permitting the learning of architectural weighting parameters.
Second, we implement these concepts within two algorithms for architectural optimization of partially-equivariant networks. Evolutionary Equivariance-Aware NAS (EquiNASE) utilizes the equivariance relaxation morphism in a greedy evolutionary algorithm, dynamically relaxing constraints throughout the training process. Differentiable Equivariance-Aware NAS (EquiNASD) implements [G]-mixed equivariant layers throughout a network to learn the appropriate approximate equivariance of each layer, in addition to their optimized weights, during training.
Finally, we analyze the proposed mechanisms via their respective NAS approaches in multiple image classification tasks, investigating how the dynamically learned approximate equivariance affects training and performance over baseline models and other approaches.
2 RELATED WORKS
Approximate equivariance Although no other works on approximate equivariance explicitly study architectural optimization, some approaches are architectural in nature. We compare our contributions with the most conceptually similar works to our knowledge.
The main contributions of Basu et al. (2021) and Agrawal & Ostrowski (2022) are similar to our proposed equivariant relaxation morphism. Basu et al. (2021) also utilizes subgroup decomposition but instead algorithmically builds up equivariances from smaller groups, while our work focuses on relaxing existing constraints. Agrawal & Ostrowski (2022) presents theoretical contributions towards network morphisms for group-invariant shallow neural networks: in comparison, our work focuses on deep group convolutional architectures and implements the morphism in a NAS algorithm.
The main contributions of Wang et al. (2022) and Finzi et al. (2021) are similar to our proposed [G]mixed equivariant layer. Wang et al. (2022) also uses a weighted sum of filters, but uses the same group for each filter and defines the weights over the domain of group elements. Finzi et al. (2021) uses an equivariant layer in parallel to a linear layer with weighted regularization, thus only using two layers in parallel and weighting them by regularization rather than parametrization. Mouli & Ribeiro (2021) also progressively relaxes equivariance constraints, but with regularized rather than parametrized constraints.
In more diverse approaches, Zhou et al. (2020) and Yeh et al. (2022) represent symmetry-inducing weight sharing via learnable matrices. Romero & Lohit (2022) and van der Ouderaa et al. (2022) learn partial or soft equivariances for each layer.
Neural architecture search Neural architecture search (NAS) aims to optimize both the architecture and its parameters for a given task. Liu et al. (2018) approaches this difficult bi-level optimization by creating a large super-network containing all possible elements and continuously relaxing the discrete architectural parameters to enable search by gradient descent. Other NAS approaches include evolutionary algorithms (Real et al., 2017; Lu et al., 2019; Elsken et al., 2017) and reinforcement learning (Zoph & Le, 2017), which search over discretely represented architectures.
3 BACKGROUND
We assume familiarity with group theory (see Appendix A.1). For discrete group G, the lth Gequivariant group convolutional layer (Cohen & Welling, 2016) of a group convolutional neural network (G-CNN) convolves1 the feature map f : G → RCl−1 from the previous layer with a filter with kernel size k represented as learnable parameters ψ : G→ RCl×Cl−1 . For each output channel
1We identify the correlation and convolution operators as they only differ where the inverse group element is placed and refer to both as ”convolution” throughout this work.
[G]-mixed equivariant layer
C4 equivariance C2 equivariance
A B
d ∈ [Cl], where [C] := {1, . . . , C}, and group element g ∈ G, the layer’s output is defined as:
[f ⋆G ψ]d(g) = ∑ h∈G Cl−1∑ c=1 fc(h)ψd,c(g −1h). (1)
The first layer is a special case: the input to the network needs to be lifted via this operation such that the output feature map of this layer has a domain of G. In the case of image data, an image x with C channels may be interpreted as a function x : Z2 → RC mapping each pixel in coordinate space to a real number for each channel, where the cth channel of x is referred to as xc. The input is x : Z2 → RC0 , so the layer is instead a lifting convolution:
[x ⋆G ψ]d(g) = ∑ y∈Z2 C0∑ c=1 xc(y)ψd,c(g −1y). (2)
We present our contributions in the group convolutional layer case, although similar claims apply for the lifting convolutional layer case.
4 TOWARDS ARCHITECTURAL OPTIMIZATION OVER SUBGROUPS
We propose two mechanisms to enable search over subgroups: the equivariance relaxation morphism and a [G]-mixed equivariant layer. The proposed morphism, depicted in Figure 1(A) and described in Section 4.1, changes the equivariance constraint from one group to another subgroup while preserving the learned weights of the initial group convolutional operator. The [G]-mixed equivariant layer, shown in Figure 1(B) and presented in Section 4.2, allows for a single layer to represent equivariance to multiple subgroups through a weighted sum.
4.1 EQUIVARIANCE RELAXATION MORPHISM
The equivariance relaxation morphism reparametrizes a G-equivariant group (or lifting) convolutional layer to operate over a subgroup of G, partially removing weight-sharing constraints from the parameter space while maintaining the functionality of the layer.
Let G′ ≤ G be a subgroup of G such that G′ \ G is finite. Let R be a system of representatives of the left quotient (including the identity element), so that G′ \ G = {G′r | r ∈ R} , where G′r := {g′r | g′ ∈ G′} . Given a G-equivariant group convolutional layer with feature map f and filter ψ, we define the relaxed feature map f̃ : G′ → R(Cl−1×|R|) and relaxed filter ψ̃ : G′ → R(Cl×|R|)×(Cl−1×|R|) as follows. For c ∈ [Cl−1], s, t ∈ R, d ∈ [Cl]:
f̃(c,s)(g ′) := fc(g ′s), (3)
ψ̃(d,t),(c,s)(g ′) := ψd,c(t −1g′s). (4)
We define the equivariance relaxation morphism from G to G′ as the reparametrization of ψ as ψ̃ (Eq. 4) and reshaping of f as f̃ (Eq. 3). We will show that the new layer, [f̃ ⋆G′ ψ̃](d,t)(g′), is equivalent to [f ⋆G ψ]d(g′t) down to reshaping. Since the mapping G′ × R → G, (g′, t) 7→ g′t, is bijective, every g can uniquely be written as g = g′t with g′ ∈ G′ and t ∈ R. For g ∈ G, G′g ∈ G′ \G has a unique representative t ∈ R with G′g = G′t, and g′ := gt−1 ∈ G′. Similarly, h ∈ G may be written as h = h′s with unique h′ ∈ G′ and s ∈ R. With these preliminaries, we get:
[f ⋆G ψ]d(g ′t) = [f ⋆G ψ]d(g) (5)
= ∑ h∈G Cl−1∑ c=1 fc(h)ψd,c(g −1h), (6) = ∑
h′∈G′ ∑ s∈R Cl−1∑ c=1 fc(h ′s)ψd,c(t −1g′−1h′s), (7)
= ∑
h′∈G′ Cl−1∑ c=1 ∑ s∈R f̃(c,s)(h ′)ψ̃(d,t),(c,s)(g ′−1h′), (8)
= [ f̃ ⋆G′ ψ̃ ] (d,t) (g′), (9)
which shows the claim. Thus, the convolution of f̃ with ψ̃ is equivariant to G but parametrized as a G′-equivariant group convolutional layer, where the representatives are expanded into independent channels. This morphism can be viewed as initializing a G′-equivariant layer with a pre-trained prior of equivariance to G, maintaining any previous training.
Standard convolutional layers are a special case of group-equivariant layers, where the group is translational symmetry over pixel space. Regular group convolutions are often implemented by relaxation to the translational symmetry group by expanding the filter via the appropriate group actions, allowing a standard convolution implementation from a deep learning library to be used. The equivariance relaxation morphism generalizes this concept to any subgroup. This, as well as how the equivariance relaxation morphism is implemented, is discussed further in Appendix B.
4.2 [G]-MIXED EQUIVARIANT LAYER
Towards learning equivariance, we additionally propose partial equivariance via a mixture of layers, each constrained to equivariance to different groups, applied in parallel to the same input then combined via a weighted sum. The equivariance relaxation morphism provides a mapping of group elements between a group and a subgroup. For a set of groups [G], such as a subgroup lattice of some group G, we define a [G]-mixed equivariant layer as:[
f⋆̂[G][ψ] ] (d,t) (g) = ∑
G∈[G]
zG [ f ⋆G′ ψ̃G ] (d,t) (g) (10)
= f ⋆G′ ∑ G∈[G] zGψ̃G (d,t) (g), (11)
where each element zG of [z] := {zG|G ∈ [G]} is an architectural weighting parameter such that∑ G∈[G] zG = 1, G ′ is a subgroup of all groups in [G], each element ψG of [ψ] is a filter with a domain of G, and ψ̃G is the transformation of ψG from a domain of G to G′ as defined in Equation 4. Thus, the layer is parametrized by [ψ] and [z], computing a weighted sum of operations that are equivariant to different groups of [G]. The layer may be equivalently computed by convolution of the input with the weighted sum of transformed filters, shown in Equation 11. We provide further implementation details in Appendix B.
5 EQUIVARIANCE-AWARE NEURAL ARCHITECTURE ALGORITHMS
We present two NAS methods that utilize the presented mechanisms for discovering appropriate equivariance during training: Evolutionary Equivariance-Aware NAS (EquiNASE) and Differen-
Algorithm 1 Evolutionary equivariance-aware neural architecture search. procedure EQUINASE(Initial symmetry group G)
Initialize population with a G-equivariant group convolutional network. for each generation do
for each network in population do Add children of network with relaxed equivariance constraints into population.
for each network in population do Partially train network on dataset.
Select Pareto-efficient and high accuracy networks as new population. return population
tiable Equivariance-Aware NAS (EquiNASD). Both methods optimize an architecture while learning weights, yielding a final trained network adapted to equivariances present in the training data. However, they differ in NAS paradigm and approximate equivariance representation: EquiNASE , in Section 5.1, searches for networks composed of layers each fully equivariant to possibly different groups, while EquiNASD, in Section 5.2, searches for smooth mixtures of equivariant layers.
5.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
Towards finding the optimal full equivariance per layer, the equivariance relaxation morphism presented in Section 4.1 is applied as the genetic operator in an evolutionary hill-climbing algorithm. The Evolutionary Equivariance-Aware NAS (EquiNASE) algorithm, given in Algorithm 1, is similar to other evolutionary NAS methods such as Elsken et al. (2017) with pareto selection as in Falanti et al. (2022). A population of networks, which starts with an individual with all layers equivariant to the largest possible group, undergoes mutation via equivariance relaxation and selection based on accuracy and parameter count to optimize neural architecture while learning network parameters. See Appendix A.2 for further background on evolutionary NAS.
In each generation, candidate networks are evaluated based on maximizing validation accuracy and minimizing parameter count: the pareto-dominant individuals with highest accuracy are kept, then additional high-accuracy individuals are added if necessary until the desired parent population size is reached. Offspring are generated from each parent separately by mutation using the relaxation morphism. This preserves the weights of the parametrized equivariance during mutation, allowing for the continuous training of networks over evolution by inheritance from parent individuals. Specifically, mutation reduces a single layer’s parametrized equivariance to a subgroup within the constraint that each layer has parametrized equivariance to a subgroup of all preceding layers. This constraint yields local equivariance properties for the network, as shown in Weiler & Cesa (2019) and Elsayed et al. (2020) to be empirically favorable in image classification tasks. The resulting individuals are each trained independently for a given training time, and then this process repeats.
The second objective of minimizing parameter count is intended to advance efficient networks, such as those with large symmetry groups. Accuracy-based selection alone would necessarily prefer larger networks as mutation via the equivariance relaxation morphism results in two networks with identical performance but different size, the relaxed network having more parameters, until training; potentially short-term increases in validation accuracy after training would then result in the selection of individuals with more parameters. Thus, the proposed strategy of selecting both paretodominant and high-accuracy individuals is intended to maintain a diverse yet efficient population without succumbing to overly greedy selections too early.
5.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
In a contrasting paradigm, the [G]-mixed equivariant layer presented in Section 4.2 allows for smoothly searching across a spectrum of equivariance for each layer via a differentiable NAS algorithm. Our Differentiable Equivariance-Aware NAS (EquiNASD) algorithm, defined in Algorithm 2, is inspired by DARTS (Liu et al., 2018) with significant changes detailed in the following paragraphs. EquiNASD simplifies the bilevel optimization of the architecture weighting parameters Z and filter weights Ψ into alternating independent updates, computing the gradient update for Z with the current, rather than optimal, Ψ for the current architecture encoded by Z, to boost search efficiency with minimal performance loss compared to higher order approximations (Liu et al., 2018).
Algorithm 2 Differentiable equivariance-aware neural architecture search. procedure EQUINASD(Set of groups [G])
Initialize network with [G]-mixed equivariant layers, parametrized by Ψ and Z. while not converged do
Update Z by ∇ZL(Ψ, Z). Update Ψ by ∇ΨL(Ψ, Z).
return trained network
In most differentiable NAS search spaces, the desired output architecture is discretized to select a subset of architectural options within constraints, then the weights are re-initialized and trained within the static architecture. In our formulation, this is not necessary as any mixed operation can be equivalently expressed as a single layer equivariant to any group G′ that is a common subgroup to all groups of the mixed operation (Eq. 11): in our experimental case, this is a standard translation-equivariant convolutional layer, so the final model can be equivalently expressed as a standard convolutional model with encoded partial equivariance. Thus, the final optimized architecture and trained weights are output from the single search process. We explore the standard NAS paradigm, where weights are reinitialized and trained in the final static architecture, in Appendix D.
In order to enforce that the scaling of each filter does not confound the architecture weighting parameters, we use the weight normalizing reparametrization (Salimans & Kingma, 2016) and do not update the scalar norm parameter of each filter after initialization.
We do not use disjoint datasets for updating Ψ and Z, but rather draw one batch for Ψ and another for Z independently and randomly from the same training split. This allows for a standard dataset split and to use the validation set for hyperparameter tuning.
These two NAS approaches present adaptations of two standard types of NAS, evolutionary and differentiable, to the search for optimal partial equivariance. We next study empirically the two EquiNAS methods on three datasets, one with known rotational symmetry and two with unknown but visually significant rotational and reflectional symmetry.
6 EXPERIMENTS
We focus on the regular representation of groups and show experiments with reflectional and up to 4-fold rotational symmetry groups applied to image classification tasks. Examples of symmetry groups acting on pixel space, which corresponds to Z2, include T (2), which consists of discrete translations in both dimensions; the cyclical groups Cn, which consist of n-fold rotations; and the dihedral groupsDn, which consist of reflections with n-fold rotations, where n ∈ {1, 2, 4} for exact symmetry without interpolation. The p4 group consists of discrete translations and multiples of 90◦ rotations and may be represented as T (2) ⋊ C4. The p4m group consists of discrete translations, reflections, and multiples of 90◦ rotations and may be represented as T (2)⋊D4. As standard convolutional layers are already equivariant to T (2), we refer to layers also equivariant to n-fold rotations with or without reflections asDn orCn-equivariant, respectively. So, aC1 equivariant convolutional layer is a standard translation-equivariant convolutional layer. We use {C1, D1, C2, D2, C4, D4} as the set of potential groups for mutation in EquiNASE and as [G] in EquiNASD.
We present experiments on image classification for a variety of datasets. The Rotated MNIST dataset (Larochelle et al., 2007, RotMNIST) is a version of the MNIST handwritten digit dataset but with the images rotated by any angle. This task serves as a simple investigational study with known symmetry, while the following two tasks are more realistic and complex. The Galaxy10 DECals dataset (Leung & Bovy, 2019, Galaxy10) contains galaxy images in 10 broad categories. The ISIC 2019 dataset (Codella et al., 2018; Tschandl et al., 2018; Combalia et al., 2019, ISIC) contains dermascopic images of 8 types of skin cancer plus a null class. For Galaxy10 and ISIC, we downsample the images to 64 × 64 due to computational constraints, which adds notable difficulty to the tasks. These tasks exhibit varying levels of rotational and reflectional symmetry, motivating architectural optimization to determine the most effective application of equivariance constraints.
Across all experiments, the architectures are designed to have consistent channel dimensions once expanded to a standard translation-equivariant convolutional layer for each layer across models. Thus, constrained equivariance to a larger symmetry group results in fewer learnable parameters. A
layer constrained to C4 equivariance has |C4 \D4| = 2 times as many independent channels and as many parameters as a layer constrained toD4 equivariance. This is a notably different paradigm than other works that equate parameter counts across architectures with different equivariance properties.
As baseline comparisons, we train and test G-CNNs with static architectures. In addition to the static baselines, we re-implement the residual pathway priors (RPP) approach by Finzi et al. (2021) as a C1 equivariant layer with regularization in parallel with a D4 equivariant convolutional layer.
Further experiment details such as architecture details and other hyperparameters are in Appendix C. For each paradigm of experiments, we present results in the following subsections, with general discussion in Section 7. Additional ablation and random search baselines are in Appendix D.
6.1 EVOLUTIONARY EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 1. The advantages of equivariance search methods are most apparent in the Galaxy10 benchmark. While EquiNASE outperforms most baselines on RotMNIST and all baselines on ISIC, it performs similarly to the D4 baseline on both tasks, and some of its final architectures match that baseline. However, the D4 baseline fails at the Galaxy10 task, demonstrating that the same equivariant architecture cannot always be naively applied. Both search methods, EquiNASE and RPP, outperform all baseline models on Galaxy10, and by a large margin for EquiNASE.
The evolutionary progress on RotMNIST is shown in Figure 2: the selected population maintains a fully equivariant network in every generation. The final selected population originates from two main lineages, one staying fully equivariant until the last generations and the other diverging from
the fully equivariant network midway through, showing that training with dynamically constrained parametrizations can produce performant models.
In addition to the normally initialized static baselines, we also train and test baselines that are initialized with priors to larger symmetry groups. These are implemented by initializing all layers to be constrained to the prior symmetry group, then using the equivariance relaxation morphism on each layer. EquiNASE searches for relaxation schedules that yield trained priors on equivariance, while these additional baselines yield untrained priors. The results in Table 1 show that the C1-equivariant networks generally improve with either equivariance prior, while the C4-equivariant networks perform better with D4 equivariance initialization only when the D4-constrained baselines also work well. The untrained prior methods do not perform as well as EquiNASE on RotMNIST, showing the benefit of investing some training time in the constrained equivariance. For the other tasks, the baselines with priors perform better than their constrained baseline counterparts.
6.2 DIFFERENTIABLE EQUIVARIANCE-AWARE NAS
The classification test errors are listed in Table 2. EquiNASD achieves better test accuracy than the other comparable methods on RotMNIST and Galaxy10. Due to differences in training protocol, only comparisons of relative rankings with Table 1 are possible: baseline methods’ accuracies follow similar ranking patterns, suggesting the benefit of general C4 equivariance for RotMNIST and Galaxy10 and of general D4 equivariance, including RPP, for ISIC. In this training protocol, notably with adaptive optimizers, the results are more consistent across methods and trials.
The dynamics of architecture weighting parameters for one exemplary trial per task are shown in Figure 3. The general trend of less constrained layers toward the output supports the conjecture of local equivariance being beneficial. However, this effect is less consistent for ISIC, the only task where EquiNASD did not exceed baselines, possibly indicating less inherent symmetry. As seen in
Appendix D, the final mixing of architectures for ISIC included a high level of C1, indicating that feature analysis outside of these symmetry groups is important for this benchmark.
Previous differentiable NAS works often used regularization of network size or even architecture weighting parameters themselves to encourage efficient architectures with a single highly weighted choice for each layer. However, our algorithm shows strong preference for a single, more equivariant and thus more expressive layer, notably to D4 or C4 equivariance, without such regularization. This may be due to the bilevel optimization dynamics: more constrained layers may be able to make more effective updates and thus become favorable compared to the lagging larger layers.
7 DISCUSSION
To our knowledge, this is the first work which proposes search methods for networks with dynamically constrained equivariance. Many NAS approaches separately search for an architecture and then reinitialize and retrain the weights, while our two proposed approaches find an optimal architecture with trained weights in a single process, notably with dynamically constrained weights. Gradient-based tuning (Maclaurin et al., 2015) has shown the benefit not only of optimizing hyperparameters but also of dynamically adjusting them during training (Lichtarge et al., 2022). Dynamically constrained weights can reap the benefits of specialization and generalization over the course of training.
Our two equivariance-aware NAS methods take distinct approaches: EquiNASE searches for architectures composed of discretely equivariant layers, while EquiNASD searches for continuous mixtures of equivariance within each layer. The EquiNASD algorithm avoids many known problems in differentiable NAS, such as the discretization gap that occurs when searching over a continuous relaxation of a discrete architectural search space (Xie et al., 2021), such as that of EquiNASE. Towards searching for discretely equivariant layers using the [G]-mixed equivariant layer, proximal NAS algorithms use techniques such as projection (Yao et al., 2020) and straight-through estimation (Li et al., 2022) to avoid the discretization gap and thus may be effective for this application.
EquiNASE is innately greedy: at each selection step, the population is evaluated by known current performance rather than unknown final performance, biased to architectures that train quickly. Networks with more equivariance constraints tend to learn faster, but equivariance relaxation may yield large gradients for newly unconstrained parameters and thus fast increases in performance. Further work could utilize metrics for final performance, such as proxies (White et al., 2022).
The theoretical and algorithmic contributions of this work are applicable beyond the image classification experiments presented to architectures with parametrized equivariance to any discrete group. We leave the extension to other group representations and domains as future work, such as the continuous case via careful analysis of the regular representation, provided G′\G is finite. Our proposed equivariance-aware NAS problems can be practically applied to find effective models or architectures for datasets with hypothesizable symmetry. EquiNASE may particularly work well on tasks that benefit from local equivariance, determined by analyzing the architecture weighting parameters from first applying the more efficient EquiNASD, as well as for finding good discrete architectures within which to retrain weights, based on the ablation and random comparisons. We thus recommend EquiNASD for practical applications if the final model is not restricted to discrete equivariance, in which case it can be used to inform design decisions for applying EquiNASE.
Beyond NAS, the equivariance relaxation morphism could be used in other applications such as fine-tuning and distillation. Layers of a pre-trained equivariant network could be expanded via equivariance relaxation before fine-tuning on the same or a new task. Similarly, a network could be distilled to a wider architecture for additional performance benefits.
Conclusion We present two mechanisms towards equivariance-aware architectural optimization, the equivariance relaxation morphism and the [G]-mixed equivariant layer, and two NAS algorithms that respectively implement these mechanisms evolutionarily as EquiNASE and differentiably as EquiNASD. We investigate how dynamic equivariance achieved by these algorithms affects the training and performance of models across multiple image classification tasks of varying complexity and assumed symmetry, demonstrating that these techniques can search for performant architectures and weights even on noisy tasks. The proposed mechanisms and algorithms are extendable beyond vision tasks to any architecture with parametrized equivariance to any discrete group.
A ADDITIONAL BACKGROUND
A.1 SYMMETRIES IN NEURAL NETWORKS
A symmetry of an object is a mapping of the object onto itself such that structure is preserved. A symmetry group G is a set of such mappings along with a binary operation · : G × G → G, known as the group product, that satisfies axioms for closure, associativity, the identity, and the inverse (Herstein, 2006). A group G acts on a set X via the group action . : G × X → X, (g, x) ↦ g.x, that satisfies axioms for identity and compatibility; X is called a G-space. Equivariance is the property of a mapping such that transformation of the input results in equivalent transformation of the output. Formally, a mapping h : X → Z between two G-spaces is G-equivariant if for all g ∈ G and x ∈ X we have: h(g.x) = g.h(x). For example, an image segmentation neural network should be T(2)-equivariant: shifting the input should result in the same shift in the output.
Invariance is a special case of equivariance, where the output of the function is completely independent of transformation of the input. Formally, a mapping h : X → Z is G-invariant if for all g ∈ G and x ∈ X we have: h(g.x) = h(x). For example, an image classification network should be T(2)-invariant: shifting the input should not change the output. Symmetries leave objects invariant.
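As a concrete sanity check of these definitions, the following minimal sketch numerically verifies the T(2)-equivariance of a convolution; circular padding is assumed so that pixel shifts are exact group actions on the grid.

```python
import torch
import torch.nn.functional as F

# conv(shift(x)) == shift(conv(x)) is exactly T(2)-equivariance.
x = torch.randn(1, 1, 8, 8)
w = torch.randn(1, 1, 3, 3)

def conv(z):
    # Circular padding makes the convolution commute with cyclic shifts.
    return F.conv2d(F.pad(z, (1, 1, 1, 1), mode="circular"), w)

def shift(z):
    return torch.roll(z, shifts=(2, 3), dims=(2, 3))

assert torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5)
```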
For two groups G and H with group products ·G and ·H respectively where H acts on G with group action ., the (outer) semi-direct product G ⋊ H of H acting on G is a group composed of the set of elements G ×H with group product (g, h) · (g′, h′) = (g ·G (h.g′), h ·H h′) and inverse (g, h)−1 = (h−1.g−1, h−1).
A subgroup H of G is a nonempty subset with the same group product that also fulfills the group axioms. Then, gH = {g · h|h ∈ H} and Hg = {h · g|h ∈ H} denote the left coset and right coset, respectively, of H with representative g.
A.2 NEURAL ARCHITECTURE SEARCH
Evolutionary algorithms are optimization methods inspired by evolution in biology, where individuals in a population compete with their phenotypic traits in order to pass on their genotypic traits to offspring. The population is the current collection of individuals. Each individual is an instance of the object to be optimized and has a genotype that is decoded into a phenotype. In this case, each individual is a neural network, with a genotype that encodes the parametrized equivariance group of each convolutional layer, represented as a vector of integers. The individual continues training on the task before competing against other individuals to be selected as a parent to mutate to generate the next population. Each parent itself is kept for the next population, as well as each valid child that is generated via the equivariance relaxation morphism, such that they are functionally equivalent to their parent at initialization (although with a different architecture) and thus have the same fitness before training. Each individual in this population is partially trained, such that these children diverge from their siblings and parent, so that the next set of parents may be selected and this process repeats.
Pareto dominance can be used in multi-objective optimization to select the next parent population. For a population of individuals each scored on n objectives, an individual is Pareto-optimal if no other individual is at least as good on every objective and strictly better on at least one. The Pareto front is the set of all Pareto-optimal individuals.
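A minimal sketch of Pareto-front selection, assuming each individual is scored on several objectives where higher is better (the function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def pareto_front(scores):
    """Return indices of Pareto-optimal individuals.

    scores: (n_individuals, n_objectives) array, higher is better.
    """
    front = []
    for i in range(scores.shape[0]):
        dominated = any(
            np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
            for j in range(scores.shape[0]) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Example: individual 2 is dominated by individual 0.
scores = np.array([[0.90, 0.80], [0.85, 0.95], [0.80, 0.70], [0.95, 0.60]])
print(pareto_front(scores))  # [0, 1, 3]
```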
B IMPLEMENTATION DETAILS
Group convolutional layers The implementation of regular group convolutional layers can be viewed as a special case of our proposed equivariance relaxation morphism. With the preliminaries given in Section 4.1 and the case of G′ = T(2), f̃ and ψ̃ are computed such that f̃_{(c,s)}(g′) := f_c(g′s) and ψ̃_{(d,t),(c,s)}(g′) := ψ_{d,c}(t^{-1}g′s) for each g′ ∈ T(2), c ∈ [C_{l−1}], s, t ∈ R, and d ∈ [C_l].
Let S_G := |R|. The learnable parameters of the G_l-equivariant lth layer with C_l output channels, corresponding to ψ, are stored as a tensor of size C_l × C_{l−1} × S_{G_l} × K_l × K_l. The filter transformation expands this filter tensor by performing the action of each r ∈ R on another copy of the tensor to expand its shape along a new dimension, resulting in a tensor of size C_l × S_{G_l} × C_{l−1} × S_{G_l} × K_l × K_l, which is reshaped to C_l S_{G_l} × C_{l−1} S_{G_l} × K_l × K_l. The input tensor to the lth layer, corresponding to f, has shape B × C_{l−1} × S_{G_l} × H_{l−1} × W_{l−1}, which is reshaped to B × C_{l−1} S_{G_l} × H_{l−1} × W_{l−1} and convolved with the expanded filter. The output of shape B × C_l S_{G_l} × H_l × W_l is reshaped to B × C_l × S_{G_l} × H_l × W_l.
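A minimal sketch of this filter expansion for a C4-equivariant layer (|C4| = 4), assuming a cyclic ordering of group channels; the roll direction depends on the indexing convention, so this is illustrative rather than the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def expand_c4_filter(psi):
    """Expand a C4 group-conv filter for use with a plain conv2d.

    psi: (C_out, C_in, S, K, K) with S = |C4| = 4 group channels.
    Returns: (C_out*4, C_in*4, K, K).
    """
    c_out, c_in, s, k, _ = psi.shape
    assert s == 4
    expanded = []
    for r in range(4):
        # Act with rotation r: rotate kernels spatially and
        # cyclically permute the input group channels.
        w = torch.rot90(psi, r, dims=(3, 4))
        w = torch.roll(w, shifts=r, dims=2)
        expanded.append(w)
    w = torch.stack(expanded, dim=1)           # (C_out, 4, C_in, S, K, K)
    return w.reshape(c_out * 4, c_in * s, k, k)

def c4_group_conv(f, psi):
    """f: (B, C_in, 4, H, W) group feature map."""
    b, c_in, s, h, w = f.shape
    out = F.conv2d(f.reshape(b, c_in * s, h, w),
                   expand_c4_filter(psi), padding=psi.shape[-1] // 2)
    c_out = psi.shape[0]
    return out.reshape(b, c_out, 4, out.shape[-2], out.shape[-1])
```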
Equivariance relaxation morphism To implement the equivariance relaxation morphism, the new filter tensor is initialized by applying Equation 4 such that the result of applying the preceding filter transformation is equivalent. Our implementation of group actions relies on group channel indexing to represent the order of group elements: to ensure this is consistent before and after the morphism, the appropriate reordering of the output and input channels of the expanded filter is applied upon expansion. The new filter tensor has a shape of C_l|R| × C_{l−1}|R| × S_{G_l}/|R| × K_l × K_l. The [G]-mixed equivariant layer is built on top of this implementation, also using proper input and output channel reordering between layers to ensure correct mixing of group channels.
C EXPERIMENTAL DETAILS
Architecture backbone For both EquiNASE and EquiNASD experiments, we use the same backbone architecture, such that the static baselines have the same architecture across experiments. The architectures have a lifting layer followed by 7 group convolutional layers, for a total of 8 convolutional layers. After 4 layers, the channel count doubles, from 16 to 32 for a D4 equivariant layer and scaling up for smaller symmetry group equivariance constraints. An average pooling layer is placed after every other layer for all architectures and additionally after the fifth and seventh convolutional layers for Galaxy10 and ISIC. After the final group convolutional layer is a group-dimension average pooling followed by two linear layers to the output dimension. Every convolutional and linear layer except the output layer is immediately followed by a batchnorm then a ReLU.
Hyperparameters The hyperparameters for each algorithm are selected such that baselines only differ by training time and optimizers. The learning rates were selected by grid search over baselines on RotMNIST. For all experiments in Section 6.1, we use a simple SGD optimizer with learning rate 0.1 to avoid confounding effects such as momentum during the morphism. For EquiNASE, the parent selection size is 5, the training time per generation is 0.5 epochs, and the number of generations is 50 for all tasks. Baselines were trained for the equivalent number of epochs. For all experiments in Section 6.2, we use separate Adam optimizers for Ψ and Z, each with a learning rate of 0.01 and otherwise default settings. The total training time is 100 epochs for RotMNIST and 50 epochs for Galaxy10 and ISIC. For RPP, we use a C1-equivariant layer with an L2 regularization parameter of 1e−6 in parallel with a D4-equivariant layer without regularization. For RotMNIST and MNIST, we use the standard training and test splits with a batch size of 64, reserving 10% of the training data as the validation set. For Galaxy10, we set aside 10% of the dataset as the test set, reserving 10% of the remaining training data as the validation set. For ISIC, we set aside 10% of the available training dataset as the test set, reserving 10% of the remaining data as the validation set and the rest as training data. For the latter two datasets, we resize the images to 64 × 64 due to computational constraints and use a batch size of 32. The validation sets were previously used for hyperparameter tuning: for the experimental results, they are only used in Section 6.1 as required by the EquiNASE algorithm. No data augmentation is performed, although the datasets are normalized.
D ADDITIONAL EXPERIMENTS AND FIGURES
To explore the symmetry discovery of EquiNASD, we apply it to six augmentations of the MNIST dataset (LeCun et al., 1998), where each augmentation applies the group actions of each group in {C1, C2, C4, D1, D2, D4} respectively. The resulting architecture dynamics of this experiment are shown in Figure 4, showing that less augmented versions still have some inherent symmetry, while more augmented versions induce stronger architectural changes towards layers that are equivariant to larger groups. Across all augmentations, earlier layers tended towards more constraints to equivariance.
As ablation studies and comparisons, we implement two kinds of random search for each NAS method. The first ablates smart architecture search: EquiNASE Random Select works as described in Algorithm 1 but with random parent selection (instead of Pareto-front selection), and EquiNASD Random Z works as described in Algorithm 2 but with random Z updates (instead of gradient descent) by shuffling Z gradients. The second is more akin to standard NAS random search: for the evolutionary paradigm, we train 30 randomly selected static architectures in the discrete architecture search space for the same training time and select the top 5 by validation accuracy, and for the differentiable paradigm, we train 25 randomly selected static architectures in the continuous architecture search space and select the top 5 by validation accuracy. 30 and 25 were respectively calculated to be approximately the same compute cost as the trials of EquiNASE and EquiNASD. These are labeled as “Random Static” for both the evolutionary and differentiable paradigms. Since Random Static trains static architectures while EquiNASE and EquiNASD dynamically search for both architectures and parameters, we take the best 5 architectures for each and retrain their parameters from scratch as in the standard NAS paradigm, labeled as EquiNASE Retrain and EquiNASD Retrain, respectively, for fair comparison to Random Static.
The results of these additional experiments are compared against those of our main algorithms and baselines in Figures 5 and 8.
Shown in Figure 5, EquiNASE outperforms EquiNASE Random Select, showing the benefit of using informed selection to guide the relaxation of equivariance constraints over training. Additionally, EquiNASE Retrain outperforms the Random Static baseline, showing that using compute in an informed search is more beneficial than just randomly searching the space of static architecture constraints.
EquiNASD finds competitive architectures on average and can find architectures which outperform baseline choices such as architectures fully equivariant to C1 or D4. From comparing EquiNASD Retrain to EquiNASD results in Figure 8, retraining a resulting architecture is not consistently better or worse than using the final weights from EquiNASD, showing that there is no disadvantage to training weights during the search, which avoids the additional cost of a post-search training step.
The use of randomized loss information in Random Static in Figure 8 shows that an informed search for architecture hyperparameters is generally useful. However, experiments on the ISIC benchmark demonstrate that the architecture search can be deceptive and that random loss information can outperform informed loss. This motivates exploration into the use of noise during the search process for architectural parameters.
A sampling of random continuous architectures in Random Static in Figure 8 shows that random architectures can perform well on problems where fully equivariant architectures like the D4 baseline already perform well. However, on the Galaxy10 problem, the D4 and C4 baselines have high variance, suggesting that a fully equivariant architecture is sub-optimal. On this benchmark, EquiNASD greatly outperforms a search of random architectures, demonstrating that EquiNASD can discover the appropriate equivariance for a specific dataset over fixed or randomly selected architectures.
The search space for EquiNASD is already well-formed for random architectures, compared to the discrete search space of EquiNASE . This is enabled by the [G]-mixed equivariant layer, which is a contribution of this work. Random non-mixed equivariant architectures do worse on all three benchmarks compared to random architectures which use the [G]-mixed equivariant layer. This can explain why the EquiNASD results are closer to random baselines than the EquiNASE results, as the search space permits easily finding the appropriate mix of equivariances compared to a discretized search space.
Our methods enable searching for both architecture and parameters concurrently in a single training process. This approach is more efficient than NAS approaches that only search for architectures, which require a retraining process within the resulting architecture for evaluation. However, these ablation and random search comparisons show that our algorithms may gain performance from adding a retraining phase, at the cost of further compute.

1. What is the main contribution of the paper regarding neural architectural search?
2. How does the proposed approach differ from existing methods in G-CNN literature?
3. What are the strengths and weaknesses of the proposed equivariance relaxation morphism?
4. Can the approach be extended to continuous groups?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
6. Are there any questions regarding the experimental results and their interpretation?

Summary Of The Paper
This paper proposes two methods for equivariance-aware neural architectural search (NAS). The motivation for this work stems from the fact that symmetries present in a dataset might often be imperfect or not explicitly known. The first method is based on the proposed equivariance relaxation morphism, a procedure that equivalently reparameterizes a group convolution layer to operate with equivariance constraints on a subgroup; an evolutionary NAS based on this morphism is later adopted. The other method, named the [G]-mixed equivariant layer, linearly combines layers equivariant to different groups, whose coefficients are searched through a differentiable NAS. Thoughtful experiments on three datasets are conducted to demonstrate the performance.
Strengths And Weaknesses
Strength
The paper is well motivated. Most of the G-CNN literature assumes that the symmetry group of the learning task is explicitly known and perfect. The paper instead uses an equivariance-aware NAS procedure to search for the optimal architecture for (potentially) unknown and imperfect group transformations.
The paper is mostly well-written.
The experiments are detailed and thorough.
Weakness
The equivariance relaxation morphism, which relies on reparameterizing the group convolution on a subgroup, seems to be limited to discrete groups. Can this be extended to continuous groups?
Moreover, the relaxation morphism is also restrictive in the sense that the number of unstructured channels (c_out) cannot be adjusted after changing to a subgroup. This inevitably increases the model size as the group size decreases, which is indeed the case for EquiNAS_D.
The clarity of the paper in section 5.1 would benefit from more explanation of technical terms such as population, Pareto selection, and Pareto front.
In MNIST_rot, the reviewer would like to see what happens when C8 and D8 are included in the picture.
If I understand correctly, the reported EquiNAS_E result on Galaxy10 in Table 1 might be misleading. Figure 4(a) shows that only one NAS output in the end achieves significantly smaller test error compared to all the remaining results.
Are the models involved in EquiNAS_D significantly larger compared to the baseline? After all, a linear combination of layers equivariant to different groups is taken. Does that mean the model is slow to train and (especially) test?
Clarity, Quality, Novelty And Reproducibility
The paper is mostly well-written and well-organized. The idea is novel. |
ICLR | Title
Event-former: A Self-supervised Learning Paradigm for Temporal Point Processes
Abstract
Self-supervision is one of the hallmarks of representation learning in the increasingly popular suite of foundation models including large language models such as BERT and GPT-3, but it has not been pursued in the context of multivariate event streams, to the best of our knowledge. We introduce a new paradigm for self-supervised learning for temporal point processes using a transformer encoder. Specifically, we design a novel pre-training strategy for the encoder where we not only mask random event epochs but also insert randomly sampled ‘void’ epochs where an event does not occur; this differs from the typical discrete-time pretext tasks such as word-masking in BERT but expands the effectiveness of masking to better capture continuous-time dynamics. The pre-trained model can subsequently be fine-tuned on a potentially much smaller event dataset, similar to other foundation models. We demonstrate the effectiveness of our proposed paradigm on the next-event prediction task using synthetic datasets and 3 real applications, observing a relative performance boost of as high as up to 15% compared to state-of-the art models.
1 INTRODUCTION
Transfer learning occurs when a model is pre-trained on a task, such as classification on a large labelled dataset such as ImageNet, and the model’s ‘knowledge’ is then applied to another task, such as classification on medical images. In the current era of AI, transfer in domains such as natural language processing and image processing often leverages self-supervised learning, where pre-training for representation learning is done using unlabeled data. Although the fundamental ideas of transfer are not new, there is a clear emerging paradigm around foundation models (Bommasani et al., 2021), such as BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020), which are trained with diverse unlabeled data at scale using self-supervision. These pre-trained models are then fine-tuned and adapted to different downstream tasks that respectively come with limited labeled data. Recent progress has been possible primarily due to improvements in hardware, development of the attention mechanism (Vaswani et al., 2017), and availability of substantial unlabeled training data.
We extend and pursue self-supervised learning in the context of multivariate event streams, i.e. data involving irregular occurrences of different types of events. Event stream datasets are widely available across domains, for instance in the form of social network interactions, customer transactions, system logs, financial events, health episodes, etc. It is well known that temporal point processes provide a sound mathematical framework for modeling such datasets (Daley & Jones, 2003). In this paper, we introduce a new paradigm for self-supervised learning for temporal point processes using a transformer encoder. Although self-supervised learning has recently been explored for classical time series data (Zerveas et al., 2021), to the best of our knowledge, self-supervision has not yet been explored in the context of point processes.
Neural models for temporal point processes (e.g. Du et al., 2016; Mei & Eisner, 2016; Xiao et al., 2017) have advanced the state of the art in event modeling, particularly for the task of event prediction. The typical approach in this line of work is to train a neural network on a large amount of event data. Our proposed paradigm differs from current standard practices by taking a transfer learning approach analogous to foundation models: to first pre-train a neural model on a (potentially) large event dataset and then fine-tune the model for prediction on a limited event dataset.
Transfer learning with event models has many potential applications. For instance, there may be abundant data from electronic health records containing information around a particular patient population; this data could potentially be leveraged for another population whose data is either unavailable or harder to obtain. This is an issue relevant to health equity since there may be data-related concerns for some underrepresented populations. Similarly, financial event data from an industrial sector could potentially be transferred to another. Electronic commerce is yet another illustrative domain where transfer learning techniques may help to transfer purchase behaviors across a large pool of user populations and/or product types.
Although there is some related work around multi-task learning with event streams, such as through deploying hierarchical Gaussian process models (Lian et al., 2015) or time-scale graphical event models (Monvoisin & Leray, 2019), this line of research typically considers learning by pooling together disparate data from the same population. In contrast, we tackle the more ambitious effort of transferring from one or multiple event datasets to another. Specifically, we consider the typical setting of homogeneous transfer learning (Zhuang et al., 2021), where all datasets that get pooled for pre-training involve the same set of event types. This allows for potentially leveraging different datasets even though there may be realistic variations with respect to parametric or structural dependencies present in the corresponding event streams across each dataset.
Contributions: We make the following major contributions:
• We introduce a self-supervised paradigm for transfer learning in temporal point processes. A crucial innovation is to explicitly incorporate information about the absence of events, which improves the modeling of temporal dynamics without burdening training efficiency.
• We propose a masked event model, which is a new way to derive a pretext task for selfsupervision targets in transformer models for event streams.
• Our empirical evaluation demonstrates improved transfer learning performance for event prediction on synthetic and real datasets relative to state of the art transformer event models.
2 BACKGROUND AND RELATED WORK
2.1 TEMPORAL POINT PROCESSES
Multivariate temporal point processes (MTPP) are elegant mathematical models for event streams where event types/labels from some discrete set occur in continuous time (Daley & Jones, 2003). A multi-dimensional MTPP generates sequences with timestamps and associated labels of the form S = {(t_i, y_i)}_{i=1}^{n}, where t_i is the time of occurrence of the ith event and y_i is its label. The cardinality of the label set L is M. A strictly temporally ordered event stream assumes events observed within a period [0, T], with t_i ∈ [0, T] for all i ∈ {1, 2, ..., n}. MTPPs are characterized by conditional intensity functions for each label representing the rate at which it occurs at any time t: $\lambda_e(t) = \lim_{\Delta t \to 0} \frac{\mathbb{E}[N_e(t+\Delta t \mid h_t) - N_e(t \mid h_t)]}{\Delta t}$, where N_e(t | h_t) counts the number of occurrences of label e prior to t, conditioned on historical occurrences (or simply history) h_t.
Several MTPP models such as multivariate versions of the classic Hawkes process (Hawkes, 1971) and piecewise-constant models (Gunawardana & Meek, 2016; Bhattacharjya et al., 2018) assume some parametric form of the conditional intensity function. Neural MTPPs are more recent variants that capture the underlying dynamics using neural networks. Recent years have witnessed rising popularity of neural MTPPs due to their state-of-the-art performance on benchmark datasets for predictive tasks (Du et al., 2016; Mei & Eisner, 2016; Xiao et al., 2017; Omi et al., 2019; Shchur et al., 2019; Zuo et al., 2020). A common training objective in neural MTPPs involves minimizing the negative log-likelihood. The log-likelihood of observing a sequence S is the sum of log-likelihood of events and non-events and can be computed as:
$$\log p(S) = \sum_{i=1}^{n}\sum_{e=1}^{M} \log \lambda_e(t_i) \;-\; \int_{0}^{T}\sum_{e=1}^{M}\lambda_e(t)\,dt \qquad (1)$$
Many of these neural MTPPs assume some form of evolution dynamics between events in order to compute the second term in Eq. 1, such as recurrent neural network (RNN) evolution (Xiao et al.,
2017), exponential decay (Mei & Eisner, 2016), intensity-free modeling of the integral (Omi et al., 2019), or the usage of explicit epochs indicating absence of events (Gao et al., 2020).
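To make Eq. 1 concrete, the following is a minimal sketch that evaluates it for an exponential-kernel multivariate Hawkes process, approximating the integral term by Monte Carlo; the parameter names mu, alpha and beta are illustrative, not taken from any of the cited implementations.

```python
import numpy as np

def hawkes_exp_loglik(times, labels, mu, alpha, beta, T, n_mc=2000):
    """Monte Carlo evaluation of Eq. 1 for a Hawkes process with intensity
    lambda_e(t) = mu[e] + sum_{t_i < t} alpha[e, y_i] * beta * exp(-beta*(t - t_i))."""
    M = len(mu)

    def intensity(t, e):
        past = times < t
        return mu[e] + np.sum(alpha[e, labels[past]]
                              * beta * np.exp(-beta * (t - times[past])))

    # First term of Eq. 1: log-intensity evaluated at the observed events.
    ll = sum(np.log(intensity(t, e)) for t, e in zip(times, labels))
    # Second term: Monte Carlo estimate of the total intensity integral.
    grid = np.random.uniform(0.0, T, n_mc)
    integral = T * np.mean([sum(intensity(t, e) for e in range(M))
                            for t in grid])
    return ll - integral
```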
2.2 TRANSFORMERS FOR EVENT DATA
Attention (Xiao et al., 2019) and transformer-based event models have shown promising results in recent years, including the self-attentive Hawkes process (SAHP) (Zhang et al., 2020), transformer Hawkes process (THP) (Zuo et al., 2020) and attentive neural point process (ANPP) (Gu, 2021). The self-attention mechanism, in our context, relates different event instances of a single stream in order to compute a representation of the stream. The architecture of transformers for MTPPs generally consists of an embedding layer and a self-attention layer. In the transformer Hawkes process (THP) (Zuo et al., 2020), for example, the embedding layer includes a time embedding and event-type embedding. Time embedding is achieved through:
$$[z(t_j)]_i = \begin{cases} \cos\big(t_j / 10000^{\frac{i-1}{d}}\big) & \text{if } i \text{ is odd} \\ \sin\big(t_j / 10000^{\frac{i}{d}}\big) & \text{if } i \text{ is even} \end{cases} \qquad (2)$$
where t_j is a timestamp and d is the dimension of the encoding. Time embeddings and one-hot encoded types are combined to form the embedded input X. For a sequence S = {(t_i, y_i)}_{i=1}^{L}, the time embedding z_i for each instance is specified in Eq. 2, and for the entire sequence of length L the embedding is Z ∈ R^{d×L}. Type embeddings are obtained through the product of a trainable embedding matrix U ∈ R^{d×M} and the one-hot encoded type vectors y_i, i.e. X = (UY + Z)^T where Y = [y_1, y_2, ..., y_L]. Q, K, V are the query, key and value matrices; they are linear transformations of X, i.e. Q = XW^Q, K = XW^K, V = XW^V, where W^Q, W^K, W^V are trainable weights. The attention output C is computed by the following:
$$C = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{M_K}}\right)V = A_s V \qquad (3)$$

where A_s denotes the attention score matrix. The output C is then fed into a pointwise feed-forward neural network (FFN) (commonly with a residual connection) to learn a high-level representation of the sequence for modeling the conditional intensity functions.
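A minimal sketch of the time embedding of Eq. 2 (dimension indexing starts at 1, matching the odd/even cases above):

```python
import torch

def time_embedding(t, d):
    """Sinusoidal time embedding of Eq. 2.

    t: (L,) tensor of timestamps; returns (L, d) embeddings.
    """
    i = torch.arange(1, d + 1, dtype=torch.float32)   # dimensions 1..d
    denom = torch.where(i % 2 == 1,
                        10000 ** ((i - 1) / d),       # odd i -> cos branch
                        10000 ** (i / d))             # even i -> sin branch
    angles = t.unsqueeze(-1) / denom                  # (L, d)
    return torch.where(i % 2 == 1, torch.cos(angles), torch.sin(angles))
```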
2.3 SELF-SUPERVISION FOR SEQUENCE DATA
Sequence models such as RNNs have achieved much success in various applications, but more recent methods typically rely on transformer architectures (Vaswani et al., 2017) and the attention mechanism, especially in natural language processing (NLP) applications. With fine-tuning on downstream tasks, these pre-trained models lead to sizable improvements over the previous state of the art. However, large-scale transformers are also bulky and resource-hungry, typically with billions of parameters (Brown et al., 2020), and cost millions of USD to train (Floridi & Chiriatti, 2020).
Self-supervision is typically achieved in sequence models by deriving an effective pretext task that is trained through supervised learning with a masking strategy for indicating self-supervision targets. For example, in BERT, about 15% of the words are randomly masked using an independent Bernoulli model of masking, and replaced with a new [MASK] label or a random word. In some recent work on self-supervision for time series data (Zerveas et al., 2021), the masking is done to ensure longer lengths of masked values, all replaced with the value 0, to get geometrically distributed run-lengths of masked values. Here we introduce a novel pretext task with a masking strategy specially tailored for asynchronous event data in continuous time. As we show later through experimental ablation studies, straightforward application of prior discrete-time sequence-based masking strategies proves inadequate for event data, because the intensity rates of events that represent continuous-time dynamics may vary in general between any two consecutively observed events. This aspect distinguishes event streams from discrete-time data such as time series and has not been addressed by masking in standard temporal transformers.
3 A SELF-SUPERVISED LEARNING PARADIGM
We introduce a self-supervised modeling paradigm with a transformer-based architecture for multivariate event streams that we refer to as Event-former. In particular, we present a novel pretext
training task specific to event data to learn a suitable representation for event streams. Such a representation can then be used by a small feed forward network for fine-tuning on a sequential next event prediction task. A high-level figurative scheme is shown in Figure 1; we provide details in the subsections to follow.
Our proposed pre-training paradigm from Figure 1 (left) involves three major aspects that distinguish it from prior work: 1) injecting void events (which are formalized in the next paragraph) to improve the representation learning of event dynamics in continuous time, 2) an effective masking strategy that uses both positional and temporal encoding on the above augmented event stream with void events, and 3) forcing the attention mechanism to adhere to the temporal order of events. We explain each aspect below before prescribing the full pre-training scheme, followed by a brief explanation of the fine-tuning procedure as depicted in Figure 1 (right).
3.1 VOID EVENTS IN TRANSFORMERS
Recall that an observed event stream is of the form S = {(t_i, y_i)}_{i=1}^{n}, where t_i and y_i are the ith event’s time stamp and label, respectively. We consider a modified stream where we inject a predetermined number of void events involving epochs where no event occurs; these are of the form (t′_i, null), where ‘null’ is a new label signifying absence of an event occurrence. The modified stream is denoted S′ = {(t′_i, y′_i)}_{i=1}^{n′}, where S ⊂ S′ and y′_i ∈ {L ∪ null} ∀i. The role of the void events is to provide additional information about the dynamics of the continuous-time process by explicitly indicating that no event occurs between two consecutively observed events.
Explicitly specifying selected epochs where events do not happen has been used previously in some related work; see for instance the notion of ‘fake epochs’ in Gao et al. (2020), which was originally developed for RNNs and helped boost the performance of a neural point process on a model fitting task. The main idea is that in point process models, the inter-event duration between two successive events is just as important as the event epochs themselves. This is seen from the integral terms in the conditional intensity based log-likelihood expression for event streams (see Eq. 1). Just as the internal hidden state of a recurrent neural network (such as an LSTM) only changes in discrete steps upon seeing the next token, the transformer based representations too behave in a similar manner. In reality, the conditional intensity rates can evolve continuously. Introducing the void epochs in the inter-event void space provides a convenient way to force the evolution of the transformer based internal representations inside the inter-event interval, and this in turn leads to improvement in both the pre-training and the fine-tuning steps for next event prediction related tasks. We thereby address an inherent shortcoming of transformers for event datasets in a non-parametric manner, i.e. without the need of specific parametric or process assumptions such as in the transformer Hawkes process (Zuo et al., 2020). However, using void events in transformers requires further adaptation, particularly during the training process where masking is also used; a sketch of the injection step is given below, and we next present an effective approach for masking with void event labels.
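A minimal sketch of the void-event injection, assuming the simple one-off pre-processing variant adopted in Section 3.4 (one void epoch sampled uniformly in each inter-event interval):

```python
import numpy as np

def inject_void_events(times, labels, null_label):
    """Insert one uniformly sampled void epoch between consecutive events.

    times: sorted (n,) array; labels: (n,) int array; null_label marks voids.
    """
    new_times, new_labels = [times[0]], [labels[0]]
    for i in range(1, len(times)):
        t_void = np.random.uniform(times[i - 1], times[i])
        new_times.extend([t_void, times[i]])
        new_labels.extend([null_label, labels[i]])
    return np.array(new_times), np.array(new_labels)
```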
3.2 MASKING STRATEGY & INPUT ENCODING
We consider a masked event model (MEM) pre-training device that operates on the modified event stream S′, where some events (t′i, y ′ i) are randomly masked for the task of prediction given history. Our masking approach is broadly similar to past work but specialized to our model. When an event epoch in the above expanded event stream S′ is masked, its timestamp is replaced with the value zero and its label is replaced with the value [MASK]. Further, for the choice of which tokens get masked, our model admits both the independent strategy used in BERT (Devlin et al., 2018) as well as the serially correlated temporal strategy used in time series (Zerveas et al., 2021). An ablation study shown later in Table 2 indicates that either of these strategies works well when combined with the proposed MEM model, and leads to improvements through transfer learning in both MSE and accuracy for predicting the next event time and label respectively. We also note that the results are worse without the proposed MEM model’s expansion of the event stream, i.e. without the injection of void event epochs. Our model differs from existing literature on masking in that both actual events and void events are admitted as candidates for masking. This combined approach leads to improved representations in experimental evaluation. The MEM model based representations are able to implicitly learn about both the event arrival rates due to masked learning with real epochs, as well as the inter-event empty spaces due to masked learning with void epochs.
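A minimal sketch of the masking step described above, assuming independent Bernoulli selection and the conventions of replacing a masked timestamp with zero and a masked label with a dedicated [MASK] id:

```python
import torch

def mask_events(times, labels, mask_id, frac=0.15):
    """Randomly mask a fraction of (observed or void) event epochs.

    Returns the corrupted inputs and the indices of the masked targets.
    """
    m = torch.rand(times.shape[0]) < frac      # independent Bernoulli masks
    masked_times = times.clone()
    masked_labels = labels.clone()
    masked_times[m] = 0.0                      # timestamp replaced by zero
    masked_labels[m] = mask_id                 # label replaced by [MASK]
    return masked_times, masked_labels, m.nonzero(as_tuple=True)[0]
```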
In addition to the choice of masking strategy in transformer models, one also needs an encoding for the position information in the input sequence so that the uniqueness of each location is retained to some extent. Traditional positional encoding (PE) (Vaswani et al., 2017) used in transformers is not sufficient by itself for event stream data because events are associated with irregular time stamps, unlike natural language sequences. Similarly, temporal encoding (TE), such as proposed in prior work (Zuo et al., 2020), also proves inadequate by itself in our setting because our masking strategy replaces the time stamps of masked events with zero. Note that this would render indistinguishable any two distinct events (i.e. with distinct time stamps) of the same event type in the input event stream. As seen in Figure 3 in the Appendix, using TE alone leads to early plateauing of the loss function, and this is often a telltale signature of poor end-task performance. To address this issue, we propose the combined encoding strategy of using PE and TE together. We also show that the combined encoding strategy preserves the universal approximation results of standard transformers.
Theorem 1 Transformers with combined PE and TE are universal approximators for any continuous sequence-to-sequence function with compact domain, i.e. they approximate any continuous function f : X → H with ϵ error w.r.t. the p-norm, where 1 ≤ p < ∞ and X, H ∈ R^{d×L}.
Please refer to the Appendix for a proof of the above result. Yun et al. (2019) establishes that transformers with PE are universal approximators for any continuous sequence-to-sequence function with compact support (Theorem 3 in their paper), which is applicable to language sequences. The aforementioned result, however, applies uniquely to event streams. More importantly, it separates two distinct event epoch encodings to (potentially) distinguish representations and establishes the predictability and learnability of a transformer model (with a certain structure) for the MEM.
3.3 TEMPORAL UPPER TRIANGULAR ATTENTION
While masked language models such as BERT leverage contextual information from both prior and post tokens of interest, here we only consider prior tokens. This is because our main task of interest is event prediction given only the past, as typical in most real-world prediction problems, which prohibits us from using post token context. We apply an upper triangular attention so that a current event epoch only attends to prior events. In MEM, any representation in pre-training as shown in Figure 1 (left) for a masked event (regardless of whether the event is observed or void) only attends to history in the past. With these pieces that define the MEM model, we next describe the pre-training and fine-tuning steps that respectively produce and exploit the self-supervised representations.
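A minimal sketch of the temporal upper triangular (causal) attention mask; the boolean convention below, where True entries are disallowed, matches torch.nn.MultiheadAttention's attn_mask:

```python
import torch

def causal_attention_mask(L):
    """Mask so that the epoch at position i attends only to positions <= i."""
    return torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)

print(causal_attention_mask(4))
```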
3.4 PRE-TRAINING SCHEME
Pre-training using the MEM is conducted by first randomly injecting void events into event streams, masking some of the events, and then computing a self-supervised loss determined by predicting the masked events. In this fashion, the MEM is trained to not only predict the time and label of observed events, but also try to be as accurate as possible at determining when events do not happen. In the most general setting, suppose that the sequence of time stamps for void events, denoted τ , is randomly generated from some distribution P. A special case of this random injection is when exactly 1 void event is uniformly generated between each pair of consecutive events in S to create modified event stream S′. After randomly selecting a pre-determined percentage of events to mask, the loss for the self-supervised prediction task can be computed as:
$$\mathcal{L} = \mathbb{E}_{\tau\sim P}\left[\mathcal{L}_{event}\big(\hat{t}'_m, \hat{y}'_m;\; t'^{*}_m, y'^{*}_m\big)\right] \qquad (4)$$
where m denotes the indices of the randomly selected masked events, similar conceptually to Devlin et al. (2018). The hat and star notation for t (y) refer to the model’s predicted time (label) and the ground truth time (label), respectively. Note that Eq. 4 will in practice be challenging to optimize, due to the stochastic objective and the additional computational complexity of sampling and inserting void events between every two consecutive events in every event stream in a batch when performing stochastic gradient descent. The time complexity for such an insertion during training is O(KL), where K is the number of event streams and L is the maximum length, by merging the two sorted lists.
To reduce the computational cost and improve efficiency, we propose a practical solution by adopting a simpler but just as effective sampling strategy for void events. Specifically, we only sample void events once from the original dataset as an approximation and then merge as a pre-processing step. Thus no additional computing cost occurs during training. Let N_M be the total number of masked event epochs. We use the following to measure the prediction loss for each masked event, whether it is observed or void:
$$\mathcal{L}_{event} = \frac{1}{N_M}\sum_{i=1}^{N_M}\Big[\mathrm{CE}\big(\mathrm{softmax}(H'_m W^y + b^y)_{i,:},\; y'^{*}_{m,i}\big) + \gamma\,\mathrm{MSE}\big((H'_m w^t + b^t)_i,\; t'^{*}_{m,i}\big)\Big] \qquad (5)$$

where H′_m is the masked high-level representation from the transformer model of a modified stream, and W^y ∈ R^{d×M}, b^y ∈ R^M, w^t ∈ R^d and b^t ∈ R are trainable weights and biases for the label-prediction cross entropy (CE) loss and the time-prediction mean square error (MSE). In addition, the index i in the above equation refers to a general masked event epoch, and i,: corresponds to the ith row of the output matrix. We use γ as the trade-off between the two loss terms. It is worth noting that we use only one hidden layer for masked event prediction; we avoid using deep feed-forward networks in order to force the transformer model to learn a high-quality representation H′_m that facilitates the fine-tuning process for downstream tasks.
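A minimal sketch of Eq. 5, assuming the masked representations are stacked as an (N_M, d) tensor (note that F.cross_entropy applies the softmax internally):

```python
import torch
import torch.nn.functional as F

def masked_event_loss(h_m, y_true, t_true, W_y, b_y, w_t, b_t, gamma=1.0):
    """Eq. (5): prediction loss over masked epochs.

    h_m: (N_M, d) masked representations; y_true: (N_M,) long labels
    (with 'null' as an extra class); t_true: (N_M,) float timestamps.
    """
    type_logits = h_m @ W_y + b_y               # (N_M, M) label scores
    time_pred = h_m @ w_t + b_t                 # (N_M,) time predictions
    ce = F.cross_entropy(type_logits, y_true)   # averages over N_M
    mse = F.mse_loss(time_pred, t_true)
    return ce + gamma * mse
```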
3.5 FINE-TUNING
After pre-training, the MEM can then be applied to model any new event sequence S and be further fine-tuned to obtain a better representation Ĥ_i. Note that during fine-tuning, we do not include void events, which simplifies the training steps and is compatible with any existing approach. Each learned representation is then fed into a small feed-forward neural network for downstream tasks involving event prediction. In other words, our model fine-tunes by consuming each individual event representation Ĥ_i and predicting the next label y_{i+1} as well as the time t_{i+1}. The power of this approach is primarily through the conversion of sequential prediction into tabular regression and classification. For an event dataset with K event streams, each of length n_k, the fine-tuning loss L_pred is the following:
$$\mathcal{L}_{pred} = \sum_{l=1}^{K}\sum_{i=1}^{n_k}\mathrm{CE}\big(\mathrm{softmax}(\mathrm{MLP}(\hat{H}^{l}_{i})),\; y^{l}_{i+1}\big) + \alpha\,\mathrm{MSE}\big(\mathrm{MLP}(\hat{H}^{l}_{i}),\; t^{l}_{i+1}\big) \qquad (6)$$
where α is a similar trade-off between cross entropy and mean square error. Regression and classification share the same multi-layer neural network (MLP) for computational efficiency in our setting.
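A minimal sketch of a shared fine-tuning head, assuming one plausible realization in which a single MLP emits M type logits plus one time output (the class name and sizes are illustrative):

```python
import torch.nn as nn

class FineTuneHead(nn.Module):
    """Shared MLP for next-type classification and next-time regression."""

    def __init__(self, d, n_types, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, n_types + 1),  # M type logits + 1 time output
        )

    def forward(self, h):                    # h: (..., d) representations
        out = self.mlp(h)
        return out[..., :-1], out[..., -1]   # (type logits, predicted time)
```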
4 EXPERIMENTS
4.1 BASELINES
We use the following baselines for experiments. To focus attention on the potential benefits of self-supervision rather than the choice of neural architecture, we replace non-transformer architectures in baselines with a suitable counterpart transformer.
Recurrent Marked Temporal Point Process (Du et al., 2016) and Event Recurrent Point Process (Xiao et al., 2017). We replace the RNNs in both originally proposed models with transformers. We note that the original implementation (see footnote 1) only predicts the very last event (t_n, y_n) given prior events; for a fair comparison, we therefore modify the code to evaluate the next inter-event time d_i, where d_i = t_i − t_{i−1} for i ∈ {2, 3, ..., n}. Lognormal Mixture (Shchur et al., 2019). We replace the RNN with a transformer and take the expectation of the learned mixture model for next inter-event prediction.
Transformer Hawkes Process (see footnote 2) (Zuo et al., 2020). This model is representative of the current state-of-the-art for event sequence modeling. It already involves a transformer architecture and therefore does not need any modification.
We use the following acronyms for the aforementioned models, where the prefix ‘T-’ clarifies that some of these are transformer-based extensions: T-RMTPP, T-ERPP, T-LNM and THP. Following Zuo et al. (2020), we evaluate model performance on next event time prediction with root mean square error and on next event label prediction with accuracy.
4.2 SYNTHETIC EXPERIMENTS
We conduct experiments using synthetic data generated from two representative parametric families of multivariate temporal point processes: multivariate Hawkes processes (Bacry et al., 2015) and proximal graphical event models (Bhattacharjya et al., 2018). We aim to pre-train a masked event model on a set of datasets and fine-tune the model on a different dataset for the event prediction task.
Hawkes-Exp Dynamics. We generate 400 samples each from 10-dimensional Hawkes’ process dynamics for 5 datasets (A, B, C, D, E) with different parameters and combine them to form a pre-training dataset. We further split each into train-dev sets 75-25 and use the dev set for hyperparameter selection. We also generate 5 folds of a dataset F with different parameters as the target; each fold contains 500 event sequences and is further split into train-dev-test 60-20-20 subsets. Final evaluation is performed on the test subsets.
PGEM Dynamics. We generate 500 samples each from the proximal graphical event model (PGEM) Bhattacharjya et al. (2018) generator with 4 datasets (A, B, C, D) of different parameters where each contains 5 event labels. We combine these to form the pre-training dataset. Similarly, we generate an additional 5 folds of a dataset E with different parameters as the target, each of which contains 500 event sequences. Each fold is further split into train-dev-test 60-20-20 subsets, and as before, final evaluation is performed on the test subsets.
Results. As shown in Table 1, Event-former achieves the best results for predicting both the next event time and event type as compared to all baselines. For the Hawkes-Exp generated data, it boosts prediction performance on average by 4-5% compared to the best baseline result; on PGEM the increase is around 1-3%. The benefit of our approach, along with its efficacy, is its efficiency. While we only pre-train once, a typical fine-tuning procedure in this study involves a 20-fold smaller network – ∼1M trainable parameters compared to, say, the THP model with ∼20M trainable parameters at its recommended setting. This suggests our model learns a suitable representation of the event dynamics, especially for the Hawkes process. Figure 2 shows the T-SNE projection of learned representations of event data generated by the Hawkes-Exp model datasets A, B, C, D and E onto the 2D plane. Each model generates a unique fragment segment that somewhat overlaps with another model. The linear and curvaceous segment pattern observed here is not uncommon for projecting time-series embeddings onto a 2D plane (Wong & Chung, 2019). This overlapping of the representations from pre-training data (A, B, C, D and E) with the target data F provides a visual explanation for how the various datasets compare in terms of the learned representation.

1 https://github.com/woshiyyya/ERPP-RMTPP
2 https://github.com/SimiaoZuo/Transformer-Hawkes-Process
4.2.1 ABLATIONAL MASKING EXPERIMENTS
A typical deployment of MEM involves 3 components: inserted random void epochs, random selection of masks, and the masking fraction. Our default setting is through the use of void events and uniformly randomly selecting 15% for masking during pre-training. We perform 3 ablation studies on the synthetic datasets: 1) void vs. no void events, 2) geometric vs. random mask, and 3) mask fraction. Ablation 1 evaluates the effect of injected random void epochs in MEM on prediction. As is clear from Table 2, we notice a drop in type accuracy and an increase in time-prediction RMSE for both cases; in particular, the predictive performance deteriorates to below the baselines on Hawkes-Exp. The injection of void epochs is justified for producing competitive results in random masking. Ablation 2 compares the impact of two masking strategies: geometric and random. We employ the former from Zerveas et al.
(2021), along with inserted void epochs. Geometric masks produce slightly deteriorated results particularly on PGEM data, suggesting masking consecutive segments may not aid in learning dynamics in the continuous-time setting. Ablation 3 compares the choice of fraction of randomly masked epochs. In general, we find no significant difference between 15% and 30% for event prediction.
4.3 REAL APPLICATIONS
Datasets. We perform transfer learning experiments on 3 real applications, as listed below. A descriptive summary of the 6 datasets used is shown in Table 3.
• Financial: Defi-Mainnet and Defi-Polygon are privately curated datasets involving user-level cryptocurrency transactions from the Aave website (see footnote 3). Mainnet and Polygon represent two different protocols/deployments on the platform. We test the algorithms on Mainnet after pre-training on Polygon.
3aave.com
Dataset | # classes | # seqs. | Avg. length | # events | Data Type
Defi-Mainnet | 6 | 20539 | 32 | 654844 | Financial
Defi-Polygon | 6 | 33597 | 85 | 2856453 | Financial
Electronics | 4 | 9993 | 20 | 195726 | E-Commerce
Cosmetics | 4 | 19301 | 39 | 752109 | E-Commerce
ACLED-India | 6 | 111 | 17 | 1934 | Political
ACLED-Bangladesh | 4 | 97 | 17 | 1697 | Political
Results. As demonstrated by its highest accuracy and lowest RMSE in Table 4, transfer learning with Event-former on these datasets consistently improves upon all baselines, and the improvement ranges from 3% to 15%. The most impressive improvement of 15% is on time prediction in Defi-Mainnet, where using a different protocol appears to be sufficient to help with learning the dynamics. This likely suggests that the dynamics of real applications have noticeable shared similarities that the state-of-the-art transformer model approaches are unable to exploit. In addition, we observe that the transfer performance in general increases with increasing size of pre-training data. As shown in Table 3, the 3 pairs of datasets have different amounts of data; the Defi datasets have the highest number of samples, while the ACLED datasets have the lowest number. This is an indication that Event-former may be able to generalize better with more pre-training data.
5 CONCLUSION
In this work, we propose a novel self-supervised paradigm for transfer learning in multivariate temporal point processes. We introduce the usage of void events for transformer architectures, which is unique in continuous-time event models, and design a masking strategy for predicting masked event epochs and void spaces in-between. We empirically demonstrate the potential of our approach using synthetic as well as various real-world datasets. In particular, the improvement of prediction performance is noticeably significant on transfer tasks over many existing competitive transformer-based approaches. While this study focuses on the homogeneous transfer setting, our approach could potentially be extended to other more complex transfer settings, such as out-of-domain heterogeneous transfer (Zhuang et al., 2021) with datasets that contain non-overlapping event labels. We leave these more complex cases to future work.
4 https://www.kaggle.com/datasets/mkechinov/ecommerce-events-history-in-electronics-store
5 https://www.kaggle.com/mkechinov/ecommerce-events-history-in-cosmetics-shop
6 https://www.kaggle.com/datasets/saimasharleen/acled-bangladesh
7 https://www.kaggle.com/datasets/shivkumarganesh/riots-in-india-19972022-acled-dataset-50k
A APPENDIX
A.1 SYNTHETIC GENERATORS
We generated datasets from the Hawkes process and the Proximal Graphical Event Model. We describe the parameters used in our experiments to generate event datasets.
Hawkes-Exp. We use a standard library to generate datasets from the Hawkes dynamics8. The parameters are the baseline rate, decay coefficient, adjacency (infectivity matrix) and end time. We describe these four parameters for models A, B, C, D, E and F in our study; a brief usage sketch of the library follows the listings below. A. baseline = [0.1097627 , 0.14303787, 0.12055268, 0.10897664, 0.08473096, 0.12917882, 0.08751744, 0.1783546 , 0.19273255, 0.0766883], decay = 2.5, infectivity = [[0.15037453, 0.10045448, 0.10789028, 0.17580114, 0.01349208, 0.01654871, 0.00384014, 0.1581418 , 0.14779747, 0.16524382], [0.1858717 , 0.1517864 , 0.08765006, 0.14824807, 0.02246419, 0.12154197, 0.02722749, 0.17942359, 0.0991161 , 0.07875789], [0.05024778, 0.14705235, 0.0866379 , 0.10796424, 0.0035688 , 0.11730922, 0.11625704, 0.11717598, 0.17924869, 0.12950002], [0.06828233, 0.08300669, 0.13250303, 0.01143879, 0.12664085, 0.12737611, 0.03995854, 0.02448733, 0.05991018, 0.0690806 ], [0.10829905, 0.0833048 , 0.18772458, 0.01938165, 0.03967254, 0.03063796, 0.12404668, 0.04810838, 0.0885677 , 0.04642443], [0.03019353, 0.02096386, 0.1246585 , 0.02624547, 0.03733743, 0.07003299, 0.15593352, 0.01844271, 0.1591532 , 0.01825224], [0.18546165, 0.08901222, 0.18551894, 0.11487999, 0.14041038, 0.00744305, 0.05371431, 0.02282927, 0.05624673, 0.02255028], [0.06039543, 0.07868212, 0.01218371, 0.13152315, 0.10761619, 0.05040616, 0.09938195, 0.01784238, 0.10939112, 0.1765038 ], [0.06050668, 0.1267631 , 0.02503273, 0.13605401, 0.0549677 , 0.03479404, 0.11139803, 0.00381908, 0.15744288, 0.00089182], [0.12873957, 0.05128336, 0.13963744, 0.18275114, 0.04724637, 0.10943116, 0.11244817, 0.10868939, 0.04237051, 0.18095826]] and end-time = 10.
B. Baseline = [8.34044009e-02, 1.44064899e-01, 2.28749635e-05, 6.04665145e-02, 2.93511782e-02, 1.84677190e-02, 3.72520423e-02, 6.91121454e-02, 7.93534948e-02, 1.07763347e-01], decay = 2.5, adjacency = [[0. , 0. , 0.03514877, 0.15096311, 0. , 0.11526461, 0.07174169, 0.09604815, 0.02413487, 0. ], [0. , 0. , 0. , 0. , 0.15066599, 0.15379789, 0.01462053, 0.00671417, 0. , 0. ], [0.01690747, 0.07239546, 0.16467728, 0. , 0.11894528, 0. , 0.11802102, 0.14348615, 0.00314406, 0.12896239], [0.17000181, 0. , 0. , 0. , 0. , 0. , 0. , 0.0504772 , 0.04947341, 0. ], [0. , 0.11670321, 0.03638242, 0. , 0. , 0.00917392, 0. , 0.0252251 , 0.10131151, 0.1203002 ], [0. , 0.07118317, 0.11937903, 0.07120436, 0. , 0. , 0. , 0.08851807, 0.16239168, 0.10083865], [0. , 0.02363421, 0.02394394, 0.1388041 , 0.06836732, 0.02842716, 0.15945428, 0. , 0. , 0.12481123], [0.15185513, 0. , 0.1290996 , 0. , 0. , 0.15401787, 0. , 0.16587219, 0.11405672, 0.10687992], [0. , 0. , 0.07734744, 0.09943488, 0.07016556, 0.04074891, 0. , 0. , 0.00049346, 0.10609756], [0.05615574, 0.09061013, 0.15230831, 0.06142066, 0.15619243, 0.10716606, 0.00271994, 0.15978585, 0.11877677, 0.17145652]] and end-time = 20.
C. baseline = [0.08719898, 0.00518525, 0.1099325 , 0.08706448, 0.08407356, 0.06606696, 0.04092973, 0.12385419, 0.05993093, 0.05336546], decay = 2.5, adjacency = [[0. , 0.09623737, 0. , 0. , 0. , 0.14283231, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0.14434229, 0.10548788, 0. , 0. ], [0. , 0. , 0.16178088, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0.14554646, 0. , 0. , 0. , 0. , 0.09531432, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.04899491], [0. , 0. , 0. , 0.07048057, 0. , 0.07546073, 0. , 0. , 0. , 0. ], [0.05697369, 0. , 0. , 0. , 0. , 0. , 0.11709833, 0. , 0.03100542, 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0.04016475, 0. , 0.10768499, 0.06297179]] and end-time = 20.
D. Baseline = [0.11015958, 0.14162956, 0.05818095, 0.10216552, 0.17858939, 0.17925862, 0.02511706, 0.04144858, 0.01029344, 0.08816197], decay = [[8., 9., 2., 7., 3., 3., 2., 4., 6., 9.], [2., 9., 8., 9., 2., 1., 6., 5., 2., 6.], [5., 8., 7., 1., 1., 3., 5., 6., 9., 9.], [8., 6., 2., 2., 2., 6., 6., 8., 5., 4.], [1., 1., 1., 1., 3., 3., 8., 1., 6., 1.], [2., 5., 2., 3., 3., 5., 9., 1., 7., 1.], [5., 2., 6., 2., 9., 9., 8., 1., 1., 2.], [8., 9., 8., 5., 1., 1., 5., 4., 1., 9.], [3., 8., 3., 2., 4., 3., 5., 2., 3., 3.], [8., 4., 5., 2., 7., 8., 2., 1., 1., 6.]], adjacency = [[0. , 0.14343731, 0. , 0.11247478, 0. , 0. , 0. , 0.09416725, 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15805746, 0. , 0.08262718], [0. , 0. , 0.03018515, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0.11090719, 0.08158544, 0. , 0.11109462, 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.14264684, 0. , 0. , 0.11786607, 0. , 0. , 0.01101593, 0. ], [0.04954495, 0. , 0. , 0. , 0. , 0.12385743, 0. , 0. , 0.02375575, 0.05345351], [0. , 0.14941748, 0.02618691, 0. , 0.13608937, 0. , 0.06263167, 0. , 0.04097688, 0.14101171], [0. , 0.11902986, 0. , 0.04889382, 0. , 0. , 0.01569298, 0.03678315, 0. , 0. ], [0.05359555, 0. , 0. , 0.09188512, 0. , 0.14255311, 0. , 0. , 0. , 0. ], [0.12792415, 0.05843994, 0.16156482, 0.11931973, 0. , 0.00774966, 0.00947755, 0. , 0. , 0. ]], and end-time = 10.
8https://x-datainitiative.github.io/tick/modules/generated/tick.hawkes.SimuHawkesExpKernels.html
E. Baseline = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2], decay = [[9., 4., 9., 9., 1., 6., 4., 6., 8., 7.], [1., 5., 8., 9., 2., 7., 3., 3., 2., 4.], [6., 9., 2., 9., 8., 9., 2., 1., 6., 5.], [2., 6., 5., 8., 7., 1., 1., 3., 5., 6.], [9., 9., 8., 6., 2., 2., 2., 6., 6., 8.], [5., 4., 1., 1., 1., 1., 3., 3., 8., 1.], [6., 1., 2., 5., 2., 3., 3., 5., 9., 1.], [7., 1., 5., 2., 6., 2., 9., 9., 8., 1.], [1., 2., 8., 9., 8., 5., 1., 1., 5., 4.], [1., 9., 3., 8., 3., 2., 4., 3., 5., 2.]], adjacency= [[0.02186539, 0.09356695, 0. , 0.16101355, 0.11527002, 0.09149395, 0. , 0. , 0.15672219, 0. ], [0. , 0. , 0.14241135, 0. , 0.11167029, 0. , 0. , 0. , 0.0934937 , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15692693, 0. ], [0.08203618, 0. , 0. , 0.02996925, 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.11011391, 0.08100189, 0. , 0.11029999, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0.14162654, 0. , 0. , 0.11702301, 0. , 0. , 0.01093714], [0. , 0.04919057, 0. , 0. , 0. , 0. , 0.12297152, 0. , 0. , 0.02358583], [0.05307117, 0. , 0.14834875, 0.02599961, 0. , 0.13511597, 0. , 0.06218368, 0. , 0.04068378], [0.1400031 , 0. , 0.11817848, 0. , 0.0485441 , 0. , 0. , 0.01558073, 0.03652006, 0. ], [0. , 0.05321219, 0. , 0. , 0.0912279 , 0. , 0.14153347, 0. , 0. , 0. ]] and end-time = 50.
F. Baseline = [0.21736198, 0.11134775, 0.16980704, 0.33791045, 0.00188754, 0.04862765, 0.26829963, 0.3303411 , 0.05468264, 0.23003733], decay = [[5., 1., 7., 3., 5., 2., 6., 4., 5., 5.], [4., 8., 2., 2., 8., 8., 1., 3., 4., 3.], [6., 9., 2., 1., 8., 7., 3., 1., 9., 3.], [6., 2., 9., 2., 6., 5., 3., 9., 4., 6.], [1., 4., 7., 4., 5., 8., 7., 4., 1., 5.], [5., 6., 8., 7., 7., 3., 5., 3., 8., 2.], [7., 7., 1., 8., 3., 4., 6., 5., 3., 5.], [4., 8., 1., 1., 6., 7., 7., 6., 7., 5.], [8., 4., 3., 4., 9., 8., 2., 6., 4., 1.], [7., 3., 4., 5., 9., 9., 6., 3., 8., 6.]], adjacency = [[0.15693854, 0.04896059, 0.0400508 , 0. , 0. , 0.13021228, 0. , 0.10699903, 0. , 0.15329807], [0.0784283 , 0. , 0. , 0. , 0. , 0. , 0.00310706, 0.0090892 , 0.07758874, 0. ], [0.01672489, 0. , 0. , 0.07851303, 0. , 0. , 0.12848331, 0.08859293, 0. , 0. ], [0.09984995, 0. , 0. , 0.10541925, 0. , 0.08032527, 0. , 0. , 0. , 0. ], [0.14642469, 0.06629365, 0. , 0. , 0. , 0. , 0.11891738, 0.04166225, 0.09808829, 0.17638655], [0.00976324, 0.1100343 , 0.02003261, 0. , 0. , 0.05993539, 0.09739541, 0. , 0. , 0. ], [0. , 0. , 0. , 0.04672133, 0.16916 , 0. , 0.17341419, 0.12078975, 0.14441602, 0. ], [0. , 0.17305542, 0.06927975, 0. , 0.03408974, 0. , 0.08457162, 0.03787486, 0.10863292, 0. ], [0.07186225, 0.05760593, 0. , 0. , 0.08042031, 0.04403479, 0.1033595 , 0.17046747, 0. , 0.05083523], [0. , 0.10029222, 0. , 0.1022067 , 0. , 0. , 0.0588527 , 0. , 0.03530513, 0. ]], and end-time = 40.
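As referenced above, the following is a minimal sketch of drawing one event stream from these dynamics with the tick library (footnote 8). The adjacency here is a random placeholder; in our experiments it is replaced by the matrices listed above.

```python
import numpy as np
from tick.hawkes import SimuHawkesExpKernels

rng = np.random.RandomState(42)
baseline = np.full(10, 0.2)                        # e.g., model E's baseline
decays = np.full((10, 10), 2.5)                    # scalar decay as a matrix
adjacency = rng.uniform(0.0, 0.15, size=(10, 10))  # placeholder infectivity

sim = SimuHawkesExpKernels(adjacency=adjacency, decays=decays,
                           baseline=baseline, end_time=10,
                           verbose=False, seed=42)
sim.simulate()
# sim.timestamps holds one array of event times per dimension; merging
# and sorting yields a single (time, label) stream as used in the paper.
stream = sorted((t, e) for e, ts in enumerate(sim.timestamps) for t in ts)
```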
PGEM. We implement the PGEM generator (Bhattacharjya et al., 2018) to generate 5-dimensional event datasets governed by the PGEM dynamics. The parameters are the conditional intensities (lambdas) for each event type given parental states, the parental configuration (parents), the windows for each parental state (windows), and the end time. We describe these for the 5 models A, B, C, D and E in our study; a sketch of how the parameters define intensities follows the listings below. End time is 100 across all models.
A has the following parameters: 'parents': {'A': [], 'B': [], 'C': ['B'], 'D': ['A', 'B'], 'E': ['C']}, 'windows': {'A': [], 'B': [], 'C': [15], 'D': [15, 30], 'E': [15]}, 'lambdas': {'A': {(): 0.2}, 'B': {(): 0.05}, 'C': {(0,): 0.2, (1,): 0.3}, 'D': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2}, 'E': {(0,): 0.1, (1,): 0.3}}.
B has the following parameters: 'parents': {'A': ['B'], 'B': ['B'], 'C': ['B'], 'D': ['A'], 'E': ['C']}, 'windows': {'A': [15], 'B': [30], 'C': [15], 'D': [30], 'E': [30]}, 'lambdas': {'A': {(0,): 0.3, (1,): 0.2}, 'B': {(0,): 0.2, (1,): 0.4}, 'C': {(0,): 0.4, (1,): 0.1}, 'D': {(0,): 0.05, (1,): 0.2}, 'E': {(0,): 0.1, (1,): 0.3}}.
C has the following parameters: 'parents': {'A': ['B', 'D'], 'B': [], 'C': ['B', 'E'], 'D': ['B'], 'E': ['B']}, 'windows': {'A': [15, 30], 'B': [], 'C': [15, 30], 'D': [30], 'E': [30]}, 'lambdas': {'A': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2}, 'B': {(): 0.2}, 'C': {(0, 0): 0.2, (0, 1): 0.05, (1, 0): 0.4, (1, 1): 0.3}, 'D': {(0,): 0.1, (1,): 0.2}, 'E': {(0,): 0.1, (1,): 0.4}}.
D has the following parameters: 'parents': {'A': ['B'], 'B': ['C'], 'C': ['A'], 'D': ['A', 'B'], 'E': ['B', 'C']}, 'windows': {'A': [15], 'B': [30], 'C': [15], 'D': [15, 30], 'E': [30, 15]}, 'lambdas': {'A': {(0,): 0.05, (1,): 0.2}, 'B': {(0,): 0.1, (1,): 0.3}, 'C': {(0,): 0.4, (1,): 0.2}, 'D': {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.05, (1, 1): 0.2}, 'E': {(0, 0): 0.1, (0, 1): 0.02, (1, 0): 0.4, (1, 1): 0.1}}.
E has the following parameters: 'parents': {'A': ['A'], 'B': ['A', 'C'], 'C': ['C'], 'D': ['A', 'E'], 'E': ['C', 'D']}, 'windows': {'A': [15], 'B': [30, 30], 'C': [15], 'D': [15, 30], 'E': [15, 30]}, 'lambdas': {'A': {(0,): 0.1, (1,): 0.3}, 'B': {(0, 0): 0.01, (0, 1): 0.05, (1, 0): 0.1, (1, 1): 0.5}, 'C': {(0,): 0.2, (1,): 0.4}, 'D': {(0, 0): 0.05, (0, 1): 0.02, (1, 0): 0.2, (1, 1): 0.1}, 'E': {(0, 0): 0.1, (0, 1): 0.01, (1, 0): 0.3, (1, 1): 0.1}}.
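To make the role of these parameters concrete, here is a minimal sketch (our own illustration, not the generator of Bhattacharjya et al. (2018)) of how a PGEM intensity is read off: each parent is marked active if it occurred within its window before time t, and the tuple of indicators indexes the lambdas table. Model A's parameters are used.

```python
parents = {'A': [], 'B': [], 'C': ['B'], 'D': ['A', 'B'], 'E': ['C']}
windows = {'A': [], 'B': [], 'C': [15], 'D': [15, 30], 'E': [15]}
lambdas = {'A': {(): 0.2}, 'B': {(): 0.05},
           'C': {(0,): 0.2, (1,): 0.3},
           'D': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2},
           'E': {(0,): 0.1, (1,): 0.3}}

def intensity(label, t, history):
    """history: list of (time, label) events observed before t."""
    # A parent is active (1) if it occurred within its window before t.
    state = tuple(
        int(any(t - w <= s < t and y == p for s, y in history))
        for p, w in zip(parents[label], windows[label])
    )
    return lambdas[label][state]

# e.g., intensity('D', 20.0, [(8.0, 'A'), (16.0, 'B')])
# -> both parents active -> lambdas['D'][(1, 1)] = 0.2
```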
A.2 REAL DATASETS
Cosmetics and Electronics contain user-level online transactions in an electronics store and a cosmetics store, respectively. While the original cosmetics dataset from Kaggle contains multiple months of transactions, we used the one from Dec 2019. Similarly, we also used the electronics dataset from Dec 2019. Both share the same four types of events: 'view', 'cart', 'remove-from-cart' and 'purchase'. The two datasets record transaction events in seconds. To optimize computation for transformer models, we filtered out sequences longer than 300 events or shorter than 30 and scaled the timestamps into [0, 1] to avoid numerical issues.
DeFi-Mainnet and DeFi-Polygon. DeFi-Mainnet is built on the more widely used Ethereum blockchain. Polygon is a scalable sidechain of Ethereum that allows for much faster and lower-fee transactions than the original Ethereum blockchain. The difference in fee structure produces quite different dynamics in the two AAVE lending protocols. DeFi-Polygon has many more users and transactions per user, but much less total value locked than DeFi-Mainnet. The Polygon users are much more likely to engage in risky but potentially profitable "Yield Farming" transactions. Foundation methods would be very useful in modeling the many other lending protocols in AAVE or other lending platforms which are new or less popular and thus have fewer transactions. The 6 types of actions a user performs are: 'borrow', 'collateral', 'deposit', 'liquidation', 'redeem', 'repay'. The original timestamps are recorded as Unix timestamps. We filtered out sequences longer than 300 events or shorter than 30 and scaled the timestamps into [0, 1] to avoid numerical issues.
ACLED-India and ACLED-Bangladesh contain sequences where each sequence involves an actor engaged in armed conflict (i.e. riots and protests) in the respective country. The former involves events happening from 2016 to 2022, and the latter from 2010 to 2021. The unit of each timestamp is days. There are 6 types of events in ACLED-India: 'Battles', 'Explosions/Remote violence', 'Riots', 'Violence against civilians', 'Protests', and 'Strategic developments'. The 4 overlapping types in ACLED-Bangladesh are 'Battles', 'Explosions/Remote violence', 'Riots', and 'Violence against civilians'. We filtered out sequences longer than 300 events or shorter than 2 and scaled the timestamps into [0, 1] to avoid numerical issues; a sketch of this shared preprocessing follows below.
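A minimal sketch of the shared preprocessing, assuming each sequence is a list of (time, label) pairs; the function name is illustrative, and `min_len` is 30 for the financial and e-commerce datasets but 2 for the ACLED datasets.

```python
def preprocess(sequences, min_len=30, max_len=300):
    # Keep sequences whose lengths fall in the stated range.
    kept = [s for s in sequences if min_len <= len(s) <= max_len]
    scaled = []
    for seq in kept:
        times = [t for t, _ in seq]
        t0, t1 = min(times), max(times)
        span = (t1 - t0) or 1.0  # guard against a zero-length span
        # Min-max scale timestamps into [0, 1] to avoid numerical issues.
        scaled.append([((t - t0) / span, y) for t, y in seq])
    return scaled
```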
A.3 MODEL IMPLEMENTATION AND (PRE)TRAINING
Pre-training. Our pre-training model adapts code from Zuo et al. (2020)9. A full repo will be released upon acceptance. The procedure is fully described by Algorithm 1. We train our model via stochastic gradient descent, with the Adam optimizer (Kingma & Ba, 2014) used for optimization. The default transformer architecture we employed for pre-training is the following: the number of blocks for the multi-headed self-attention module is 4; the dimension of the value vector after attention has been applied is 512; the number of attention heads is 4; the dimension of the hidden layer of the feed-forward neural network is 1024; the dimension of the value vector is 512; the dimension of the key vector is 512; dropout is 0.1. We train for 100 epochs with a learning rate of 0.0001. γ is set to 1 for all experiments other than the one on ACLED-India, where we use 10.
9https://github.com/SimiaoZuo/Transformer-Hawkes-Process/tree/master/transformer
Fine-tuning. The fine-tuning model consists of a feed-forward network with 3 hidden layers of dimension 512. We train with the Adam optimizer with a learning rate of 0.001 for 100 epochs. α is set to 0.01 for all experiments; a sketch of such a head appears below.
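A minimal PyTorch sketch of the fine-tuning head described above; the exact wiring of the shared MLP body and the two output heads is our assumption, with names chosen for illustration.

```python
import torch
import torch.nn as nn

class FineTuneHead(nn.Module):
    def __init__(self, d_model=512, n_types=6, hidden=512):
        super().__init__()
        # 3 hidden layers of dimension 512, shared by both tasks.
        self.body = nn.Sequential(
            nn.Linear(d_model, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.type_out = nn.Linear(hidden, n_types)  # next-type logits
        self.time_out = nn.Linear(hidden, 1)        # next-time regression

    def forward(self, h):
        z = self.body(h)
        return self.type_out(z), self.time_out(z).squeeze(-1)
```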
PE+TE. The combined TE + PE improves optimization even when trained with only 2 samples, whereas using TE alone results in compromised optimization (see Figure 3).
Algorithm 1: Pretraining of Event-former
Given: dataset S with D sequences, each of length d_l with events {(t_i, y_i)}_{i=1}^{d_l}; batch size b
Insert void epochs:
S' = []
for d ← 1 to D do
    seq' = []
    for i ← 1 to d_l − 1 do
        t'_i ∼ Unif(t_i, t_{i+1})
        seq'.append((t'_i, null))
    end
    seq_new = Merge(seq, seq')
    S'.append(seq_new)
end
Masking to obtain S'_m:
for d ← 1 to D do
    for i ← 1 to d'_l − 1 do
        mask ∼ Bern(0.15)
        if mask == 1 then
            (0, null) ← (t'_i, y'_i)
        end
    end
end
split(S'_m) := S'_m,tr = {S'_m,k}_{k=1}^{K}, S'_m,dev
for epoch ← 1 to N do
    for iteration ← 1 to ⌈K/b⌉ do
        Sample a batch of sequences B' from S'_m,tr
        Compute L_event(B') (Eq. 5)
        Back-propagate with gradient ∇_{θ,ϕ} L_event(B') (θ: transformer model parameters; ϕ: weights and biases for regression and classification)
        Update parameters of network θ, ϕ
    end
    Evaluate L_event(S'_m,dev); stop training if not improving in 5 epochs
end
Return: optimal parameters ϕ*, θ*
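The data-preparation half of Algorithm 1 can be summarized by the following minimal Python sketch; `NULL` and `MASK` are illustrative sentinel labels standing in for the null and [MASK] tokens.

```python
import random

NULL, MASK = -1, -2  # hypothetical sentinel labels

def insert_void(seq):
    # One void epoch drawn uniformly between each consecutive event pair.
    out = []
    for (t0, y0), (t1, _) in zip(seq, seq[1:]):
        out.append((t0, y0))
        out.append((random.uniform(t0, t1), NULL))
    out.append(seq[-1])
    return out

def mask_epochs(seq, frac=0.15):
    # Bernoulli-mask epochs: timestamp -> 0, label -> MASK; keep targets
    # (the ground truth for the loss in Eq. 5) aligned with positions.
    targets, masked = [], []
    for t, y in seq:
        if random.random() < frac:
            targets.append((t, y))
            masked.append((0.0, MASK))
        else:
            targets.append(None)
            masked.append((t, y))
    return masked, targets
```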
A.4 BASELINES MODELS AND IMPLEMENTATION
T-RMTPP and T-ERPP. The original RMTPP and ERPP models are LSTM-based. The transformer architecture we used for T-RMTPP and T-ERPP is adapted from Zuo et al. (2020). We used the recommended set of parameters for training these two models.
T-LNM. The original LNM is RNN/LSTM/GRU-based. The transformer architecture we used for T-LNM is adapted from Zuo et al. (2020). We used 64 mixtures of lognormal components to model the density of log inter-event times.
THP. We used the recommended set of parameters to train the model as-is.
A.5 PROOF OF THEOREM 1: TRANSFORMER WITH COMBINED TEMPORAL ENCODING AND POSITION ENCODING
Consider a general type of transformer architecture described by Yun et al. (2019). Our proof that a transformer with combined temporal encoding and position encoding is a universal approximator for any continuous sequence-to-sequence function with compact support follows similarly to the proof for a transformer with position encoding (Theorem 3 in Yun et al. (2019)). Without loss of generality, consider a sequence with timestamps {t_1, t_2, ..., t_n} and let t_i be integer-valued for i ∈ {1, 2, 3, ..., n} with t_i < t_j for all i < j. If the t_i are decimal-valued, we multiply by a constant to transform each given timestamp to an integer without affecting the dynamics of the event sequence. We choose a d-dimensional temporal encoding for the n event epochs to be the following:
$$T = \begin{pmatrix} 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \end{pmatrix}$$

Similarly, we choose a d-dimensional position encoding for the n event epochs to be the following:
$$P = \begin{pmatrix} 0 & 1 & \cdots & n \\ 0 & 1 & \cdots & n \\ \vdots & \vdots & & \vdots \\ 0 & 1 & \cdots & n \end{pmatrix}$$

The combined encoding is:
$$PE + TE = \begin{pmatrix} 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \end{pmatrix}$$

The strict temporal order of the timestamps gives $t_i - t_1 + 1 < t_{i+1} - t_1 + 1$ for all $i \in \{1, 2, 3, ..., n-1\}$. This guarantees that for all rows the coordinates are monotonically increasing, so the input values can be partitioned into cubes. The rest of the proof follows directly from the proof of Theorem 3 in Yun et al. (2019) by replacing $n$ with $t_n$ and performing quantization by feed-forward layers, contextual mapping by attention layers, and function value mapping by feed-forward layers. | 1. What are the main contributions and strengths of the paper regarding transformer-based models and pre-training frameworks?
2. What are the weaknesses or areas for improvement regarding experimental comparisons and clarification of certain aspects?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The two main contributions of the paper are:
A transformer-based model for predicting time and types of events in marked event sequences.
A pre-training framework for such models. During pre-training, types of some events are randomly masked, and "void" events are introduced at random times. The goal of the pre-training objective is to correctly classify the types of the events (void or true type of a masked event) in the sequence.
The proposed model combined with the proposed pre-training approach achieves superior results compared to the baselines in terms of time and event type prediction.
Strengths And Weaknesses
Strengths:
Empirical results: The proposed model achieves consistent improvements compared to the baseline methods. The experimental setup is designed in a way to ensure fair comparison to the baselines -- all models use the same transformer-based encoder, thus allowing to evaluate the effect of the proposed training procedure.
Reproducibility: The experimental setup is described very precisely, and the appendix contains all the information necessary to reproduce the results of the paper.
Clarity of presentation: The paper is written very clearly and is very easy to follow.
Weaknesses:
To support the central claim of the paper - that pre-training beneficially affects the predictive performance of the model - it would be necessary to perform another experiment, where we compare the performance on real-world datasets of (1) a model trained only using "fine-tuning" (as described in Sec 3.5) and (2) a model trained with pre-training + fine-tuning. Currently, only results for option (2) are included in the paper.
The remaining points are not major weaknesses, but rather questions that I would like to see clarified in the updated version.
Are the embedding matrices for event types shared across the pre-training and fine-tuning phases? How is the problem of aligning the event types solved in this case? Different event types can have different effects (excitatory vs. inhibitive), so the matching between event types could lead to very different models.
It would be beneficial to include versions of Table 4 with some measure of variance of the results. Also, is there a specific reason why pre-training results in the other direction are not included in the experiments (e.g., Electronics -> Cosmetics)?
Minor suggestions:
The proposed model is not strictly speaking a point process model - it's not a generative model that can be used to sample new event sequences, but is defined as a discriminative model that directly predicts the type & time of the next event. This aspect should be clarified in the paper. It could be converted to a generative model with small modifications by interpreting the loss function (Eq. 6) as NLL of some TPP model (Eq. 1) (see, e.g., table 2 in https://arxiv.org/abs/2107.03354 for the possible connection).
Section 3.3 - should the attention matrix actually be lower triangular?
Typo in Eq. 1, should be either $\sum_{i=1}^{n} \sum_{e=1}^{M} \mathbb{I}[y_i = e] \log \lambda_e(t_i)$ or $\sum_{i=1}^{n} \log \lambda_{y_i}(t_i)$.
Notation $H_i$ is used in Figure 1, but only gets introduced much later in the paper.
Would be helpful to indicate which metrics are in higher-is-better and which are in lower-is-better formats in Tables 1 and 3.
Clarity, Quality, Novelty And Reproducibility
Described above. |
ICLR | Title
Event-former: A Self-supervised Learning Paradigm for Temporal Point Processes
Abstract
Self-supervision is one of the hallmarks of representation learning in the increasingly popular suite of foundation models including large language models such as BERT and GPT-3, but it has not been pursued in the context of multivariate event streams, to the best of our knowledge. We introduce a new paradigm for self-supervised learning for temporal point processes using a transformer encoder. Specifically, we design a novel pre-training strategy for the encoder where we not only mask random event epochs but also insert randomly sampled ‘void’ epochs where an event does not occur; this differs from the typical discrete-time pretext tasks such as word-masking in BERT but expands the effectiveness of masking to better capture continuous-time dynamics. The pre-trained model can subsequently be fine-tuned on a potentially much smaller event dataset, similar to other foundation models. We demonstrate the effectiveness of our proposed paradigm on the next-event prediction task using synthetic datasets and 3 real applications, observing a relative performance boost of as high as up to 15% compared to state-of-the art models.
1 INTRODUCTION
Transfer learning occurs when a model is pre-trained on a task, such as classification on a large labelled dataset such as ImageNet, and the model’s ‘knowledge’ is then applied to another task, such as classification on medical images. In the current era of AI, transfer in domains such as natural language processing and image processing often leverages self-supervised learning, where pre-training for representation learning is done using unlabeled data. Although the fundamental ideas of transfer are not new, there is a clear emerging paradigm around foundation models (Bommasani et al., 2021), such as BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020), which are trained with diverse unlabeled data at scale using self-supervision. These pre-trained models are then fine-tuned and adapted to different downstream tasks that respectively come with limited labeled data. Recent progress has been possible primarily due to improvements in hardware, development of the attention mechanism (Vaswani et al., 2017), and availability of substantial unlabeled training data.
We extend and pursue self-supervised learning in the context of multivariate event streams, i.e. data involving irregular occurrences of different types of events. Event stream datasets are widely available across domains, for instance in the form of social network interactions, customer transactions, system logs, financial events, health episodes, etc. It is well known that temporal point processes provide a sound mathematical framework for modeling such datasets (Daley & Jones, 2003). In this paper, we introduce a new paradigm for self-supervised learning for temporal point processes using a transformer encoder. Although self-supervised learning has recently been explored for classical time series data (Zerveas et al., 2021), to the best of our knowledge, self-supervision has not yet been explored in the context of point processes.
Neural models for temporal point processes (e.g. Du et al., 2016; Mei & Eisner, 2016; Xiao et al., 2017) have advanced the state of the art in event modeling, particularly for the task of event prediction. The typical approach in this line of work is to train a neural network on a large amount of event data. Our proposed paradigm differs from current standard practices by taking a transfer learning approach analogous to foundation models: to first pre-train a neural model on a (potentially) large event dataset and then fine-tune the model for prediction on a limited event dataset.
Transfer learning with event models has many potential applications. For instance, there may be abundant data from electronic health records containing information around a particular patient population; this data could potentially be leveraged for another population whose data is either unavailable or harder to obtain. This is an issue relevant to health equity since there may be data related concerns for some under represented populations. Similarly, financial event data from an industrial sector could potentially be transferred to another. Electronic commerce is yet another illustrative domain where transfer learning techniques may help to transfer purchase behaviors across a large pool of user populations and/or product types.
Although there is some related work around multi-task learning with event streams, such as through deploying hierarchical Gaussian process models (Lian et al., 2015) or time-scale graphical event models (Monvoisin & Leray, 2019), this line of research typically considers learning by pooling together disparate data from the same population. In contrast, we tackle the more ambitious effort of transferring from one or multiple event datasets to another. Specifically, we consider the typical setting of homogeneous transfer learning (Zhuang et al., 2021), where all datasets that get pooled for pre-training involve the same set of event types. This allows for potentially leveraging different datasets even though there may be realistic variations with respect to parametric or structural dependencies present in the corresponding event streams across each dataset.
Contributions: We make the following major contributions:
• We introduce a self-supervised paradigm for transfer learning in temporal point processes. A crucial innovation is to explicitly incorporate information about the absence of events, which improves the modeling of temporal dynamics without burdening training efficiency.
• We propose a masked event model, which is a new way to derive a pretext task for selfsupervision targets in transformer models for event streams.
• Our empirical evaluation demonstrates improved transfer learning performance for event prediction on synthetic and real datasets relative to state of the art transformer event models.
2 BACKGROUND AND RELATED WORK
2.1 TEMPORAL POINT PROCESSES
Multivariate temporal point processes (MTPP) are elegant mathematical models for event streams where event types/labels from some discrete set occur in continuous time (Daley & Jones, 2003). A multi-dimensional MTPP generates sequences with timestamps and associated labels of the form $S = \{(t_i, y_i)\}_{i=1}^{n}$ where $t_i$ is the time of occurrence of the $i$th event and $y_i$ is its label. The cardinality of the label set $L$ is $M$. A strictly temporally ordered event stream assumes a period of events observed within $[0, T]$, with $t_i \in [0, T]$ for all $i \in \{1, 2, ..., n\}$. MTPPs are characterized by conditional intensity functions for each label representing the rate at which it occurs at any time $t$, $\lambda_e(t) = \lim_{\Delta t \to 0} \frac{\mathbb{E}\left(N_e(t + \Delta t \mid h_t) - N_e(t \mid h_t)\right)}{\Delta t}$, where $N_e(t \mid h_t)$ counts the number of occurrences of label $e$ prior to historical occurrences (or simply history) $h_t$.
Several MTPP models such as multivariate versions of the classic Hawkes process (Hawkes, 1971) and piecewise-constant models (Gunawardana & Meek, 2016; Bhattacharjya et al., 2018) assume some parametric form of the conditional intensity function. Neural MTPPs are more recent variants that capture the underlying dynamics using neural networks. Recent years have witnessed rising popularity of neural MTPPs due to their state-of-the-art performance on benchmark datasets for predictive tasks (Du et al., 2016; Mei & Eisner, 2016; Xiao et al., 2017; Omi et al., 2019; Shchur et al., 2019; Zuo et al., 2020). A common training objective in neural MTPPs involves minimizing the negative log-likelihood. The log-likelihood of observing a sequence S is the sum of log-likelihood of events and non-events and can be computed as:
$$\log p(S) = \sum_{i=1}^{n} \sum_{e=1}^{M} \log \lambda_e(t_i) - \int_{0}^{T} \sum_{e=1}^{M} \lambda_e(t)\, dt \qquad (1)$$
Many of these neural MTPPs assume some form of evolution dynamics between events in order to compute the second term in Eq. 1, such as recurrent neural network (RNN) evolution (Xiao et al.,
2017), exponential decay (Mei & Eisner, 2016), intensity-free modeling of the integral (Omi et al., 2019), or the usage of explicit epochs indicating absence of events (Gao et al., 2020).
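For intuition, below is a minimal sketch of evaluating the log-likelihood in Eq. 1 numerically for arbitrary intensity functions; the uniform-grid approximation of the integral term is an illustrative stand-in for the parametric and intensity-free treatments cited above.

```python
import numpy as np

def log_likelihood(events, intensities, T, grid=1000):
    """events: list of (t_i, e_i) pairs; intensities: one callable per label."""
    # First term of Eq. 1: log-intensity at each observed event epoch.
    event_term = sum(np.log(intensities[e](t)) for t, e in events)
    # Second term: the integral of the total rate, approximated on a grid.
    ts = np.linspace(0.0, T, grid)
    total_rate = np.sum([[lam(t) for t in ts] for lam in intensities], axis=0)
    return event_term - np.trapz(total_rate, ts)
```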
2.2 TRANSFORMERS FOR EVENT DATA
Attention (Xiao et al., 2019) and transformer-based event models have shown promising results in recent years, including the self-attentive Hawkes process (SAHP) (Zhang et al., 2020), transformer Hawkes process (THP) (Zuo et al., 2020) and attentive neural point process (ANPP) (Gu, 2021). The self-attention mechanism, in our context, relates different event instances of a single stream in order to compute a representation of the stream. The architecture of transformers for MTPPs generally consists of an embedding layer and a self-attention layer. In the transformer Hawkes process (THP) (Zuo et al., 2020), for example, the embedding layer includes a time embedding and event-type embedding. Time embedding is achieved through:
$$[z(t_j)]_i = \begin{cases} \cos\left(t_j / 10000^{\frac{i-1}{d}}\right) & \text{if } i \text{ is odd} \\ \sin\left(t_j / 10000^{\frac{i}{d}}\right) & \text{if } i \text{ is even} \end{cases} \qquad (2)$$
where $t_j$ is a timestamp and $d$ is the dimension of the encoding. The time embedding and the one-hot encoded types are combined to form the embedded input $X$. For a sequence $S = \{(t_i, y_i)\}_{i=1}^{L}$, the time embedding $z_i$ for each instance is specified in Eq. 2, and for the entire sequence of length $L$ the embedding is $Z \in \mathbb{R}^{M \times L}$. Type embeddings are obtained through the product of a trainable embedding matrix $U \in \mathbb{R}^{M \times K}$ and the one-hot encoded vectors $y_i$ for all type instances, i.e. $X = (UY + Z)^T$ where $Y = [y_1, y_2, ..., y_L]$. $Q$, $K$, $V$ are the query, key and value matrices; they are linear transformations of $X$, i.e. $Q = XW^Q$, $K = XW^K$, $V = XW^V$ where $W^Q$, $W^K$, $W^V$ are trainable weights. The attention output $C$ is computed by the following:
$$C = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{M_k}}\right)V = A_s V \qquad (3)$$
where $A_s$ denotes the attention score matrix. The output $C$ is then fed into a pointwise feed-forward neural network (FFN) (commonly with a residual connection) to learn a high-level representation of the sequence for modeling the conditional intensity functions.
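A minimal NumPy sketch of the temporal encoding in Eq. 2 is given below; the 1-based dimension index $i$ matches the statement of the equation.

```python
import numpy as np

def temporal_encoding(timestamps, d):
    # Builds the (L, d) time embedding of Eq. 2: cosine on odd
    # dimensions, sine on even dimensions (1-based indexing).
    t = np.asarray(timestamps, dtype=float)
    z = np.empty((len(t), d))
    for i in range(1, d + 1):
        if i % 2 == 1:
            z[:, i - 1] = np.cos(t / 10000 ** ((i - 1) / d))
        else:
            z[:, i - 1] = np.sin(t / 10000 ** (i / d))
    return z
```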
2.3 SELF-SUPERVISION FOR SEQUENCE DATA
Sequence models such as RNNs have achieved much success in various applications, but more recent methods typically rely on transformer architectures (Vaswani et al., 2017) and the attention mechanism, especially in the popular application area of natural language processing (NLP). With fine-tuning on downstream tasks, these pre-trained models lead to sizable improvements over the previous state of the art. However, large-scale transformers are also bulky and resource-hungry, typically with billions of parameters (Brown et al., 2020), and cost millions of USD to train (Floridi & Chiriatti, 2020).
Self-supervision is typically achieved in sequence models by deriving an effective pretext task that is trained through supervised learning with a masking strategy for indicating self-supervision targets. For example, in BERT, about 15% of the words are randomly masked using an independent Bernoulli model of masking, and replaced with a new [MASK] label or a random word. In some recent work on self-supervision for time series data (Zerveas et al., 2021), the masking is done to ensure longer lengths of masked values, all replaced with the value 0, to get geometrically distributed run-lengths of masked values. Here we introduce a novel pretext task with a masking strategy specially tailored for asynchronous event data in continuous time. As we show later through experimental ablation studies, straightforward application of prior discrete-time sequence-based masking strategies proves inadequate for event data, because the intensity rates of events that represent continuous-time dynamics may vary in general between any two consecutively observed events. This aspect distinguishes event streams from discrete-time data such as time series and has not been addressed by masking in standard temporal transformers.
3 A SELF-SUPERVISED LEARNING PARADIGM
We introduce a self-supervised modeling paradigm with a transformer-based architecture for multivariate event streams that we refer to as Event-former. In particular, we present a novel pretext
training task specific to event data to learn a suitable representation for event streams. Such a representation can then be used by a small feed forward network for fine-tuning on a sequential next event prediction task. A high-level figurative scheme is shown in Figure 1; we provide details in the subsections to follow.
Our proposed pre-training paradigm from Figure 1 (left) involves three major aspects that distinguish it from prior work: 1) injecting void events (which are formalized in the next paragraph) to improve the representation learning of event dynamics in continuous time, 2) an effective masking strategy that uses both positional and temporal encoding on the above augmented event stream with void events, and 3) forcing the attention mechanism to adhere to the temporal order of events. We explain each aspect below before prescribing the full pre-training scheme, followed by a brief explanation of the fine-tuning procedure as depicted in Figure 1 (right).
3.1 VOID EVENTS IN TRANSFORMERS
Recall that an observed event stream is of the form $S = \{(t_i, y_i)\}_{i=1}^{n}$ where $t_i$ and $y_i$ are the $i$th event's time stamp and label, respectively. We consider a modified stream where we inject a predetermined number of void events involving epochs where no event occurs; these are of the form $(t'_i, \text{null})$ where 'null' is a new label signifying absence of an event occurrence. The modified stream is denoted $S' = \{(t'_i, y'_i)\}_{i=1}^{n'}$ where $S \subset S'$ and $y'_i \in \{L \cup \text{null}\}$ for all $i$. The role of the void events is to provide additional information about the dynamics of the continuous-time process by explicitly indicating that no event occurs within two consecutively observed events.
Explicitly specifying selected epochs where events do not happen has been used previously in some related work; see for instance the notion of 'fake epochs' in Gao et al. (2020), which was originally developed for RNNs and helped boost the performance of a neural point process on a model fitting task. The main idea is that in point process models, the inter-event duration between two successive events is just as important as the event epochs themselves. This is seen from the integral terms in the conditional-intensity-based log-likelihood expression for event streams (see Eq. 1). Just as the internal hidden state of a recurrent neural network (such as an LSTM) only changes in discrete steps upon seeing the next token, the transformer-based representations behave in a similar manner. In reality, the conditional intensity rates can evolve continuously. Introducing void epochs in the inter-event void space provides a convenient way to force the evolution of the transformer-based internal representations inside the inter-event interval, and this in turn leads to improvement in both the pre-training and the fine-tuning steps for next-event prediction tasks. We thereby address an inherent shortcoming of transformers for event datasets in a non-parametric manner, i.e. without the need for specific parametric or process assumptions such as in the transformer Hawkes process (Zuo et al., 2020). However, using void events in transformers requires further adaptation, particularly during the training process where masking is also used. Next we present an effective approach for masking with void event labels.
3.2 MASKING STRATEGY & INPUT ENCODING
We consider a masked event model (MEM) pre-training device that operates on the modified event stream S′, where some events (t′i, y ′ i) are randomly masked for the task of prediction given history. Our masking approach is broadly similar to past work but specialized to our model. When an event epoch in the above expanded event stream S′ is masked, its timestamp is replaced with the value zero and its label is replaced with the value [MASK]. Further, for the choice of which tokens get masked, our model admits both the independent strategy used in BERT (Devlin et al., 2018) as well as the serially correlated temporal strategy used in time series (Zerveas et al., 2021). An ablation study shown later in Table 2 indicates that either of these strategies works well when combined with the proposed MEM model, and leads to improvements through transfer learning in both MSE and accuracy for predicting the next event time and label respectively. We also note that the results are worse without the proposed MEM model’s expansion of the event stream, i.e. without the injection of void event epochs. Our model differs from existing literature on masking in that both actual events and void events are admitted as candidates for masking. This combined approach leads to improved representations in experimental evaluation. The MEM model based representations are able to implicitly learn about both the event arrival rates due to masked learning with real epochs, as well as the inter-event empty spaces due to masked learning with void epochs.
In addition to the choice of masking strategy in transformer models, one also needs an encoding for the position information in the input sequence so that the uniqueness of each location is retained to some extent. Traditional positional encoding (PE) (Vaswani et al., 2017) used in transformers is not sufficient by itself for event stream data because events are associated with irregular time stamps, unlike natural language sequences. Similarly, temporal encoding (TE), such as proposed in prior work (Zuo et al., 2020), also proves inadequate by itself in our setting because our masking strategy replaces the time stamps of masked events with zero. Note that this would render indistinguishable any two distinct events (i.e. with distinct time stamps) of the same event type in the input event stream. As seen in Figure 3 in the Appendix, using TE alone leads to early plateauing of the loss function, and this is often a telltale signature of poor end-task performance. To address this issue, we propose the combined encoding strategy of using PE and TE together. We also show that the combined encoding strategy preserves the universal approximation results of standard transformers.
Theorem 1 Transformers with combined PE and TE are universal approximators for any continuous sequence-to-sequence function with compact domain, i.e. they approximate any continuous function $f: X \to H$ with $\epsilon$ error w.r.t. the $p$-norm, where $1 \le p < \infty$ and $X, H \in \mathbb{R}^{d \times L}$.
Please refer to the Appendix for a proof of the above result. Yun et al. (2019) establish that transformers with PE are universal approximators for any continuous sequence-to-sequence function with compact support (Theorem 3 in their paper), a result applicable to language sequences. The aforementioned result, however, applies uniquely to event streams. More importantly, it separates two distinct event epoch encodings to (potentially) distinguish representations, and establishes the predictability and learnability of a transformer model (with a certain structure) for the MEM.
3.3 TEMPORAL UPPER TRIANGULAR ATTENTION
While masked language models such as BERT leverage contextual information from both prior and post tokens of interest, here we only consider prior tokens. This is because our main task of interest is event prediction given only the past, as typical in most real-world prediction problems, which prohibits us from using post token context. We apply an upper triangular attention so that a current event epoch only attends to prior events. In MEM, any representation in pre-training as shown in Figure 1 (left) for a masked event (regardless of whether the event is observed or void) only attends to history in the past. With these pieces that define the MEM model, we next describe the pre-training and fine-tuning steps that respectively produce and exploit the self-supervised representations.
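A minimal single-head NumPy sketch of this temporally restricted attention follows; entries of the score matrix corresponding to future events are set to $-\infty$ before the softmax, so event $i$ attends only to events $j \le i$.

```python
import numpy as np

def causal_attention(Q, K, V):
    # Q, K, V: (L, d_k) query/key/value matrices for one event stream.
    L, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)
    scores[np.triu_indices(L, k=1)] = -np.inf  # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V
```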
3.4 PRE-TRAINING SCHEME
Pre-training using the MEM is conducted by first randomly injecting void events into event streams, masking some of the events, and then computing a self-supervised loss determined by predicting the masked events. In this fashion, the MEM is trained to not only predict the time and label of observed events, but also try to be as accurate as possible at determining when events do not happen. In the most general setting, suppose that the sequence of time stamps for void events, denoted τ , is randomly generated from some distribution P. A special case of this random injection is when exactly 1 void event is uniformly generated between each pair of consecutive events in S to create modified event stream S′. After randomly selecting a pre-determined percentage of events to mask, the loss for the self-supervised prediction task can be computed as:
$$\mathcal{L} = \mathbb{E}_{\tau \sim P}\left[\mathcal{L}_{event}(\hat{t}'_m, \hat{y}'_m;\, t'^{*}_m, y'^{*}_m)\right] \qquad (4)$$
where $m$ denotes the indices of the randomly selected masked events, similar conceptually to Devlin et al. (2018). The hat and star notation for $t$ ($y$) refer to the model's predicted time (label) and the ground truth time (label), respectively. Note that Eq. 4 will in practice be challenging to optimize, due to the stochastic objective and the additional computational cost of sampling and inserting void events between every two consecutive events in every event stream in a batch when performing stochastic gradient descent. The time complexity of such an insertion during training is $O(KL)$, where $K$ is the number of event streams and $L$ is the maximum length, via merging of the two sorted lists.
To reduce the computational cost and improve efficiency, we propose a practical solution by adopting a simpler but just as effective sampling strategy for void events. Specifically, we only sample void events once from the original dataset as an approximation, and then merge as a pre-processing step. Thus no additional computing cost occurs during training. Let $N_M$ be the total number of masked event epochs. We use the following to measure the prediction loss for each masked event, whether it is observed or void:
$$\mathcal{L}_{event} = \frac{1}{N_M} \sum_{i}^{N_M} \mathrm{CE}\left(\mathrm{softmax}(H'_m W^y + b^y)_{i,:},\, y'^{*}_{m,i}\right) + \gamma\, \mathrm{MSE}\left((H'_m w^t + b^t)_i,\, t'^{*}_{m,i}\right) \qquad (5)$$

where $H'_m$ is the masked high-level representation from the transformer model of a modified stream, and $W^y \in \mathbb{R}^{d \times M}$, $b^y \in \mathbb{R}^{M}$, $w^t \in \mathbb{R}^{d}$ and $b^t \in \mathbb{R}$ are trainable weights and biases for the label prediction cross-entropy (CE) loss and the time prediction mean square error (MSE). The index $i$ in the above equation denotes a general instance of a masked event epoch, and $i,:$ corresponds to the $i$th row of the output matrix. We use $\gamma$ as the trade-off between the two loss terms. It is worth noting that we use only one hidden layer for masked event prediction; we avoid using deep feed-forward networks to force the transformer model to learn a high-quality representation $H'_m$ that facilitates the fine-tuning process for downstream tasks.
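A minimal PyTorch sketch of this loss is given below; tensor names mirror the symbols in Eq. 5 and the single linear heads described above.

```python
import torch
import torch.nn.functional as F

def masked_event_loss(H_m, y_true, t_true, W_y, b_y, w_t, b_t, gamma=1.0):
    # H_m: (N_M, d) masked representations; y_true: (N_M,) label ids;
    # t_true: (N_M,) ground-truth times of the masked epochs.
    logits = H_m @ W_y + b_y              # (N_M, M) label scores
    ce = F.cross_entropy(logits, y_true)  # applies softmax, averages over N_M
    t_hat = H_m @ w_t + b_t               # (N_M,) predicted times
    return ce + gamma * F.mse_loss(t_hat, t_true)
```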
3.5 FINE-TUNING
After pre-training, the MEM can be applied to model any new event sequence $S$ and be further fine-tuned to obtain a better representation $\hat{H}_i$. Note that during fine-tuning, we do not include void events, which simplifies the training steps and is compatible with any existing approach. Each learned representation is then fed into a small feed-forward neural network for downstream tasks involving event prediction. In other words, our model fine-tunes by consuming each individual event representation $\hat{H}_i$ and predicting the next label $y_{i+1}$ as well as the next time $t_{i+1}$. The power of this approach comes primarily from the conversion of sequential prediction into tabular regression and classification. For an event dataset with $K$ event streams, each of length $n_k$, the loss in the fine-tuning step, $\mathcal{L}_{pred}$, is the following:
$$\mathcal{L}_{pred} = \sum_{l=1}^{K} \sum_{i=1}^{n_k} \mathrm{CE}\left(\mathrm{softmax}(\mathrm{MLP}(\hat{H}^l_i)),\, y^l_{i+1}\right) + \alpha\, \mathrm{MSE}\left(\mathrm{MLP}(\hat{H}^l_i),\, t^l_{i+1}\right) \qquad (6)$$

where $\alpha$ is a similar trade-off between cross-entropy and mean square error. Regression and classification share the same multi-layer neural network (MLP) for computational efficiency in our setting.
4 EXPERIMENTS
4.1 BASELINES
We use the following baselines for experiments. To focus attention on the potential benefits of self-supervision rather than the choice of neural architecture, we replace non-transformer architectures in baselines with a suitable counterpart transformer.
Recurrent Marked Temporal Point Process (Du et al., 2016) and Event Recurrent Point Process (Xiao et al., 2017). We replace the RNNs in both originally proposed models with transformers. We note that the original implementation1 only predicts the very last event $(t_n, y_n)$ given prior events; for a fair comparison, we therefore modify the code to evaluate the next inter-event time $d_i$, where $d_i = t_i - t_{i-1}$ for $i \in \{2, 3, ..., n\}$. Lognormal Mixture (Shchur et al., 2019). We replace the RNN with a transformer and take the expectation of the learned mixture model for next inter-event prediction.
Transformer Hawkes Process2 (Zuo et al., 2020). This model is representative of the current state-of-the-art for event sequence modeling. It already involves a transformer architecture and therefore does not need any modification.
We use the following acronyms for the aforementioned models, where the prefix 'T-' clarifies that some of these are transformer-based extensions: T-RMTPP, T-ERPP, T-LNM and THP. Following Zuo et al. (2020), we evaluate model performance on next event time prediction with root mean square error, and on next event label prediction with accuracy.
4.2 SYNTHETIC EXPERIMENTS
We conduct experiments using synthetic data generated from two representative parametric families of multivariate temporal point processes: multivariate Hawkes processes (Bacry et al., 2015) and proximal graphical event models (Bhattacharjya et al., 2018). We aim to pre-train a masked event model on a set of datasets and fine-tune the model on a different dataset for the event prediction task.
Hawkes-Exp Dynamics. We generate 400 samples each from 10-dimensional Hawkes process dynamics for 5 datasets (A, B, C, D, E) with different parameters and combine them to form a pre-training dataset. We further split each into 75-25 train-dev sets and use the dev set for hyperparameter selection. We also generate 5 folds of a dataset F with different parameters as the target; each fold contains 500 event sequences and is further split into 60-20-20 train-dev-test subsets. Final evaluation is performed on the test subsets.
PGEM Dynamics. We generate 500 samples each from the proximal graphical event model (PGEM) Bhattacharjya et al. (2018) generator with 4 datasets (A, B, C, D) of different parameters where each contains 5 event labels. We combine these to form the pre-training dataset. Similarly, we generate an additional 5 folds of a dataset E with different parameters as the target, each of which contains 500 event sequences. Each fold is further split into train-dev-test 60-20-20 subsets, and as before, final evaluation is performed on the test subsets.
Results. As shown in Table 1, Event-former achieves the best results for predicting both the next event time and event type as compared to all baselines. For the Hawkes-Exp generated data, it boosts prediction performance by 4-5% on average compared to the best baseline result; on PGEM the increase is around 1-3%. Beyond its efficacy, a benefit of our approach is its efficiency. While we only pre-train once, a typical fine-tuning procedure in this study involves a 20-fold smaller network – ∼1M trainable parameters compared to, say, the THP model with ∼20M trainable parameters at its recommended setting. This suggests our model learns a suitable representation of the event dynamics, especially for the Hawkes process. Figure 2 shows the T-SNE projection of learned representations of event data generated by the Hawkes-Exp model datasets A, B, C, D and E onto the 2D plane. Each model generates a unique fragment segment that somewhat overlaps with another model. The linear and curvaceous segment pattern observed here is not uncommon for projecting time-series embeddings onto a 2D plane (Wong & Chung, 2019). This overlapping of the representations from
1https://github.com/woshiyyya/ERPP-RMTPP 2https://github.com/SimiaoZuo/Transformer-Hawkes-Process
pre-training data (A, B, C, D and E) with the target data F provides a visual explanation for how various datasets compare in terms of the learned representation.
4.2.1 ABLATIONAL MASKING EXPERIMENTS
A typical deployment of MEM involves 3 components: inserted random void epochs, random selection of masks, and the masking fraction. Our default setting uses void events and uniformly randomly selects 15% of epochs for masking during pre-training. We perform 3 ablation studies on the synthetic datasets: 1) void vs. no void events, 2) geometric vs. random masks, and 3) mask fraction. Ablation 1 evaluates the effect of the injected random void epochs in MEM on prediction. As Table 2 shows, removing them causes a drop in type accuracy and an increase in RMSE for time prediction in both cases; in particular, the predictive performance deteriorates to below the baselines on Hawkes-Exp. The injection of void epochs is thus justified, as it produces competitive results under random masking. Ablation 2 compares the impact of two masking strategies: geometric and random. We employ the former from Zerveas et al. (2021), along with inserted void epochs. Geometric masks produce slightly worse results, particularly on the PGEM data, suggesting that masking consecutive segments may not aid in learning dynamics in the continuous-time setting. Ablation 3 compares the choice of fraction of randomly masked epochs. In general, we find no significant difference between 15% and 30% for event prediction.
4.3 REAL APPLICATIONS
Datasets. We perform transfer learning experiments on 3 real applications, as listed below. A descriptive summary of the 6 datasets used is shown in Table 3.
• Financial: Defi-Mainnet and Defi-Polygon are privately curated datasets involving user-level cryptocurrency transactions from the Aave website3. Mainnet and Polygon represent two different protocols/deployments on the platform. We test the algorithms on Mainnet after pre-training on Polygon.
3aave.com
Dataset          | # classes | # seqs. | Avg. length | # events | Data Type
Defi-Mainnet     | 6         | 20539   | 32          | 654844   | Financial
Defi-Polygon     | 6         | 33597   | 85          | 2856453  | Financial
Electronics      | 4         | 9993    | 20          | 195726   | E-Commerce
Cosmetics        | 4         | 19301   | 39          | 752109   | E-Commerce
ACLED-India      | 6         | 111     | 17          | 1934     | Political
ACLED-Bangladesh | 4         | 97      | 17          | 1697     | Political
Results. As demonstrated by its highest accuracy and lowest RMSE in Table 4, transfer learning with Event-former consistently improves upon all baselines on these datasets, with improvements ranging from 3% to 15%. The most impressive improvement of 15% is on time prediction in Defi-Mainnet, where pre-training on a different protocol appears to be sufficient to help with learning the dynamics. This likely suggests that the dynamics of real applications have noticeable shared similarities that the state-of-the-art transformer model approaches are unable to exploit. In addition, we observe that the transfer performance in general increases with increasing size of pre-training data. As shown in Table 3, the 3 pairs of datasets have different amounts of data; the Defi datasets have the highest number of samples, while the ACLED datasets have the lowest. This is an indication that Event-former may be able to generalize better with more pre-training data.
5 CONCLUSION
In this work, we propose a novel self-supervised paradigm for transfer learning in multivariate temporal point processes. We introduce the usage of void events for transformer architectures, which is unique among continuous-time event models, and design a masking strategy for predicting masked event epochs and the void spaces in-between. We empirically demonstrate the potential of our approach using synthetic as well as various real-world datasets. In particular, the improvement in prediction performance is noticeably significant on transfer tasks over many existing competitive transformer-based approaches. While this study focuses on the homogeneous transfer setting, our approach could potentially be extended to other more complex transfer settings, such as out-of-domain heterogeneous transfer (Zhuang et al., 2021) with datasets that contain non-overlapping event labels. We leave these more complex cases to future work.
4 https://www.kaggle.com/datasets/mkechinov/ecommerce-events-history-in-electronics-store
5 https://www.kaggle.com/mkechinov/ecommerce-events-history-in-cosmetics-shop
6 https://www.kaggle.com/datasets/saimasharleen/acled-bangladesh
7 https://www.kaggle.com/datasets/shivkumarganesh/riots-in-india-19972022-acled-dataset-50k
A APPENDIX
A.1 SYNTHETIC GENERATORS
We generated datasets from the Hawkes process and the Proximal Graphical Event Model. We describe the parameters used in our experiments to generate event datasets.
Hawkes-Exp. We use a standard library to generate datasets from the Hawkes dynamics8. The parameters are the baseline rate, decay coefficient, adjacency (infectivity matrix) and end time. We describe these four parameters for models A, B, C, D, E and F in our study. A. baseline = [0.1097627 , 0.14303787, 0.12055268, 0.10897664, 0.08473096, 0.12917882, 0.08751744, 0.1783546 , 0.19273255, 0.0766883], decay = 2.5, infectivity = [[0.15037453, 0.10045448, 0.10789028, 0.17580114, 0.01349208, 0.01654871, 0.00384014, 0.1581418 , 0.14779747, 0.16524382], [0.1858717 , 0.1517864 , 0.08765006, 0.14824807, 0.02246419, 0.12154197, 0.02722749, 0.17942359, 0.0991161 , 0.07875789], [0.05024778, 0.14705235, 0.0866379 , 0.10796424, 0.0035688 , 0.11730922, 0.11625704, 0.11717598, 0.17924869, 0.12950002], [0.06828233, 0.08300669, 0.13250303, 0.01143879, 0.12664085, 0.12737611, 0.03995854, 0.02448733, 0.05991018, 0.0690806 ], [0.10829905, 0.0833048 , 0.18772458, 0.01938165, 0.03967254, 0.03063796, 0.12404668, 0.04810838, 0.0885677 , 0.04642443], [0.03019353, 0.02096386, 0.1246585 , 0.02624547, 0.03733743, 0.07003299, 0.15593352, 0.01844271, 0.1591532 , 0.01825224], [0.18546165, 0.08901222, 0.18551894, 0.11487999, 0.14041038, 0.00744305, 0.05371431, 0.02282927, 0.05624673, 0.02255028], [0.06039543, 0.07868212, 0.01218371, 0.13152315, 0.10761619, 0.05040616, 0.09938195, 0.01784238, 0.10939112, 0.1765038 ], [0.06050668, 0.1267631 , 0.02503273, 0.13605401, 0.0549677 , 0.03479404, 0.11139803, 0.00381908, 0.15744288, 0.00089182], [0.12873957, 0.05128336, 0.13963744, 0.18275114, 0.04724637, 0.10943116, 0.11244817, 0.10868939, 0.04237051, 0.18095826]] and end-time = 10.
B. Baseline = [8.34044009e-02, 1.44064899e-01, 2.28749635e-05, 6.04665145e-02, 2.93511782e-02, 1.84677190e-02, 3.72520423e-02, 6.91121454e-02, 7.93534948e-02, 1.07763347e-01], decay = 2.5, adjacency = [[0., 0., 0.03514877, 0.15096311, 0., 0.11526461, 0.07174169, 0.09604815, 0.02413487, 0.], [0., 0., 0., 0., 0.15066599, 0.15379789, 0.01462053, 0.00671417, 0., 0.], [0.01690747, 0.07239546, 0.16467728, 0., 0.11894528, 0., 0.11802102, 0.14348615, 0.00314406, 0.12896239], [0.17000181, 0., 0., 0., 0., 0., 0., 0.0504772, 0.04947341, 0.], [0., 0.11670321, 0.03638242, 0., 0., 0.00917392, 0., 0.0252251, 0.10131151, 0.1203002], [0., 0.07118317, 0.11937903, 0.07120436, 0., 0., 0., 0.08851807, 0.16239168, 0.10083865], [0., 0.02363421, 0.02394394, 0.1388041, 0.06836732, 0.02842716, 0.15945428, 0., 0., 0.12481123], [0.15185513, 0., 0.1290996, 0., 0., 0.15401787, 0., 0.16587219, 0.11405672, 0.10687992], [0., 0., 0.07734744, 0.09943488, 0.07016556, 0.04074891, 0., 0., 0.00049346, 0.10609756], [0.05615574, 0.09061013, 0.15230831, 0.06142066, 0.15619243, 0.10716606, 0.00271994, 0.15978585, 0.11877677, 0.17145652]] and end-time = 20.
C. baseline = [0.08719898, 0.00518525, 0.1099325 , 0.08706448, 0.08407356, 0.06606696, 0.04092973, 0.12385419, 0.05993093, 0.05336546], decay = 2.5, adjacency = [[0. , 0.09623737, 0. , 0. , 0. , 0.14283231, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0.14434229, 0.10548788, 0. , 0. ], [0. , 0. , 0.16178088, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0.14554646, 0. , 0. , 0. , 0. , 0.09531432, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.04899491], [0. , 0. , 0. , 0.07048057, 0. , 0.07546073, 0. , 0. , 0. , 0. ], [0.05697369, 0. , 0. , 0. , 0. , 0. , 0.11709833, 0. , 0.03100542, 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0.04016475, 0. , 0.10768499, 0.06297179]] and end-time = 20.
D. Baseline = [0.11015958, 0.14162956, 0.05818095, 0.10216552, 0.17858939, 0.17925862, 0.02511706, 0.04144858, 0.01029344, 0.08816197], decay = [[8., 9., 2., 7., 3., 3., 2., 4., 6., 9.], [2., 9., 8., 9., 2., 1., 6., 5., 2., 6.], [5., 8., 7., 1., 1., 3., 5., 6., 9., 9.], [8., 6., 2., 2., 2., 6., 6., 8., 5., 4.], [1., 1., 1., 1., 3., 3., 8., 1., 6., 1.], [2., 5., 2., 3., 3., 5., 9., 1., 7., 1.], [5., 2., 6., 2., 9., 9., 8., 1., 1., 2.], [8., 9., 8., 5., 1., 1., 5., 4., 1., 9.], [3., 8., 3., 2., 4., 3., 5., 2., 3., 3.], [8., 4., 5., 2., 7., 8., 2., 1., 1., 6.]], adjacency = [[0. , 0.14343731, 0. , 0.11247478, 0. , 0. , 0. , 0.09416725, 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15805746, 0. , 0.08262718], [0. , 0. , 0.03018515, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0.11090719, 0.08158544, 0. , 0.11109462, 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.14264684, 0. , 0. , 0.11786607, 0. , 0. , 0.01101593, 0. ], [0.04954495, 0. , 0. , 0. , 0. , 0.12385743, 0. , 0. , 0.02375575, 0.05345351], [0. , 0.14941748, 0.02618691, 0. , 0.13608937, 0. , 0.06263167, 0. , 0.04097688, 0.14101171], [0. , 0.11902986, 0. , 0.04889382, 0. , 0. , 0.01569298, 0.03678315, 0. , 0. ], [0.05359555, 0. , 0. , 0.09188512, 0. , 0.14255311, 0. , 0. , 0. , 0. ], [0.12792415, 0.05843994, 0.16156482, 0.11931973, 0. , 0.00774966, 0.00947755, 0. , 0. , 0. ]], and end-time = 10.
8https://x-datainitiative.github.io/tick/modules/generated/tick.hawkes.SimuHawkesExpKernels.html
E. Baseline = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2], decay = [[9., 4., 9., 9., 1., 6., 4., 6., 8., 7.], [1., 5., 8., 9., 2., 7., 3., 3., 2., 4.], [6., 9., 2., 9., 8., 9., 2., 1., 6., 5.], [2., 6., 5., 8., 7., 1., 1., 3., 5., 6.], [9., 9., 8., 6., 2., 2., 2., 6., 6., 8.], [5., 4., 1., 1., 1., 1., 3., 3., 8., 1.], [6., 1., 2., 5., 2., 3., 3., 5., 9., 1.], [7., 1., 5., 2., 6., 2., 9., 9., 8., 1.], [1., 2., 8., 9., 8., 5., 1., 1., 5., 4.], [1., 9., 3., 8., 3., 2., 4., 3., 5., 2.]], adjacency= [[0.02186539, 0.09356695, 0. , 0.16101355, 0.11527002, 0.09149395, 0. , 0. , 0.15672219, 0. ], [0. , 0. , 0.14241135, 0. , 0.11167029, 0. , 0. , 0. , 0.0934937 , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15692693, 0. ], [0.08203618, 0. , 0. , 0.02996925, 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.11011391, 0.08100189, 0. , 0.11029999, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0.14162654, 0. , 0. , 0.11702301, 0. , 0. , 0.01093714], [0. , 0.04919057, 0. , 0. , 0. , 0. , 0.12297152, 0. , 0. , 0.02358583], [0.05307117, 0. , 0.14834875, 0.02599961, 0. , 0.13511597, 0. , 0.06218368, 0. , 0.04068378], [0.1400031 , 0. , 0.11817848, 0. , 0.0485441 , 0. , 0. , 0.01558073, 0.03652006, 0. ], [0. , 0.05321219, 0. , 0. , 0.0912279 , 0. , 0.14153347, 0. , 0. , 0. ]] and end-time = 50.
F. Baseline = [0.21736198, 0.11134775, 0.16980704, 0.33791045, 0.00188754, 0.04862765, 0.26829963, 0.3303411 , 0.05468264, 0.23003733], decay = [[5., 1., 7., 3., 5., 2., 6., 4., 5., 5.], [4., 8., 2., 2., 8., 8., 1., 3., 4., 3.], [6., 9., 2., 1., 8., 7., 3., 1., 9., 3.], [6., 2., 9., 2., 6., 5., 3., 9., 4., 6.], [1., 4., 7., 4., 5., 8., 7., 4., 1., 5.], [5., 6., 8., 7., 7., 3., 5., 3., 8., 2.], [7., 7., 1., 8., 3., 4., 6., 5., 3., 5.], [4., 8., 1., 1., 6., 7., 7., 6., 7., 5.], [8., 4., 3., 4., 9., 8., 2., 6., 4., 1.], [7., 3., 4., 5., 9., 9., 6., 3., 8., 6.]], adjacency = [[0.15693854, 0.04896059, 0.0400508 , 0. , 0. , 0.13021228, 0. , 0.10699903, 0. , 0.15329807], [0.0784283 , 0. , 0. , 0. , 0. , 0. , 0.00310706, 0.0090892 , 0.07758874, 0. ], [0.01672489, 0. , 0. , 0.07851303, 0. , 0. , 0.12848331, 0.08859293, 0. , 0. ], [0.09984995, 0. , 0. , 0.10541925, 0. , 0.08032527, 0. , 0. , 0. , 0. ], [0.14642469, 0.06629365, 0. , 0. , 0. , 0. , 0.11891738, 0.04166225, 0.09808829, 0.17638655], [0.00976324, 0.1100343 , 0.02003261, 0. , 0. , 0.05993539, 0.09739541, 0. , 0. , 0. ], [0. , 0. , 0. , 0.04672133, 0.16916 , 0. , 0.17341419, 0.12078975, 0.14441602, 0. ], [0. , 0.17305542, 0.06927975, 0. , 0.03408974, 0. , 0.08457162, 0.03787486, 0.10863292, 0. ], [0.07186225, 0.05760593, 0. , 0. , 0.08042031, 0.04403479, 0.1033595 , 0.17046747, 0. , 0.05083523], [0. , 0.10029222, 0. , 0.1022067 , 0. , 0. , 0.0588527 , 0. , 0.03530513, 0. ]], and end-time = 40.
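For reference, the following is a minimal sketch of how such datasets can be simulated with tick's SimuHawkesExpKernels class (the library referenced in footnote 8). The parameter values below are small placeholders rather than the exact matrices listed above:

```python
import numpy as np
from tick.hawkes import SimuHawkesExpKernels

n = 10
baseline = np.full(n, 0.1)  # one baseline rate per event type
# Infectivity kept small so the spectral radius stays below 1 (stability).
adjacency = np.random.RandomState(0).uniform(0.0, 0.15, (n, n))
decays = 2.5  # shared exponential decay, as in models A-C

sim = SimuHawkesExpKernels(adjacency=adjacency, decays=decays,
                           baseline=baseline, end_time=10,
                           seed=0, verbose=False)
sim.simulate()
# sim.timestamps is a list of n arrays: the event times of each event type.
```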
PGEM. We implement the PGEM generator (Bhattacharjya et al., 2018) to generate 5-dimensional event datasets governed by the PGEM dynamics. The parameters are the conditional intensities (lambdas) for each event type given its parental states, the parental configuration (parents), the windows for each parental state (windows) and the end time. We describe these parameters for models A, B, C, D and E in our study; a minimal sampler sketch follows the parameter listings below. End time is 100 across all models.
A has the following parameters: 'parents': {'A': [], 'B': [], 'C': ['B'], 'D': ['A', 'B'], 'E': ['C']}, 'windows': {'A': [], 'B': [], 'C': [15], 'D': [15, 30], 'E': [15]}, 'lambdas': {'A': {(): 0.2}, 'B': {(): 0.05}, 'C': {(0,): 0.2, (1,): 0.3}, 'D': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2}, 'E': {(0,): 0.1, (1,): 0.3}}.
B has the following parameters: 'parents': {'A': ['B'], 'B': ['B'], 'C': ['B'], 'D': ['A'], 'E': ['C']}, 'windows': {'A': [15], 'B': [30], 'C': [15], 'D': [30], 'E': [30]}, 'lambdas': {'A': {(0,): 0.3, (1,): 0.2}, 'B': {(0,): 0.2, (1,): 0.4}, 'C': {(0,): 0.4, (1,): 0.1}, 'D': {(0,): 0.05, (1,): 0.2}, 'E': {(0,): 0.1, (1,): 0.3}}.
C has the following parameters: 'parents': {'A': ['B', 'D'], 'B': [], 'C': ['B', 'E'], 'D': ['B'], 'E': ['B']}, 'windows': {'A': [15, 30], 'B': [], 'C': [15, 30], 'D': [30], 'E': [30]}, 'lambdas': {'A': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2}, 'B': {(): 0.2}, 'C': {(0, 0): 0.2, (0, 1): 0.05, (1, 0): 0.4, (1, 1): 0.3}, 'D': {(0,): 0.1, (1,): 0.2}, 'E': {(0,): 0.1, (1,): 0.4}}.
D has the following parameters: 'parents': {'A': ['B'], 'B': ['C'], 'C': ['A'], 'D': ['A', 'B'], 'E': ['B', 'C']}, 'windows': {'A': [15], 'B': [30], 'C': [15], 'D': [15, 30], 'E': [30, 15]}, 'lambdas': {'A': {(0,): 0.05, (1,): 0.2}, 'B': {(0,): 0.1, (1,): 0.3}, 'C': {(0,): 0.4, (1,): 0.2}, 'D': {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.05, (1, 1): 0.2}, 'E': {(0, 0): 0.1, (0, 1): 0.02, (1, 0): 0.4, (1, 1): 0.1}}.
E has the following parameters: 'parents': {'A': ['A'], 'B': ['A', 'C'], 'C': ['C'], 'D': ['A', 'E'], 'E': ['C', 'D']}, 'windows': {'A': [15], 'B': [30, 30], 'C': [15], 'D': [15, 30], 'E': [15, 30]}, 'lambdas': {'A': {(0,): 0.1, (1,): 0.3}, 'B': {(0, 0): 0.01, (0, 1): 0.05, (1, 0): 0.1, (1, 1): 0.5}, 'C': {(0,): 0.2, (1,): 0.4}, 'D': {(0, 0): 0.05, (0, 1): 0.02, (1, 0): 0.2, (1, 1): 0.1}, 'E': {(0, 0): 0.1, (0, 1): 0.01, (1, 0): 0.3, (1, 1): 0.1}}.
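Below is a minimal thinning-based sampler sketch consistent with these parameter dictionaries; it is our own illustrative implementation, not the authors' code. Because PGEM intensities are piecewise-constant given the parental state (whether each parent occurred within its window), Ogata-style thinning with the constant bound sum_e max_state lambda_e is valid:

```python
import random

def parental_state(history, parents, windows, t):
    """One bit per parent: 1 if that parent occurred in (t - window, t]."""
    return tuple(
        int(any(t - w < s <= t for (s, lbl) in history if lbl == p))
        for p, w in zip(parents, windows)
    )

def sample_pgem(parents, windows, lambdas, end_time, seed=0):
    """Thinning sampler; returns a sorted list of (time, label) events."""
    rng = random.Random(seed)
    labels = list(lambdas)
    lam_bound = sum(max(lambdas[e].values()) for e in labels)  # dominating rate
    history, t = [], 0.0
    while True:
        t += rng.expovariate(lam_bound)          # next candidate epoch
        if t > end_time:
            return history
        rates = [lambdas[e][parental_state(history, parents[e], windows[e], t)]
                 for e in labels]
        u, acc = rng.uniform(0.0, lam_bound), 0.0
        for e, r in zip(labels, rates):          # accept with prob sum(rates)/bound
            acc += r
            if u < acc:
                history.append((t, e))
                break
```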
A.2 REAL DATASETS
Cosmetics and Electronics contain user-level online transactions in a cosmetics store and an electronics store, respectively. While the original cosmetics dataset from Kaggle contains multiple months of transactions, we used the one from Dec 2019. Similarly, we also used the electronics dataset from Dec 2019. Both share the same four types of events: 'view', 'cart', 'remove-from-cart' and 'purchase'. The two datasets record transaction events in seconds. To optimize computation for transformer models, we filtered out sequences longer than 300 events or shorter than 30 and scaled the timestamps into [0, 1] to avoid numerical issues.
DeFi-Mainnet and DeFi-Polygon. DeFi-Mainnet is built on the more widely used Ethereum blockchain. Polygon is a scalable sidechain of Ethereum that allows for much faster and lower-fee transactions than the original Ethereum blockchain. The difference in fee structure produces quite different dynamics in the two AAVE lending protocols. DeFi-Polygon has many more users and transactions per user, but much less total value locked, than DeFi-Mainnet. The Polygon users are much more likely to engage in risky but potentially profitable "Yield Farming" transactions. Foundation methods would be very useful in modeling the many other lending protocols in AAVE, or other lending platforms which are new or less popular and thus have fewer transactions. The 6 types of actions a user performs are: 'borrow', 'collateral', 'deposit', 'liquidation', 'redeem', 'repay'. The original timestamps are recorded as Unix timestamps. We filtered out sequences longer than 300 events or shorter than 30 and scaled the timestamps into [0, 1] to avoid numerical issues.
ACLED-India and ACLED-Bangladesh contain sequences in which each sequence tracks an actor involved in armed conflict events (i.e., riots and protests) in the respective country. The former covers events from 2016 to 2022, the latter 2010-2021. The unit of each timestamp is days. There are 6 types of events in ACLED-India: 'Battles', 'Explosions/Remote violence', 'Riots', 'Violence against civilians', 'Protests', and 'Strategic developments'. The 4 overlapping types in ACLED-Bangladesh are 'Battles', 'Explosions/Remote violence', 'Riots', and 'Violence against civilians'. We filtered out sequences longer than 300 events or shorter than 2 and scaled the timestamps into [0, 1] to avoid numerical issues.
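All three dataset pairs share this length-filtering and time-scaling preprocessing. A minimal sketch follows, assuming each sequence is a list of (timestamp, label) pairs; the per-sequence min-max scaling is our assumption, since the text only states the target range [0, 1]:

```python
def preprocess(sequences, min_len=30, max_len=300):
    """Drop too-short/too-long sequences, then rescale times into [0, 1]."""
    out = []
    for seq in sequences:
        if not (min_len <= len(seq) <= max_len):
            continue
        t0, t1 = seq[0][0], seq[-1][0]
        span = (t1 - t0) or 1.0          # guard against degenerate spans
        out.append([((t - t0) / span, y) for (t, y) in seq])
    return out
```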
A.3 MODEL IMPLEMENTATION AND (PRE)TRAINING
Pretraining. Our pre-training model adapts code from Zuo et al. (2020)9. A full repo will be released upon acceptance. The procedure is fully described by Algorithm 1. We train our model via stochastic gradient descent, using the Adam optimizer (Kingma & Ba, 2014). The default transformer architecture we employed for pre-training is the following: the number of blocks for the multi-headed self-attention module is 4; the dimension of the value vector after attention has been applied is 512; the number of attention heads is 4; the dimension of the hidden layer of the feed-forward neural network is 1024; the dimension of the value vector is 512; the dimension of the key vector is 512; dropout is 0.1. We train for 100 epochs with a learning rate of 0.0001. γ is set to 1 for all experiments other than the one on ACLED-India, where we use 10.
9https://github.com/SimiaoZuo/Transformer-Hawkes-Process/tree/master/transformer
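For concreteness, the defaults listed above can be collected into a single configuration dictionary (a sketch; the field names are ours, the values transcribe the stated defaults):

```python
# Hypothetical field names; values are the defaults listed above.
PRETRAIN_CONFIG = dict(
    n_blocks=4,      # multi-headed self-attention blocks
    d_model=512,     # dimension of the post-attention value vector
    n_heads=4,
    d_ff=1024,       # hidden layer of the pointwise feed-forward network
    d_value=512,
    d_key=512,
    dropout=0.1,
    epochs=100,
    lr=1e-4,
    gamma=1.0,       # set to 10 for ACLED-India
)
```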
Fine-tuning. The fine-tuning model consists of a feed-forward network with 3 hidden layers of dimension 512. We train with the Adam optimizer with a learning rate of 0.001 for 100 epochs. α is set to 0.01 for all experiments.
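A PyTorch sketch of such a fine-tuning head is shown below, assuming the label logits and the time prediction are read off a single shared output layer (the exact output arrangement is not specified in the text):

```python
import torch.nn as nn

class FineTuneHead(nn.Module):
    """Shared MLP head: 3 hidden layers of width 512, predicting next-label
    logits (M values) and the next time (1 value) from each representation."""
    def __init__(self, d, M, width=512):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.out = nn.Linear(width, M + 1)   # M label logits + 1 time value

    def forward(self, h):
        z = self.out(self.body(h))
        return z[..., :-1], z[..., -1]       # (logits, predicted time)
```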
PE+TE. The combined TE + PE improves optimization even when trained with only 2 samples. Using TE alone results in compromised optimization (see Figure 3).
Algorithm 1: Pre-training of Event-former
Given: dataset S with D sequences, each of length d_l, {(t_i, y_i)}_{i=1}^{d_l}; batch size b
Insert void epochs:
S' = []
for d ← 1 to D do
    seq' = []
    for i ← 1 to d_l − 1 do
        t'_i ∼ Unif(t_i, t_{i+1})
        seq'.append((t'_i, null))
    end
    seq_new = Merge(seq, seq')
    S'.append(seq_new)
end
Mask to obtain S'_m:
for d ← 1 to D do
    for i ← 1 to d'_l − 1 do
        mask ∼ Bern(0.15)
        if mask == 1 then
            (t'_i, y'_i) ← (0, null)
        end
    end
end
split(S'_m) := S'_{m,tr} = {S'_{m,k}}_{k=1}^{K}, S'_{m,dev}
for epoch ← 1 to N do
    for iteration ← 1 to ⌈K/b⌉ do
        Sample a batch of sequences B' from S'_{m,tr}
        Compute L_event(B') (Eq. 5)
        Back-propagate with gradient ∇_{θ,ϕ} L_event(B') (θ: transformer model parameters; ϕ: weights and biases for regression and classification)
        Update the network parameters θ, ϕ
    end
    Evaluate L_event(S'_{m,dev}); stop training if not improving for 5 epochs
end
Return: optimal parameters ϕ*, θ*
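The void-insertion and masking steps of Algorithm 1 can be sketched in a few lines of Python; here sequences are lists of (time, label) tuples, and masked epochs are replaced by (0, '[MASK]') following Section 3.2 (the algorithm writes null; either placeholder works):

```python
import random

def insert_void_events(seq, rng):
    """Insert one void epoch, sampled uniformly, between consecutive events."""
    out = []
    for (t1, y1), (t2, _y2) in zip(seq, seq[1:]):
        out.append((t1, y1))
        out.append((rng.uniform(t1, t2), "null"))
    out.append(seq[-1])
    return out

def mask_events(seq, rng, p=0.15):
    """Independently mask epochs: time -> 0.0, label -> '[MASK]'."""
    return [(0.0, "[MASK]") if rng.random() < p else (t, y) for (t, y) in seq]

rng = random.Random(0)
masked = mask_events(insert_void_events([(0.1, "A"), (0.4, "B"), (0.9, "A")], rng), rng)
```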
A.4 BASELINES MODELS AND IMPLEMENTATION
T-RMTPP and T-ERPP. The original RMTPP and ERPP models are LSTM based. The transformer architecture we used for T-RMTPP and T-ERPP is adapted from Zuo et al. (2020). We used the recommended set of parameters for training these two models.
T-LNM. The original LNM is RNN/LSTM/GRU based. The transformer architecture we used for T-LNM is adapted from Zuo et al. (2020). We used 64 mixtures of lognormal components to model the density of log inter-event times.
THP. We used the recommended set of parameters to train the model as-is.
A.5 PROOF OF THEOREM 1: TRANSFORMER WITH COMBINED TEMPORAL ENCODING AND POSITION ENCODING
Consider a general type of transformer architecture described by Yun et al. (2019). Our proof that a transformer with combined temporal encoding and position encoding is a universal approximator for any continuous sequence-to-sequence function with compact support follows similarly to the proof for a transformer with position encoding (Theorem 3 in Yun et al. (2019)). Without loss of generality, consider a sequence with timestamps {t_1, t_2, ..., t_n} and let t_i be integer-valued for i ∈ {1, 2, ..., n}, with t_i < t_j for all i < j. If the t_i are decimal-valued, we multiply by a constant to transform each given timestamp into an integer without affecting the dynamics of the event sequence. We choose a d-dimensional temporal encoding for the n event epochs to be the following:
T = \begin{bmatrix} 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \end{bmatrix}

Similarly, we choose a d-dimensional position encoding for the n event epochs to be the following:

P = \begin{bmatrix} 0 & 1 & \cdots & n \\ 0 & 1 & \cdots & n \\ \vdots & \vdots & & \vdots \\ 0 & 1 & \cdots & n \end{bmatrix}

The combined encoding is:

PE + TE = \begin{bmatrix} 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \end{bmatrix}

The strict temporal order of the timestamps gives t_i - t_1 + 1 < t_{i+1} - t_1 + 1 for all i ∈ {1, 2, ..., n-1}. This guarantees that in every row the coordinates are monotonically increasing, so the input values can be partitioned into cubes. The rest of the proof follows directly from the proof of Theorem 3 in Yun et al. (2019) by replacing n with t_n and performing quantization by feed-forward layers, contextual mapping by attention layers, and function value mapping by feed-forward layers.
1. What is the focus and contribution of the paper?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's methodology or claims?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper is out of my scope and therefore I just give an empty review. Please see the reviews from other reviewers to judge the quality of the paper.
Strengths And Weaknesses
This paper is out of my scope and therefore I just give an empty review. Please see the reviews from other reviewers to judge the quality of the paper.
Clarity, Quality, Novelty And Reproducibility
This paper is out of my scope and therefore I just give an empty review. Please see the reviews from other reviewers to judge the quality of the paper.
ICLR
Title
Event-former: A Self-supervised Learning Paradigm for Temporal Point Processes
Abstract
Self-supervision is one of the hallmarks of representation learning in the increasingly popular suite of foundation models including large language models such as BERT and GPT-3, but it has not been pursued in the context of multivariate event streams, to the best of our knowledge. We introduce a new paradigm for self-supervised learning for temporal point processes using a transformer encoder. Specifically, we design a novel pre-training strategy for the encoder where we not only mask random event epochs but also insert randomly sampled 'void' epochs where an event does not occur; this differs from the typical discrete-time pretext tasks such as word-masking in BERT but expands the effectiveness of masking to better capture continuous-time dynamics. The pre-trained model can subsequently be fine-tuned on a potentially much smaller event dataset, similar to other foundation models. We demonstrate the effectiveness of our proposed paradigm on the next-event prediction task using synthetic datasets and 3 real applications, observing a relative performance boost of up to 15% compared to state-of-the-art models.
1 INTRODUCTION
Transfer learning occurs when a model is pre-trained on a task, such as classification on a large labelled dataset such as ImageNet, and the model's 'knowledge' is then applied to another task, such as classification on medical images. In the current era of AI, transfer in domains such as natural language processing and image processing often leverages self-supervised learning, where pre-training for representation learning is done using unlabeled data. Although the fundamental ideas of transfer are not new, there is a clear emerging paradigm around foundation models (Bommasani et al., 2021), such as BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020), which are trained with diverse unlabeled data at scale using self-supervision. These pre-trained models are then fine-tuned and adapted to different downstream tasks that respectively come with limited labeled data. Recent progress has been possible primarily due to improvements in hardware, development of the attention mechanism (Vaswani et al., 2017), and availability of substantial unlabeled training data.
We extend and pursue self-supervised learning in the context of multivariate event streams, i.e. data involving irregular occurrences of different types of events. Event stream datasets are widely available across domains, for instance in the form of social network interactions, customer transactions, system logs, financial events, health episodes, etc. It is well known that temporal point processes provide a sound mathematical framework for modeling such datasets (Daley & Jones, 2003). In this paper, we introduce a new paradigm for self-supervised learning for temporal point processes using a transformer encoder. Although self-supervised learning has recently been explored for classical time series data (Zerveas et al., 2021), to the best of our knowledge, self-supervision has not yet been explored in the context of point processes.
Neural models for temporal point processes (e.g. Du et al., 2016; Mei & Eisner, 2016; Xiao et al., 2017) have advanced the state of the art in event modeling, particularly for the task of event prediction. The typical approach in this line of work is to train a neural network on a large amount of event data. Our proposed paradigm differs from current standard practices by taking a transfer learning approach analogous to foundation models: to first pre-train a neural model on a (potentially) large event dataset and then fine-tune the model for prediction on a limited event dataset.
Transfer learning with event models has many potential applications. For instance, there may be abundant data from electronic health records containing information around a particular patient population; this data could potentially be leveraged for another population whose data is either unavailable or harder to obtain. This is an issue relevant to health equity since there may be data related concerns for some under represented populations. Similarly, financial event data from an industrial sector could potentially be transferred to another. Electronic commerce is yet another illustrative domain where transfer learning techniques may help to transfer purchase behaviors across a large pool of user populations and/or product types.
Although there is some related work around multi-task learning with event streams, such as through deploying hierarchical Gaussian process models (Lian et al., 2015) or time-scale graphical event models (Monvoisin & Leray, 2019), this line of research typically considers learning by pooling together disparate data from the same population. In contrast, we tackle the more ambitious effort of transferring from one or multiple event datasets to another. Specifically, we consider the typical setting of homogeneous transfer learning (Zhuang et al., 2021), where all datasets that get pooled for pre-training involve the same set of event types. This allows for potentially leveraging different datasets even though there may be realistic variations with respect to parametric or structural dependencies present in the corresponding event streams across each dataset.
Contributions: We make the following major contributions:
• We introduce a self-supervised paradigm for transfer learning in temporal point processes. A crucial innovation is to explicitly incorporate information about the absence of events, which improves the modeling of temporal dynamics without burdening training efficiency.
• We propose a masked event model, which is a new way to derive a pretext task for self-supervision targets in transformer models for event streams.
• Our empirical evaluation demonstrates improved transfer learning performance for event prediction on synthetic and real datasets relative to state of the art transformer event models.
2 BACKGROUND AND RELATED WORK
2.1 TEMPORAL POINT PROCESSES
Multivariate temporal point processes (MTPP) are elegant mathematical models for event streams where event types/labels from some discrete set occur in continuous time (Daley & Jones, 2003). A multi-dimensional MTPP generates sequences with timestamps and associated labels of the form S = \{(t_i, y_i)\}_{i=1}^{n}, where t_i is the time of occurrence of the ith event and y_i is its label. The cardinality of the label set L is M. A strictly temporally ordered event stream assumes events observed within a period [0, T], with t_i ∈ [0, T] for all i ∈ \{1, 2, ..., n\}. MTPPs are characterized by conditional intensity functions for each label representing the rate at which it occurs at any time t, \lambda_e(t) = \lim_{\Delta t \to 0} \mathbb{E}[N_e(t+\Delta t \mid h_t) - N_e(t \mid h_t)] / \Delta t, where N_e(t \mid h_t) counts the number of occurrences of label e up to time t given the historical occurrences (or simply history) h_t.
Several MTPP models such as multivariate versions of the classic Hawkes process (Hawkes, 1971) and piecewise-constant models (Gunawardana & Meek, 2016; Bhattacharjya et al., 2018) assume some parametric form of the conditional intensity function. Neural MTPPs are more recent variants that capture the underlying dynamics using neural networks. Recent years have witnessed rising popularity of neural MTPPs due to their state-of-the-art performance on benchmark datasets for predictive tasks (Du et al., 2016; Mei & Eisner, 2016; Xiao et al., 2017; Omi et al., 2019; Shchur et al., 2019; Zuo et al., 2020). A common training objective in neural MTPPs involves minimizing the negative log-likelihood. The log-likelihood of observing a sequence S is the sum of log-likelihood of events and non-events and can be computed as:
\log p(S) = \sum_{i=1}^{n} \sum_{e=1}^{M} \mathbb{1}[y_i = e] \log \lambda_e(t_i) - \int_{0}^{T} \sum_{e=1}^{M} \lambda_e(t)\, dt \quad (1)
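As a concrete illustration, Eq. 1 can be evaluated for any given conditional intensity by Monte Carlo approximation of the integral term. The sketch below is ours and assumes a user-supplied function intensity(e, t, seq) that conditions only on events of seq strictly before t:

```python
import numpy as np

def log_likelihood(seq, label_set, intensity, T, n_mc=2000, seed=0):
    """Eq. 1 with a Monte Carlo estimate of the compensator integral.
    seq: list of (time, label) pairs sorted by time."""
    rng = np.random.default_rng(seed)
    ll = sum(np.log(intensity(y, t, seq)) for (t, y) in seq)  # event term
    u = rng.uniform(0.0, T, n_mc)                             # MC points
    integral = T * np.mean([sum(intensity(e, t, seq) for e in label_set)
                            for t in u])
    return ll - integral
```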
Many of these neural MTPPs assume some form of evolution dynamics between events in order to compute the second term in Eq. 1, such as recurrent neural network (RNN) evolution (Xiao et al.,
2017), exponential decay (Mei & Eisner, 2016), intensity-free modeling of the integral (Omi et al., 2019), or the usage of explicit epochs indicating absence of events (Gao et al., 2020).
2.2 TRANSFORMERS FOR EVENT DATA
Attention (Xiao et al., 2019) and transformer-based event models have shown promising results in recent years, including the self-attentive Hawkes process (SAHP) (Zhang et al., 2020), transformer Hawkes process (THP) (Zuo et al., 2020) and attentive neural point process (ANPP) (Gu, 2021). The self-attention mechanism, in our context, relates different event instances of a single stream in order to compute a representation of the stream. The architecture of transformers for MTPPs generally consists of an embedding layer and a self-attention layer. In the transformer Hawkes process (THP) (Zuo et al., 2020), for example, the embedding layer includes a time embedding and event-type embedding. Time embedding is achieved through:
[z(t_j)]_i = \begin{cases} \cos\big(t_j / 10000^{\frac{i-1}{d}}\big) & \text{if } i \text{ is odd} \\ \sin\big(t_j / 10000^{\frac{i}{d}}\big) & \text{if } i \text{ is even} \end{cases} \quad (2)
where t_j is a timestamp and d is the dimension of the encoding. The time embedding and one-hot encoded types are combined to form the embedded input X. For a sequence S = \{(t_i, y_i)\}_{i=1}^{L}, the time embedding z_i for each instance is specified in Eq. 2, and for the entire sequence of length L the embedding is Z ∈ R^{M×L}. Type embeddings are obtained through the product of a trainable embedding matrix U ∈ R^{M×K} and the one-hot encoded vectors y_i for all type instances, i.e. X = (UY + Z)^T where Y = [y_1, y_2, ..., y_L]. Q, K, V are the query, key and value matrices; they are linear transformations of X, i.e. Q = XW^Q, K = XW^K, V = XW^V, where W^Q, W^K, W^V are trainable weights. The attention output C is computed by the following:
C = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{M_K}}\right) V = A_s V \quad (3)

where A_s denotes the attention score matrix. The output C is then fed into a pointwise feed-forward neural network (FFN) (commonly with a residual connection) to learn a high-level representation of the sequence for modeling the conditional intensity functions.
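A minimal NumPy sketch of Eq. 3 (single head, without the residual connection or FFN):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention on an embedded stream X of shape (L, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)             # attention score matrix A_s
    return A @ V                                   # attention output C
```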
2.3 SELF-SUPERVISION FOR SEQUENCE DATA
Sequence models such as RNNs have achieved much success in various applications, but more recent methods typically rely on transformer architectures (Vaswani et al., 2017) and the attention mechanism, especially in the popular application of natural language processing (NLP), where large-scale pre-trained models dominate. With fine-tuning on downstream tasks, these pre-trained models lead to sizable improvements over the previous state of the art. However, large-scale transformers are also bulky and resource-hungry, typically with billions of parameters (Brown et al., 2020), costing millions of USD to train (Floridi & Chiriatti, 2020).
Self-supervision is typically achieved in sequence models by deriving an effective pretext task that is trained through supervised learning with a masking strategy for indicating self-supervision targets. For example, in BERT, about 15% of the words are randomly masked using an independent Bernoulli model of masking, and replaced with a new [MASK] label or a random word. In some recent work on self-supervision for time series data (Zerveas et al., 2021), the masking is done to ensure longer lengths of masked values, all replaced with the value 0, to get geometrically distributed run-lengths of masked values. Here we introduce a novel pretext task with a masking strategy specially tailored for asynchronous event data in continuous time. As we show later through experimental ablation studies, straightforward application of prior discrete-time sequence-based masking strategies proves inadequate for event data, because the intensity rates of events that represent continuous-time dynamics may vary in general between any two consecutively observed events. This aspect distinguishes event streams from discrete-time data such as time series and has not been addressed by masking in standard temporal transformers.
3 A SELF-SUPERVISED LEARNING PARADIGM
We introduce a self-supervised modeling paradigm with a transformer-based architecture for multivariate event streams that we refer to as Event-former. In particular, we present a novel pretext
training task specific to event data to learn a suitable representation for event streams. Such a representation can then be used by a small feed forward network for fine-tuning on a sequential next event prediction task. A high-level figurative scheme is shown in Figure 1; we provide details in the subsections to follow.
Our proposed pre-training paradigm from Figure 1 (left) involves three major aspects that distinguish it from prior work: 1) injecting void events (which are formalized in the next paragraph) to improve the representation learning of event dynamics in continuous time, 2) an effective masking strategy that uses both positional and temporal encoding on the above augmented event stream with void events, and 3) forcing the attention mechanism to adhere to the temporal order of events. We explain each aspect below before prescribing the full pre-training scheme, followed by a brief explanation of the fine-tuning procedure as depicted in Figure 1 (right).
3.1 VOID EVENTS IN TRANSFORMERS
Recall that an observed event stream is of the form S = \{(t_i, y_i)\}_{i=1}^{n}, where t_i and y_i are the ith event's time stamp and label, respectively. We consider a modified stream where we inject a predetermined number of void events involving epochs where no event occurs; these are of the form (t'_i, null), where 'null' is a new label signifying the absence of an event occurrence. The modified stream is denoted S' = \{(t'_i, y'_i)\}_{i=1}^{n'}, where S ⊂ S' and y'_i ∈ L ∪ \{null\} for all i. The role of the void events is to provide additional information about the dynamics of the continuous-time process by explicitly indicating that no event occurs between two consecutively observed events.
Explicitly specifying selected epochs where events do not happen has been used previously in some related work; see for instance the notion of ‘fake epochs’ in Gao et al. (2020), which was originally developed for RNNs and helped boost the performance of a neural point process on a model fitting task. The main idea is that in point process models, the inter-event duration between two successive events is just as important as the event epochs themselves. This is seen from the integral terms in the conditional intensity based log-likelihood expression for event streams (see Eq. 1). Just as the internal hidden state of a recurrent neural network (such as an LSTM) only changes in discrete steps upon seeing the next token, the transformer based representations too behave in a similar manner. In reality, the conditional intensity rates can evolve continuously. Introducing the void epochs in the inter-event void space provides a convenient way to force the evolution of the transformer based internal representations inside the inter-event interval, and this in turn leads to improvement in both the pre-training and the fine-tuning steps for next event prediction related tasks. We thereby ad-
dress an inherent shortcoming of transformers for event datasets in a non-parametric manner, i.e. without the need of specific parametric or process assumptions such as in the transformer Hawkes process (Zuo et al., 2020). However, to use void events in transformers requires further adaptation, particularly during the training process where masking is also used. Next we present an effective approach for masking with void event labels.
3.2 MASKING STRATEGY & INPUT ENCODING
We consider a masked event model (MEM) pre-training device that operates on the modified event stream S′, where some events (t′i, y ′ i) are randomly masked for the task of prediction given history. Our masking approach is broadly similar to past work but specialized to our model. When an event epoch in the above expanded event stream S′ is masked, its timestamp is replaced with the value zero and its label is replaced with the value [MASK]. Further, for the choice of which tokens get masked, our model admits both the independent strategy used in BERT (Devlin et al., 2018) as well as the serially correlated temporal strategy used in time series (Zerveas et al., 2021). An ablation study shown later in Table 2 indicates that either of these strategies works well when combined with the proposed MEM model, and leads to improvements through transfer learning in both MSE and accuracy for predicting the next event time and label respectively. We also note that the results are worse without the proposed MEM model’s expansion of the event stream, i.e. without the injection of void event epochs. Our model differs from existing literature on masking in that both actual events and void events are admitted as candidates for masking. This combined approach leads to improved representations in experimental evaluation. The MEM model based representations are able to implicitly learn about both the event arrival rates due to masked learning with real epochs, as well as the inter-event empty spaces due to masked learning with void epochs.
In addition to the choice of masking strategy in transformer models, one also needs an encoding for the position information in the input sequence so that the uniqueness of each location is retained to some extent. Traditional positional encoding (PE) (Vaswani et al., 2017) used in transformers is not sufficient by itself for event stream data because events are associated with irregular time stamps, unlike natural language sequences. Similarly, temporal encoding (TE), such as proposed in prior work (Zuo et al., 2020), also proves inadequate by itself in our setting because our masking strategy replaces the time stamps of masked events with zero. Note that this would render indistinguishable any two distinct events (i.e. with distinct time stamps) of the same event type in the input event stream. As seen in Figure 3 in the Appendix, using TE alone leads to early plateauing of the loss function, and this is often a telltale signature of poor end-task performance. To address this issue, we propose the combined encoding strategy of using PE and TE together. We also show that the combined encoding strategy preserves the universal approximation results of standard transformers.
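A minimal NumPy sketch of the combined encoding follows, assuming the sinusoidal form of Eq. 2 for both components, with integer positions substituted for timestamps in the PE term:

```python
import numpy as np

def sinusoidal(vals, d):
    """Sinusoidal encoding (Eq. 2) of a 1-D array of timestamps or positions."""
    i = np.arange(d)
    exponent = np.where(i % 2 == 1, (i - 1) / d, i / d)
    angles = np.asarray(vals, dtype=float)[:, None] / np.power(10000.0, exponent)
    return np.where(i % 2 == 1, np.cos(angles), np.sin(angles))  # (L, d)

def combined_encoding(ts, d):
    """PE + TE: masked epochs (t = 0) stay distinguishable via their position."""
    return sinusoidal(ts, d) + sinusoidal(np.arange(len(ts)), d)
```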
Theorem 1 Transformers with combined PE and TE are universal approximators for any continuous sequence-to-sequence function with compact domain, i.e. they approximate any continuous function f : X → H with ε error w.r.t. the p-norm, where 1 ≤ p < ∞ and X, H ∈ R^{d×L}.
Please refer to the Appendix for a proof of the above result. Yun et al. (2019) establish that transformers with PE are universal approximators for any continuous sequence-to-sequence function with compact support (Theorem 3 in their paper), a result applicable to language sequences. The result above, however, applies specifically to event streams. More importantly, it separates two distinct event epoch encodings to (potentially) distinguish representations, and establishes the predictability and learnability of a transformer model (with a certain structure) for the MEM.
3.3 TEMPORAL UPPER TRIANGULAR ATTENTION
While masked language models such as BERT leverage contextual information from both prior and post tokens of interest, here we only consider prior tokens. This is because our main task of interest is event prediction given only the past, as typical in most real-world prediction problems, which prohibits us from using post token context. We apply an upper triangular attention so that a current event epoch only attends to prior events. In MEM, any representation in pre-training as shown in Figure 1 (left) for a masked event (regardless of whether the event is observed or void) only attends to history in the past. With these pieces that define the MEM model, we next describe the pre-training and fine-tuning steps that respectively produce and exploit the self-supervised representations.
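In PyTorch terms, the temporal upper triangular attention amounts to a boolean causal mask, e.g.:

```python
import torch

def causal_mask(L: int) -> torch.Tensor:
    """Upper-triangular attention mask: True entries are disallowed, so event i
    attends only to events j <= i (its own past)."""
    return torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)

# e.g. torch.nn.MultiheadAttention(d, n_heads)(x, x, x, attn_mask=causal_mask(L))
```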
3.4 PRE-TRAINING SCHEME
Pre-training using the MEM is conducted by first randomly injecting void events into event streams, masking some of the events, and then computing a self-supervised loss determined by predicting the masked events. In this fashion, the MEM is trained to not only predict the time and label of observed events, but also try to be as accurate as possible at determining when events do not happen. In the most general setting, suppose that the sequence of time stamps for void events, denoted τ , is randomly generated from some distribution P. A special case of this random injection is when exactly 1 void event is uniformly generated between each pair of consecutive events in S to create modified event stream S′. After randomly selecting a pre-determined percentage of events to mask, the loss for the self-supervised prediction task can be computed as:
\mathcal{L} = \mathbb{E}_{\tau \sim P}\big[\mathcal{L}_{event}(\hat{t}'_m, \hat{y}'_m;\, t'^{*}_m, y'^{*}_m)\big], \quad (4)
where m denotes the indices of the randomly selected masked events, similar conceptually to Devlin et al. (2018). The hat and star notation for t (y) refer to the model's predicted time (label) and the ground truth time (label), respectively. Note that Eq. 4 will in practice be challenging to optimize, due to the stochastic objective and the additional computational complexity of sampling and inserting void events between every two consecutive events in every event stream in a batch when performing stochastic gradient descent. The time complexity for such an insertion during training is O(KL), where K is the number of event streams and L is the maximum length, by merging the two sorted lists.
To reduce the computational cost and improve efficiency, we propose a practical solution by adopting a simpler but just as effective sampling strategy for void events. Specifically, we only sample void events once from the original dataset as an approximation, and then merge as a pre-processing step. Thus no additional computing cost occurs during training. Let N_M be the total number of masked event epochs. We use the following to measure the prediction loss for each masked event, whether it is observed or void:
\mathcal{L}_{event} = \frac{1}{N_M} \sum_{i=1}^{N_M} \mathrm{CE}\big(\mathrm{softmax}(H'_m W^y + b^y)_{i,:},\, y'^{*}_{m,i}\big) + \gamma\, \mathrm{MSE}\big((H'_m w^t + b^t)_i,\, t'^{*}_{m,i}\big), \quad (5)
where H'_m is the masked high-level representation from the transformer model of a modified stream, and W^y ∈ R^{d×M}, b^y ∈ R^{M}, w^t ∈ R^{d} and b^t ∈ R are trainable weights and biases for the label prediction cross-entropy (CE) loss and the time prediction mean square error (MSE). In addition, the index i in the above equation denotes a general instance of a masked event epoch, and i,: corresponds to the ith row of the output matrix. We use γ as the trade-off between the two loss terms. It is worth noting that we use only one hidden layer for masked event prediction; we avoid using deep feed-forward networks to force the transformer model to learn a high quality representation H'_m so that it facilitates the fine-tuning process for downstream tasks.
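A PyTorch sketch of Eq. 5, assuming the masked-position representations have already been gathered into a matrix:

```python
import torch
import torch.nn.functional as F

def masked_event_loss(H_m, y_true, t_true, W_y, b_y, w_t, b_t, gamma=1.0):
    """Eq. 5: cross-entropy on masked labels + gamma-weighted MSE on times.
    H_m: (N_M, d); y_true: (N_M,) long; t_true: (N_M,) float."""
    ce = F.cross_entropy(H_m @ W_y + b_y, y_true)   # label term
    mse = F.mse_loss(H_m @ w_t + b_t, t_true)       # time term
    return ce + gamma * mse
```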
3.5 FINE-TUNING
After pre-training, the MEM can then be applied to model any new event sequence S and be further fine-tuned to obtain a better representation Ĥ_i. Note that during fine-tuning we do not include void events, which simplifies the training steps and is compatible with any existing approach. Each learned representation is then fed into a small feed-forward neural network for downstream tasks involving event prediction. In other words, our model fine-tunes by consuming each individual event representation Ĥ_i and predicting the next label y_{i+1} as well as the time t_{i+1}. The power of this approach comes primarily from the conversion of sequential prediction into tabular regression and classification. For an event dataset with K event streams, each with length n_l, the loss in the fine-tuning step, L_pred, is the following:
\mathcal{L}_{pred} = \sum_{l=1}^{K} \sum_{i=1}^{n_l} \mathrm{CE}\big(\mathrm{softmax}(\mathrm{MLP}(\hat{H}^{l}_{i})),\, y^{l}_{i+1}\big) + \alpha\, \mathrm{MSE}\big(\mathrm{MLP}(\hat{H}^{l}_{i}),\, t^{l}_{i+1}\big) \quad (6)
where α is a similar trade-off between cross entropy and mean square error. Regression and classification share the same multi-layer neural network (MLP) for computational efficiency in our setting.
4 EXPERIMENTS
4.1 BASELINES
We use the following baselines for experiments. To focus attention on the potential benefits of self-supervision rather than the choice of neural architecture, we replace non-transformer architectures in baselines with a suitable counterpart transformer.
Recurrent Marked Temporal Point Process (Du et al., 2016) and Event Recurrent Point Process (Xiao et al., 2017). We replace the RNNs in both originally proposed models with transformers. We note that the original implementation1 only predicts the very last event (t_n, y_n) given prior events; for a fair comparison, we therefore modify the code to evaluate the next inter-event time d_i, where d_i = t_i − t_{i−1} for i ∈ {2, 3, ..., n}. Lognormal Mixture (Shchur et al., 2019). We replace the RNN with a transformer and take the expectation of the learned mixture model for next inter-event prediction.
Transformer Hawkes Process2 (Zuo et al., 2020). This model is representative of the current state-of-the-art for event sequence modeling. It already involves a transformer architecture and therefore does not need any modification.
We use the following acronyms for the afore mentioned models, where the prefix ‘T-’ clarifies that some of these are transformer-based extensions: T-RMTPP, T-ERPP, T-LNM and THP. Following Zuo et al. (2020), we evaluate model performance on next event time prediction with root mean square error and on next event label prediction with accuracy.
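The two evaluation metrics are straightforward; for completeness:

```python
import numpy as np

def rmse(t_pred, t_true):
    """Root mean square error for next event time prediction."""
    return float(np.sqrt(np.mean((np.asarray(t_pred) - np.asarray(t_true)) ** 2)))

def accuracy(y_pred, y_true):
    """Accuracy for next event label prediction."""
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))
```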
4.2 SYNTHETIC EXPERIMENTS
We conduct experiments using synthetic data generated from two representative parametric families of multivariate temporal point processes: multivariate Hawkes processes (Bacry et al., 2015) and proximal graphical event models (Bhattacharjya et al., 2018). We aim to pre-train a masked event model on a set of datasets and fine-tune the model on a different dataset for the event prediction task.
Hawkes-Exp Dynamics. We generate 400 samples each from 10-dimensional Hawkes’ process dynamics for 5 datasets (A, B, C, D, E) with different parameters and combine them to form a pre-training dataset. We further split each into train-dev sets 75-25 and use the dev set for hyperparameter selection. We also generate 5 folds of a dataset F with different parameters as the target; each fold contains 500 event sequences and is further split into train-dev-test 60-20-20 subsets. Final evaluation is performed on the test subsets.
PGEM Dynamics. We generate 500 samples each from the proximal graphical event model (PGEM) Bhattacharjya et al. (2018) generator with 4 datasets (A, B, C, D) of different parameters where each contains 5 event labels. We combine these to form the pre-training dataset. Similarly, we generate an additional 5 folds of a dataset E with different parameters as the target, each of which contains 500 event sequences. Each fold is further split into train-dev-test 60-20-20 subsets, and as before, final evaluation is performed on the test subsets.
Results. As shown in Table 1, Event-former achieves the best results for predicting both the next event time and event type as compared to all baselines. For the Hawkes-Exp generated data, it boosts prediction performance on average 4-5% compared to the best baseline result; on PGEM the increase is around 1-3%. The benefit of our approach, along with its efficacy, is its efficiency. While we only pre-train once, a typical fine-tuning procedure in this study involves a 20-fold smaller network: ∼1M trainable parameters compared to, say, the THP model with ∼20M trainable parameters at its recommended setting. This suggests our model learns a suitable representation of the event dynamics, especially for the Hawkes process. Figure 2 shows the T-SNE projection of learned representations of event data generated by the Hawkes-Exp model datasets A, B, C, D and E onto the 2D plane. Each model generates a unique fragment segment that somewhat overlaps with another model. The linear and curvaceous segment pattern observed here is not uncommon when projecting time-series embeddings onto a 2D plane (Wong & Chung, 2019). This overlapping of the representations from
1 https://github.com/woshiyyya/ERPP-RMTPP
2 https://github.com/SimiaoZuo/Transformer-Hawkes-Process
pre-training data (A, B, C, D and E) with the target data F provides a visual explanation for how various datasets compare in terms of the learned representation.
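The projection in Figure 2 can be reproduced with scikit-learn's TSNE; the matrix H below is a placeholder for the learned stream representations:

```python
import numpy as np
from sklearn.manifold import TSNE

H = np.random.rand(100, 512)   # placeholder for learned stream representations
emb_2d = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(H)
```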
4.2.1 ABLATIONAL MASKING EXPERIMENTS
A typical deployment of MEM involves 3 components: inserted random void epochs, random selection of masks and masking fraction. Our default setting is through the use of void events and uniformly randomly selecting 15% for masking during pre-training. We perform 3 ablation studies on the synthetic datasets: 1) void vs. no void events, 2) geometric vs. random mask and 3) mask fraction. Ablation 1 evaluates the effect of injected random void epochs in MEM on prediction. Clearly from Table 2, we notice a drop of type accuracy and increase of RMSE in time prediction for both cases; in particular, the predictive performance deteriorates to below the baselines on Hawkes-Exp. The injection of void epochs is justified for producing competitive results in random masking. Ablation 2 compares the impact of two masking strategies: geometric and random. We employ the former from Zerveas et al.
(2021), along with inserted void epochs. Geometric masks produce slightly deteriorated results particularly on PGEM data, suggesting masking consecutive segments may not aid in learning dynamics in the continuous-time setting. Ablation 3 compares the choice of fraction of randomly masked epochs. In general, we find no significant difference between 15% and 30% for event prediction.
4.3 REAL APPLICATIONS
Datasets. We perform transfer learning experiments on 3 real applications, as listed below. A descriptive summary of the 6 datasets used is shown in Table 3.
• Financial: Defi-Mainnet and Defi-Polygon are privately curated datasets involving user-level cryptocurrency transactions from the Aave website3. Mainnet and Polygon represent two different protocols/deployments on the platform. We test the algorithms on Mainnet after pre-training on Polygon.
3aave.com
Dataset           # classes  # seqs.  Avg. length  # events  Data Type
Defi-Mainnet      6          20539    32           654844    Financial
Defi-Polygon      6          33597    85           2856453   Financial
Electronics       4          9993     20           195726    E-Commerce
Cosmetics         4          19301    39           752109    E-Commerce
ACLED-India       6          111      17           1934      Political
ACLED-Bangladesh  4          97       17           1697      Political
Results. As demonstrated by its highest accuracy and lowest RMSE in Table 4, transfer learning with Event-former in these datasets consistently improves upon all baselines, and the improvement ranges from 3% to 15%. The most impressive improvement of 15% is on time prediction in DefiMainnet where using a different protocol appears to be sufficient to help with learning the dynamics. This likely suggests that the dynamics of real applications have noticeable shared similarities that the state-of-the-art transformer model approaches are unable to exploit. In addition, we observe that the transfer performance in general increases with increasing size of pre-training data. As shown from Table 3, the 3 pairs of datasets have different amounts of data; the Defi datasets have the highest number of samples, while the ACLED datasets have the lowest number. This is an indication that Event-former may be able to generalize better with more pre-training data.
5 CONCLUSION
In this work, we propose a novel self-supervised paradigm for transfer learning in multivariate temporal point processes. We introduce the usage of void events for transformer architectures, which is unique in continuous-time event models, and design a masking strategy for predicting masked event epochs and void spaces in-between. We empirically demonstrate the potential of our approach using synthetic as well as various real-world datasets. In particular, improvement of prediction performance is noticeably significant on transferring tasks over many existing competitive transformerbased approaches. While this study focuses on the homogeneous transfer setting, our approach could potentially be extended to other more complex transfer settings, such as out-of-domain heterogeneous transfer (Zhuang et al., 2021) with datasets that contain non-overlapping event labels. We leave these more complex cases to future work.
4https://www.kaggle.com/datasets/mkechinov/ecommerce-events-history-in-electronics-store 5https://www.kaggle.com/mkechinov/ecommerce-events-history-in-cosmetics-shop 6https://www.kaggle.com/datasets/saimasharleen/acled-bangladesh 7https://www.kaggle.com/datasets/shivkumarganesh/riots-in-india-19972022-acled-dataset-50k
A APPENDIX
A.1 SYNTHETIC GENERATORS
We generated datasets from Hawkes process and Proximal Graphical Event Model. We describe the parameters used in our experiments to generate event datasets.
Hawkes-Exp. We use a standard library to generate datasets from the Hawkes dynamics 8.The parameters are baseline rate, decay coefficent, adjacency (infectivity matrix) and end time. We describe the four for models A, B, C, D, E and F in our study. A. baseline = [0.1097627 , 0.14303787, 0.12055268, 0.10897664, 0.08473096, 0.12917882, 0.08751744, 0.1783546 , 0.19273255, 0.0766883], decay = 2.5, infectivity = [[0.15037453, 0.10045448, 0.10789028, 0.17580114, 0.01349208, 0.01654871, 0.00384014, 0.1581418 , 0.14779747, 0.16524382], [0.1858717 , 0.1517864 , 0.08765006, 0.14824807, 0.02246419, 0.12154197, 0.02722749, 0.17942359, 0.0991161 , 0.07875789], [0.05024778, 0.14705235, 0.0866379 , 0.10796424, 0.0035688 , 0.11730922, 0.11625704, 0.11717598, 0.17924869, 0.12950002], [0.06828233, 0.08300669, 0.13250303, 0.01143879, 0.12664085, 0.12737611, 0.03995854, 0.02448733, 0.05991018, 0.0690806 ], [0.10829905, 0.0833048 , 0.18772458, 0.01938165, 0.03967254, 0.03063796, 0.12404668, 0.04810838, 0.0885677 , 0.04642443], [0.03019353, 0.02096386, 0.1246585 , 0.02624547, 0.03733743, 0.07003299, 0.15593352, 0.01844271, 0.1591532 , 0.01825224], [0.18546165, 0.08901222, 0.18551894, 0.11487999, 0.14041038, 0.00744305, 0.05371431, 0.02282927, 0.05624673, 0.02255028], [0.06039543, 0.07868212, 0.01218371, 0.13152315, 0.10761619, 0.05040616, 0.09938195, 0.01784238, 0.10939112, 0.1765038 ], [0.06050668, 0.1267631 , 0.02503273, 0.13605401, 0.0549677 , 0.03479404, 0.11139803, 0.00381908, 0.15744288, 0.00089182], [0.12873957, 0.05128336, 0.13963744, 0.18275114, 0.04724637, 0.10943116, 0.11244817, 0.10868939, 0.04237051, 0.18095826]] and end-time = 10.
B. Baseline = [8.34044009e-02, 1.44064899e-01, 2.28749635e-05, 6.04665145e-02, 2.93511782e02, 1.84677190e-02, 3.72520423e-02, 6.91121454e-02, 7.93534948e-02, 1.07763347e-01], decay = 2.5, adjacency = [[0. , 0. , 0.03514877, 0.15096311, 0. , 0.11526461, 0.07174169, 0.09604815, 0.02413487, 0. ], [0. , 0. , 0. , 0. , 0.15066599, 0.15379789, 0.01462053, 0.00671417, 0. , 0. ], [0.01690747, 0.07239546, 0.16467728, 0. , 0.11894528, 0. , 0.11802102, 0.14348615, 0.00314406, 0.12896239], [0.17000181, 0. , 0. , 0. , 0. , 0. , 0. , 0.0504772 , 0.04947341, 0. ], [0. , 0.11670321, 0.03638242, 0. , 0. , 0.00917392, 0. , 0.0252251 , 0.10131151, 0.1203002 ], [0. , 0.07118317, 0.11937903, 0.07120436, 0. , 0. , 0. , 0.08851807, 0.16239168, 0.10083865], [0. , 0.02363421, 0.02394394, 0.1388041 , 0.06836732, 0.02842716, 0.15945428, 0. , 0. , 0.12481123], [0.15185513, 0. , 0.1290996 , 0. , 0. , 0.15401787, 0. , 0.16587219, 0.11405672, 0.10687992], [0. , 0. , 0.07734744, 0.09943488, 0.07016556, 0.04074891, 0. , 0. , 0.00049346, 0.10609756], [0.05615574, 0.09061013, 0.15230831, 0.06142066, 0.15619243, 0.10716606, 0.00271994, 0.15978585, 0.11877677, 0.17145652]] and end-time = 20.
C. baseline = [0.08719898, 0.00518525, 0.1099325 , 0.08706448, 0.08407356, 0.06606696, 0.04092973, 0.12385419, 0.05993093, 0.05336546], decay = 2.5, adjacency = [[0. , 0.09623737, 0. , 0. , 0. , 0.14283231, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0.14434229, 0.10548788, 0. , 0. ], [0. , 0. , 0.16178088, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0.14554646, 0. , 0. , 0. , 0. , 0.09531432, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.04899491], [0. , 0. , 0. , 0.07048057, 0. , 0.07546073, 0. , 0. , 0. , 0. ], [0.05697369, 0. , 0. , 0. , 0. , 0. , 0.11709833, 0. , 0.03100542, 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0.04016475, 0. , 0.10768499, 0.06297179]] and end-time = 20.
D. Baseline = [0.11015958, 0.14162956, 0.05818095, 0.10216552, 0.17858939, 0.17925862, 0.02511706, 0.04144858, 0.01029344, 0.08816197], decay = [[8., 9., 2., 7., 3., 3., 2., 4., 6., 9.], [2., 9., 8., 9., 2., 1., 6., 5., 2., 6.], [5., 8., 7., 1., 1., 3., 5., 6., 9., 9.], [8., 6., 2., 2., 2., 6., 6., 8., 5., 4.], [1., 1., 1., 1., 3., 3., 8., 1., 6., 1.], [2., 5., 2., 3., 3., 5., 9., 1., 7., 1.], [5., 2., 6., 2., 9., 9., 8., 1., 1., 2.], [8., 9., 8., 5., 1., 1., 5., 4., 1., 9.], [3., 8., 3., 2., 4., 3., 5., 2., 3., 3.], [8., 4., 5., 2., 7., 8., 2., 1., 1., 6.]], adjacency = [[0. , 0.14343731, 0. , 0.11247478, 0. , 0. , 0. , 0.09416725, 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15805746, 0. , 0.08262718], [0. , 0. , 0.03018515, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0.11090719, 0.08158544, 0. , 0.11109462, 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.14264684, 0. , 0. , 0.11786607, 0. , 0. , 0.01101593, 0. ], [0.04954495, 0. , 0. , 0. , 0. , 0.12385743, 0. , 0. , 0.02375575, 0.05345351], [0. , 0.14941748, 0.02618691, 0. , 0.13608937, 0. , 0.06263167, 0. , 0.04097688, 0.14101171], [0. , 0.11902986, 0. , 0.04889382, 0. , 0. , 0.01569298, 0.03678315, 0. , 0. ], [0.05359555, 0. , 0. , 0.09188512, 0. , 0.14255311, 0. , 0. , 0. , 0. ], [0.12792415, 0.05843994, 0.16156482, 0.11931973, 0. , 0.00774966, 0.00947755, 0. , 0. , 0. ]], and end-time = 10.
8https://x-datainitiative.github.io/tick/modules/generated/tick.hawkes.SimuHawkesExpKernels.html
E. Baseline = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2], decay = [[9., 4., 9., 9., 1., 6., 4., 6., 8., 7.], [1., 5., 8., 9., 2., 7., 3., 3., 2., 4.], [6., 9., 2., 9., 8., 9., 2., 1., 6., 5.], [2., 6., 5., 8., 7., 1., 1., 3., 5., 6.], [9., 9., 8., 6., 2., 2., 2., 6., 6., 8.], [5., 4., 1., 1., 1., 1., 3., 3., 8., 1.], [6., 1., 2., 5., 2., 3., 3., 5., 9., 1.], [7., 1., 5., 2., 6., 2., 9., 9., 8., 1.], [1., 2., 8., 9., 8., 5., 1., 1., 5., 4.], [1., 9., 3., 8., 3., 2., 4., 3., 5., 2.]], adjacency= [[0.02186539, 0.09356695, 0. , 0.16101355, 0.11527002, 0.09149395, 0. , 0. , 0.15672219, 0. ], [0. , 0. , 0.14241135, 0. , 0.11167029, 0. , 0. , 0. , 0.0934937 , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15692693, 0. ], [0.08203618, 0. , 0. , 0.02996925, 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.11011391, 0.08100189, 0. , 0.11029999, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0.14162654, 0. , 0. , 0.11702301, 0. , 0. , 0.01093714], [0. , 0.04919057, 0. , 0. , 0. , 0. , 0.12297152, 0. , 0. , 0.02358583], [0.05307117, 0. , 0.14834875, 0.02599961, 0. , 0.13511597, 0. , 0.06218368, 0. , 0.04068378], [0.1400031 , 0. , 0.11817848, 0. , 0.0485441 , 0. , 0. , 0.01558073, 0.03652006, 0. ], [0. , 0.05321219, 0. , 0. , 0.0912279 , 0. , 0.14153347, 0. , 0. , 0. ]] and end-time = 50.
F. Baseline = [0.21736198, 0.11134775, 0.16980704, 0.33791045, 0.00188754, 0.04862765, 0.26829963, 0.3303411 , 0.05468264, 0.23003733], decay = [[5., 1., 7., 3., 5., 2., 6., 4., 5., 5.], [4., 8., 2., 2., 8., 8., 1., 3., 4., 3.], [6., 9., 2., 1., 8., 7., 3., 1., 9., 3.], [6., 2., 9., 2., 6., 5., 3., 9., 4., 6.], [1., 4., 7., 4., 5., 8., 7., 4., 1., 5.], [5., 6., 8., 7., 7., 3., 5., 3., 8., 2.], [7., 7., 1., 8., 3., 4., 6., 5., 3., 5.], [4., 8., 1., 1., 6., 7., 7., 6., 7., 5.], [8., 4., 3., 4., 9., 8., 2., 6., 4., 1.], [7., 3., 4., 5., 9., 9., 6., 3., 8., 6.]], adjacency = [[0.15693854, 0.04896059, 0.0400508 , 0. , 0. , 0.13021228, 0. , 0.10699903, 0. , 0.15329807], [0.0784283 , 0. , 0. , 0. , 0. , 0. , 0.00310706, 0.0090892 , 0.07758874, 0. ], [0.01672489, 0. , 0. , 0.07851303, 0. , 0. , 0.12848331, 0.08859293, 0. , 0. ], [0.09984995, 0. , 0. , 0.10541925, 0. , 0.08032527, 0. , 0. , 0. , 0. ], [0.14642469, 0.06629365, 0. , 0. , 0. , 0. , 0.11891738, 0.04166225, 0.09808829, 0.17638655], [0.00976324, 0.1100343 , 0.02003261, 0. , 0. , 0.05993539, 0.09739541, 0. , 0. , 0. ], [0. , 0. , 0. , 0.04672133, 0.16916 , 0. , 0.17341419, 0.12078975, 0.14441602, 0. ], [0. , 0.17305542, 0.06927975, 0. , 0.03408974, 0. , 0.08457162, 0.03787486, 0.10863292, 0. ], [0.07186225, 0.05760593, 0. , 0. , 0.08042031, 0.04403479, 0.1033595 , 0.17046747, 0. , 0.05083523], [0. , 0.10029222, 0. , 0.1022067 , 0. , 0. , 0.0588527 , 0. , 0.03530513, 0. ]], and end-time = 40.
PGEM. We implement the PGEM generator (Bhattacharjya et al., 2018) to generate 5-dimensional event datasets governed by PGEM dynamics. The parameters are the conditional intensities for each event type given its parental states (lambdas), the parental configuration (parents), the windows for each parental state (windows), and the end time. We describe these parameters for the 5 models A, B, C, D and E used in our study. The end time is 100 across all models.
A has the following parameters: 'parents': {'A': [], 'B': [], 'C': ['B'], 'D': ['A', 'B'], 'E': ['C']}, 'windows': {'A': [], 'B': [], 'C': [15], 'D': [15, 30], 'E': [15]}, 'lambdas': {'A': {(): 0.2}, 'B': {(): 0.05}, 'C': {(0,): 0.2, (1,): 0.3}, 'D': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2}, 'E': {(0,): 0.1, (1,): 0.3}}.
B has the following parameters: 'parents': {'A': ['B'], 'B': ['B'], 'C': ['B'], 'D': ['A'], 'E': ['C']}, 'windows': {'A': [15], 'B': [30], 'C': [15], 'D': [30], 'E': [30]}, 'lambdas': {'A': {(0,): 0.3, (1,): 0.2}, 'B': {(0,): 0.2, (1,): 0.4}, 'C': {(0,): 0.4, (1,): 0.1}, 'D': {(0,): 0.05, (1,): 0.2}, 'E': {(0,): 0.1, (1,): 0.3}}.
C has the following parameters: 'parents': {'A': ['B', 'D'], 'B': [], 'C': ['B', 'E'], 'D': ['B'], 'E': ['B']}, 'windows': {'A': [15, 30], 'B': [], 'C': [15, 30], 'D': [30], 'E': [30]}, 'lambdas': {'A': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2}, 'B': {(): 0.2}, 'C': {(0, 0): 0.2, (0, 1): 0.05, (1, 0): 0.4, (1, 1): 0.3}, 'D': {(0,): 0.1, (1,): 0.2}, 'E': {(0,): 0.1, (1,): 0.4}}.
D has the following parameters: 'parents': {'A': ['B'], 'B': ['C'], 'C': ['A'], 'D': ['A', 'B'], 'E': ['B', 'C']}, 'windows': {'A': [15], 'B': [30], 'C': [15], 'D': [15, 30], 'E': [30, 15]}, 'lambdas': {'A': {(0,): 0.05, (1,): 0.2}, 'B': {(0,): 0.1, (1,): 0.3}, 'C': {(0,): 0.4, (1,): 0.2}, 'D': {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.05, (1, 1): 0.2}, 'E': {(0, 0): 0.1, (0, 1): 0.02, (1, 0): 0.4, (1, 1): 0.1}}.
E has the following parameters: 'parents': {'A': ['A'], 'B': ['A', 'C'], 'C': ['C'], 'D': ['A', 'E'], 'E': ['C', 'D']}, 'windows': {'A': [15], 'B': [30, 30], 'C': [15], 'D': [15, 30], 'E': [15, 30]}, 'lambdas': {'A': {(0,): 0.1, (1,): 0.3}, 'B': {(0, 0): 0.01, (0, 1): 0.05, (1, 0): 0.1, (1, 1): 0.5}, 'C': {(0,): 0.2, (1,): 0.4}, 'D': {(0, 0): 0.05, (0, 1): 0.02, (1, 0): 0.2, (1, 1): 0.1}, 'E': {(0, 0): 0.1, (0, 1): 0.01, (1, 0): 0.3, (1, 1): 0.1}}.
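To make the role of these dictionaries concrete, the following sketch shows how a PGEM's piecewise-constant conditional intensity is looked up from them (model A shown). The helper names are our own and not from the Bhattacharjya et al. (2018) implementation.

PGEM_A = {
    'parents': {'A': [], 'B': [], 'C': ['B'], 'D': ['A', 'B'], 'E': ['C']},
    'windows': {'A': [], 'B': [], 'C': [15], 'D': [15, 30], 'E': [15]},
    'lambdas': {'A': {(): 0.2}, 'B': {(): 0.05},
                'C': {(0,): 0.2, (1,): 0.3},
                'D': {(0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2},
                'E': {(0,): 0.1, (1,): 0.3}},
}

def intensity(model, label, t, history):
    """Rate of `label` at time t given history = [(time, label), ...].

    A parental state is 1 if that parent occurred within its look-back
    window [t - w, t), 0 otherwise; the state vector indexes `lambdas`.
    """
    state = tuple(
        int(any(t - w <= s < t and y == parent for (s, y) in history))
        for parent, w in zip(model['parents'][label], model['windows'][label])
    )
    return model['lambdas'][label][state]

# Example: D's rate at t=20 after B occurred at t=5 (inside D's 30-window only)
print(intensity(PGEM_A, 'D', 20.0, [(5.0, 'B')]))  # state (0, 1) -> 0.05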
A.2 REAL DATASETS
Cosmetics and Electronics contain user-level online transactions in an electronics store and a cosmetics store, respectively. While the original cosmetics dataset from Kaggle contains multiple months of transactions, we used the one from Dec 2019; similarly, we used the electronics dataset from Dec 2019. Both share the same four types of events: 'view', 'cart', 'remove-from-cart' and 'purchase'. The two datasets record transaction events in seconds. To optimize computation for transformer models, we filtered out sequences longer than 300 events or shorter than 30 and scaled the timestamps into [0,1] to avoid numerical issues.
DeFi-Mainnet and DeFi-Polygon. DeFi-Mainnet is built on the more widely used Ethereum blockchain. Polygon is a scalable sidechain of Ethereum that allows for much faster and lower-fee transactions than the original Ethereum blockchain. The difference in fee structure produces quite different dynamics in the two AAVE lending protocols: DeFi-Polygon has many more users and transactions per user, but much less total value locked than DeFi-Mainnet, and Polygon users are much more likely to engage in risky but potentially profitable "Yield Farming" transactions. Foundation methods would be very useful in modeling the many other lending protocols in AAVE or other lending platforms which are new or less popular and thus have fewer transactions. The 6 types of actions a user performs are: 'borrow', 'collateral', 'deposit', 'liquidation', 'redeem', 'repay'. The original timestamps are mined as Unix timestamps. We filtered out sequences longer than 300 events or shorter than 30 and scaled the timestamps into [0,1] to avoid numerical issues.
ACLED-India and ACLED-Bangladesh contain sequences where each sequence tracks an actor involved in armed conflict (i.e., riots and protests) in the respective country. The former involves events from 2016 to 2022, and the latter from 2010 to 2021. The unit of each timestamp is days. There are 6 types of events in ACLED-India: 'Battles', 'Explosions/Remote violence', 'Riots', 'Violence against civilians', 'Protests', and 'Strategic developments'. The 4 overlapping types in ACLED-Bangladesh are 'Battles', 'Explosions/Remote violence', 'Riots', and 'Violence against civilians'. We filtered out sequences longer than 300 events or shorter than 2 and scaled the timestamps into [0,1] to avoid numerical issues.
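All three dataset pairs go through the same preprocessing: sequences outside the length bounds are dropped and timestamps are rescaled into [0,1]. A minimal sketch, assuming each sequence is a time-sorted list of (timestamp, label) pairs and that the scaling is per-sequence min-max (our assumption); the bounds are 30-300 events, except for ACLED, where the minimum is 2.

def preprocess(sequences, min_len=30, max_len=300):
    out = []
    for seq in sequences:
        if not (min_len <= len(seq) <= max_len):
            continue                         # drop sequences outside the bounds
        t0, t_end = seq[0][0], seq[-1][0]
        span = (t_end - t0) or 1.0           # guard against a degenerate span
        out.append([((t - t0) / span, y) for t, y in seq])
    return out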
A.3 MODEL IMPLEMENTATION AND (PRE)TRAINING
Pretraining. Our pretraining model adapts code from Zuo et al. (2020)9. A full repo will be released upon acceptance. The procedure is fully described by Algorithm 1. We train our model via stochastic gradient descent, using the Adam optimizer (Kingma & Ba, 2014). The default transformer architecture we employ for pretraining is the following: the number of blocks for the multi-headed self-attention module is 4; the dimension of the value vector after attention has been applied is 512; the number of attention heads is 4; the dimension of the hidden layer of the feed-forward neural network is 1024; the dimension of the value vector is 512; the dimension of the key vector is 512; and dropout is 0.1. We train for 100 epochs with a learning rate of 0.0001. γ is set to 1 for all experiments other than the one on ACLED-India, where we use 10.
9https://github.com/SimiaoZuo/Transformer-Hawkes-Process/tree/master/transformer
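As a runnable stand-in for the configuration above, PyTorch's generic transformer encoder can be instantiated with the same sizes; the paper's actual encoder (adapted from Zuo et al. (2020)) additionally applies the type embedding, the PE+TE encoding, and the temporal upper triangular attention mask, so this sketch only fixes the hyperparameters.

import torch.nn as nn
import torch.optim as optim

layer = nn.TransformerEncoderLayer(d_model=512, nhead=4, dim_feedforward=1024,
                                   dropout=0.1, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=4)   # 4 self-attention blocks
optimizer = optim.Adam(encoder.parameters(), lr=1e-4)  # 100 epochs; gamma = 1 (10 for ACLED-India)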
Fine-tuning. The fine-tuning model consists of a feed-forward network with 3 hidden layers of dimension 512. We train with the Adam optimizer at a learning rate of 0.001 for 100 epochs. α is set to 0.01 for all experiments.
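A sketch of this fine-tuning head, assuming a 512-dimensional event representation and a shared output layer that emits the type logits together with one scalar for the time (the exact output layout is our assumption; the paper only states that regression and classification share the same MLP):

import torch.nn as nn
import torch.optim as optim

d_model, n_types = 512, 6  # n_types depends on the target dataset
head = nn.Sequential(
    nn.Linear(d_model, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_types + 1),  # n_types logits for y_{i+1}, one scalar for t_{i+1}
)
optimizer = optim.Adam(head.parameters(), lr=1e-3)  # 100 epochs; alpha = 0.01 in Eq. 6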
PE+TE. The combined TE+PE improves optimization even when training with only 2 samples; using TE alone results in compromised optimization (see Figure 3).
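For reference, a sketch of the combined encoding: the sinusoidal temporal encoding of Eq. 2 evaluated at the timestamps, added to the same sinusoid evaluated at the integer positions. The 0-based indexing convention here is an approximation of the paper's 1-based one.

import numpy as np

def sinusoid(values, d=512):
    # values: (L,) array of timestamps or positions -> (L, d) encoding
    enc = np.zeros((len(values), d))
    for i in range(d):
        angle = values / 10000 ** ((i - i % 2) / d)
        enc[:, i] = np.cos(angle) if i % 2 else np.sin(angle)  # cos on odd, sin on even
    return enc

def combined_encoding(timestamps, d=512):
    te = sinusoid(np.asarray(timestamps, dtype=float), d)     # temporal encoding
    pe = sinusoid(np.arange(len(timestamps), dtype=float), d) # positional encoding
    return te + pe  # the PE+TE input, added to the type embedding downstream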
Algorithm 1: Pretraining of Event-former
Input: dataset S with D sequences, each of length dl with events {(ti, yi)}_{i=1}^{dl}; batch size b
Insert void epochs:
S′ = []
for d ← 1 to D do
    seq ← the d-th sequence of S; seq′ = []
    for i ← 1 to dl − 1 do
        t′i ∼ Unif(ti, ti+1)
        seq′.append((t′i, null))
    end
    seqnew = Merge(seq, seq′)
    S′.append(seqnew)
end
Mask to obtain S′m:
for d ← 1 to D do
    for i ← 1 to d′l − 1 do
        mask ∼ Bern(0.15)
        if mask == 1 then
            (t′i, y′i) ← (0, null)
        end
    end
end
split(S′m) := S′m,tr = {S′m,k}_{k=1}^{K}, S′m,dev
for epoch ← 1 to N do
    for iteration ← 1 to ⌈K/b⌉ do
        Sample a batch of sequences B′ from S′m,tr
        Compute Levent(B′) (Eq. 5)
        Back-propagate with gradient ∇θ,ϕ Levent(B′) (θ: transformer model parameters; ϕ: weights and biases for regression and classification)
        Update parameters of network θ, ϕ
    end
    Evaluate Levent(S′m,dev); stop training if not improving for 5 epochs
end
Return: optimal parameters ϕ∗, θ∗
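A sketch of the data preparation inside Algorithm 1, with one uniformly drawn void epoch per inter-event gap and Bernoulli(0.15) masking. The integer codes for the null and [MASK] labels are our own convention, not from the paper's implementation.

import random

NULL, MASK = -1, -2

def insert_void(seq):
    # seq: time-sorted list of (t, y); returns the sequence with void epochs merged in
    out = []
    for (t0, y0), (t1, _) in zip(seq, seq[1:]):
        out.append((t0, y0))
        out.append((random.uniform(t0, t1), NULL))  # one void epoch per gap
    out.append(seq[-1])
    return out

def mask_events(seq, p=0.15):
    # masked epochs get time 0 and label MASK; their ground truth feeds Eq. 5
    inputs, targets = [], []
    for t, y in seq:
        if random.random() < p:
            inputs.append((0.0, MASK))
            targets.append((t, y))
        else:
            inputs.append((t, y))
            targets.append(None)  # unmasked epochs contribute no loss
    return inputs, targets

inputs, targets = mask_events(insert_void([(0.4, 2), (1.1, 0), (2.7, 1)]))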
A.4 BASELINE MODELS AND IMPLEMENTATION
T-RMTPP and T-ERPP. The original RMTPP and ERPP models are LSTM-based. The transformer architecture we used for T-RMTPP and T-ERPP is adapted from Zuo et al. (2020). We used the recommended set of parameters for training these two models.
T-LNM. The original LNM is RNN/LSTM/GRU-based. The transformer architecture we used for T-LNM is adapted from Zuo et al. (2020). We used 64 mixtures of lognormal components to model the density of log inter-event times.
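Given the learned mixture parameters, a point prediction of the next inter-event time can be read off as the mixture mean: if log d follows a mixture of Gaussians with weights w_k and parameters (mu_k, sigma_k), then E[d] = sum_k w_k exp(mu_k + sigma_k^2 / 2). A small sketch, with toy parameters:

import numpy as np

def expected_inter_event_time(weights, mus, sigmas):
    weights, mus, sigmas = map(np.asarray, (weights, mus, sigmas))
    return float(np.sum(weights * np.exp(mus + 0.5 * sigmas ** 2)))

w = np.full(64, 1 / 64)  # 64 components, as above
print(expected_inter_event_time(w, np.zeros(64), np.full(64, 0.5)))  # ~1.13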
THP. We used the recommended set of parameters to train the model as-is.
A.5 PROOF OF THEOREM 1: TRANSFORMER WITH COMBINED TEMPORAL ENCODING AND POSITION ENCODING
Consider the general type of transformer architecture described by Yun et al. (2019). Our proof that a transformer with combined temporal and positional encoding is a universal approximator for any continuous sequence-to-sequence function with compact support follows along the lines of the proof for a transformer with positional encoding (Theorem 3 in Yun et al. (2019)). Without loss of generality, consider a sequence with timestamps {t1, t2, ..., tn}, let ti be integer-valued for i ∈ {1, 2, ..., n}, and let ti < tj for all i < j. If the ti are decimal-valued, we multiply each given timestamp by a constant to transform it into an integer without affecting the dynamics of the event sequence. We choose the d-dimensional temporal encoding for the n event epochs to be the following:
T = \begin{bmatrix} 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \end{bmatrix}

Similarly, we choose the d-dimensional positional encoding for the n event epochs to be:

P = \begin{bmatrix} 0 & 1 & \cdots & n-1 \\ 0 & 1 & \cdots & n-1 \\ \vdots & \vdots & & \vdots \\ 0 & 1 & \cdots & n-1 \end{bmatrix}

The combined encoding is:

PE+TE = \begin{bmatrix} 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + n - 1 \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + n - 1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + n - 1 \end{bmatrix}

The strict temporal order of the timestamps gives t_i - t_1 + (i - 1) < t_{i+1} - t_1 + i for all i ∈ {1, 2, ..., n - 1}. This guarantees that within every row the coordinates are monotonically increasing, so the input values can be partitioned into cubes. The rest of the proof follows directly from the proof of Theorem 3 in Yun et al. (2019) by replacing n with t_n: quantization is performed by feed-forward layers, contextual mapping by attention layers, and function value mapping by feed-forward layers.

1. What is the focus and contribution of the paper regarding event sequence modeling?
2. What are the strengths and weaknesses of the proposed method, particularly in its technical design and objective choices?
3. Do you have any concerns regarding the paper's simplification of the problem and deviation from real-world issues in open domain event types and target domain event types?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper adapts the "pretraining" technique of the NLP community to event sequence modeling. The authors developed an event-mask technique (analogous to token masking in text modeling) that trains an event sequence model to represent event tokens, and those representations can be used in downstream tasks. In their experiments, they found that their pretrained models help downstream task performance.
Strengths And Weaknesses
Strengths
The biggest strength of this paper is that they are trying to tackle an important problem: pretraining and transfer learning in event sequence modeling.
Their proposed method seems to be effective and they have positive results on multiple real-world datasets.
The writing is fairly clear.
Weaknesses
But the paper restricts itself to an overly constrained and overly simplified setting where the pretraining (open) domains and the target task domains share the same set of event types. That is too limited, and it diverts the work from the real core issue in this area: open-domain event types and target-domain event types often do not match. Working around this real issue makes the problem overly simplified and unrealistic.
The technical design of their method components is not sound or convincing to me.
First, why not use the objective eqn (1) for pretraining? Why is it necessary to use the new objective eqn (4)(5)? The first paragraph justifies it like "try to be as accurate as possible at determining when events do not happen". But isn't that exactly what the integral term of eqn (1) is already doing? One may argue that their eqn (4)(5) are similar to what BERT does, but remember that BERT uses bidirectional context while this paper only considers left context. When only left context is considered, this paper's setting is more like what GPT (but not BERT) does, and GPT uses log-likelihood as its pretraining objective. Following this line of thought, it does seem more natural if this work still used eqn (1).
Similarly, the fine-tuning objective doesn't seem convincing either.
Second, they proposed "void event" masking to "provide additional information about the dynamics of the continuous-time process" since "in point process models, the inter-event duration between two successive events is just as important as the event epochs." The importance of inter-arrival times is well recognized by the community, and many architectures take a great deal of care to address this in their design, including Du et al. 2016 and Mei et al. 2017, which they cited. In particular, those models allow their hidden states to change with time t even though nothing happens at time t. It is not clear how the "void epochs" can add value on top of these existing architectural designs. Actually, these "void events" seem useful only when the pretraining objective is switched from eqn (1) to eqn (4)(5); but as I explained above, that design choice does not seem sound by itself.
Some terms seem to be incorrectly used. E.g., "epoch" seems to be used interchangeably with "event", but I think "epoch" is actually more similar to "sequence".
Clarity, Quality, Novelty And Reproducibility
Please see above.
D. Baseline = [0.11015958, 0.14162956, 0.05818095, 0.10216552, 0.17858939, 0.17925862, 0.02511706, 0.04144858, 0.01029344, 0.08816197], decay = [[8., 9., 2., 7., 3., 3., 2., 4., 6., 9.], [2., 9., 8., 9., 2., 1., 6., 5., 2., 6.], [5., 8., 7., 1., 1., 3., 5., 6., 9., 9.], [8., 6., 2., 2., 2., 6., 6., 8., 5., 4.], [1., 1., 1., 1., 3., 3., 8., 1., 6., 1.], [2., 5., 2., 3., 3., 5., 9., 1., 7., 1.], [5., 2., 6., 2., 9., 9., 8., 1., 1., 2.], [8., 9., 8., 5., 1., 1., 5., 4., 1., 9.], [3., 8., 3., 2., 4., 3., 5., 2., 3., 3.], [8., 4., 5., 2., 7., 8., 2., 1., 1., 6.]], adjacency = [[0. , 0.14343731, 0. , 0.11247478, 0. , 0. , 0. , 0.09416725, 0. , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15805746, 0. , 0.08262718], [0. , 0. , 0.03018515, 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0.11090719, 0.08158544, 0. , 0.11109462, 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.14264684, 0. , 0. , 0.11786607, 0. , 0. , 0.01101593, 0. ], [0.04954495, 0. , 0. , 0. , 0. , 0.12385743, 0. , 0. , 0.02375575, 0.05345351], [0. , 0.14941748, 0.02618691, 0. , 0.13608937, 0. , 0.06263167, 0. , 0.04097688, 0.14101171], [0. , 0.11902986, 0. , 0.04889382, 0. , 0. , 0.01569298, 0.03678315, 0. , 0. ], [0.05359555, 0. , 0. , 0.09188512, 0. , 0.14255311, 0. , 0. , 0. , 0. ], [0.12792415, 0.05843994, 0.16156482, 0.11931973, 0. , 0.00774966, 0.00947755, 0. , 0. , 0. ]], and end-time = 10.
8https://x-datainitiative.github.io/tick/modules/generated/tick.hawkes.SimuHawkesExpKernels.html
E. Baseline = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2], decay = [[9., 4., 9., 9., 1., 6., 4., 6., 8., 7.], [1., 5., 8., 9., 2., 7., 3., 3., 2., 4.], [6., 9., 2., 9., 8., 9., 2., 1., 6., 5.], [2., 6., 5., 8., 7., 1., 1., 3., 5., 6.], [9., 9., 8., 6., 2., 2., 2., 6., 6., 8.], [5., 4., 1., 1., 1., 1., 3., 3., 8., 1.], [6., 1., 2., 5., 2., 3., 3., 5., 9., 1.], [7., 1., 5., 2., 6., 2., 9., 9., 8., 1.], [1., 2., 8., 9., 8., 5., 1., 1., 5., 4.], [1., 9., 3., 8., 3., 2., 4., 3., 5., 2.]], adjacency= [[0.02186539, 0.09356695, 0. , 0.16101355, 0.11527002, 0.09149395, 0. , 0. , 0.15672219, 0. ], [0. , 0. , 0.14241135, 0. , 0.11167029, 0. , 0. , 0. , 0.0934937 , 0. ], [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.15692693, 0. ], [0.08203618, 0. , 0. , 0.02996925, 0. , 0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0.11011391, 0.08100189, 0. , 0.11029999, 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0.14162654, 0. , 0. , 0.11702301, 0. , 0. , 0.01093714], [0. , 0.04919057, 0. , 0. , 0. , 0. , 0.12297152, 0. , 0. , 0.02358583], [0.05307117, 0. , 0.14834875, 0.02599961, 0. , 0.13511597, 0. , 0.06218368, 0. , 0.04068378], [0.1400031 , 0. , 0.11817848, 0. , 0.0485441 , 0. , 0. , 0.01558073, 0.03652006, 0. ], [0. , 0.05321219, 0. , 0. , 0.0912279 , 0. , 0.14153347, 0. , 0. , 0. ]] and end-time = 50.
F. Baseline = [0.21736198, 0.11134775, 0.16980704, 0.33791045, 0.00188754, 0.04862765, 0.26829963, 0.3303411 , 0.05468264, 0.23003733], decay = [[5., 1., 7., 3., 5., 2., 6., 4., 5., 5.], [4., 8., 2., 2., 8., 8., 1., 3., 4., 3.], [6., 9., 2., 1., 8., 7., 3., 1., 9., 3.], [6., 2., 9., 2., 6., 5., 3., 9., 4., 6.], [1., 4., 7., 4., 5., 8., 7., 4., 1., 5.], [5., 6., 8., 7., 7., 3., 5., 3., 8., 2.], [7., 7., 1., 8., 3., 4., 6., 5., 3., 5.], [4., 8., 1., 1., 6., 7., 7., 6., 7., 5.], [8., 4., 3., 4., 9., 8., 2., 6., 4., 1.], [7., 3., 4., 5., 9., 9., 6., 3., 8., 6.]], adjacency = [[0.15693854, 0.04896059, 0.0400508 , 0. , 0. , 0.13021228, 0. , 0.10699903, 0. , 0.15329807], [0.0784283 , 0. , 0. , 0. , 0. , 0. , 0.00310706, 0.0090892 , 0.07758874, 0. ], [0.01672489, 0. , 0. , 0.07851303, 0. , 0. , 0.12848331, 0.08859293, 0. , 0. ], [0.09984995, 0. , 0. , 0.10541925, 0. , 0.08032527, 0. , 0. , 0. , 0. ], [0.14642469, 0.06629365, 0. , 0. , 0. , 0. , 0.11891738, 0.04166225, 0.09808829, 0.17638655], [0.00976324, 0.1100343 , 0.02003261, 0. , 0. , 0.05993539, 0.09739541, 0. , 0. , 0. ], [0. , 0. , 0. , 0.04672133, 0.16916 , 0. , 0.17341419, 0.12078975, 0.14441602, 0. ], [0. , 0.17305542, 0.06927975, 0. , 0.03408974, 0. , 0.08457162, 0.03787486, 0.10863292, 0. ], [0.07186225, 0.05760593, 0. , 0. , 0.08042031, 0.04403479, 0.1033595 , 0.17046747, 0. , 0.05083523], [0. , 0.10029222, 0. , 0.1022067 , 0. , 0. , 0.0588527 , 0. , 0.03530513, 0. ]], and end-time = 40.
PGEM. We implement PGEM generator (Bhattacharjya et al., 2018) to generate 5-dimensional event datasets governed by the PGEM dynamics. The parameters are the conditional intensity (lambdas) for each event type given parental states, parental configuration (parents), windows for each parental state (windows) and end time. We describe the 5 for models A, B, C, D and E in our study. End time is 100 across all models.
A has the following parameters: ’parents’: ’A’: [], ’B’: [], ’C’: [’B’], ’D’: [’A’, ’B’], ’E’: [’C’], ’windows’: ’A’: [], ’B’: [], ’C’: [15], ’D’: [15, 30], ’E’: [15], ’lambdas’: ’A’: (): 0.2, ’B’: (): 0.05, ’C’: (0,): 0.2, (1,): 0.3, ’D’: (0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2, ’E’: (0,): 0.1, (1,): 0.3
B has the follow parameters: ’parents’: ’A’: [’B’], ’B’: [’B’], ’C’: [’B’], ’D’: [’A’], ’E’: [’C’], ’windows’: ’A’: [15], ’B’: [30], ’C’: [15], ’D’: [30], ’E’: [30], ’lambdas’: ’A’: (0,): 0.3, (1,): 0.2, ’B’: (0,): 0.2, (1,): 0.4, ’C’: (0,): 0.4, (1,): 0.1, ’D’: (0,): 0.05, (1,): 0.2, ’E’: (0,): 0.1, (1,): 0.3.
C has the follow parameters: ’parents’: ’A’: [’B’, ’D’], ’B’: [], ’C’: [’B’, ’E’], ’D’: [’B’], ’E’: [’B’], ’windows’: ’A’: [15, 30], ’B’: [], ’C’: [15, 30], ’D’: [30], ’E’: [30], ’lambdas’: ’A’: (0, 0): 0.1, (0, 1): 0.05, (1, 0): 0.3, (1, 1): 0.2, ’B’: (): 0.2, ’C’: (0, 0): 0.2, (0, 1): 0.05, (1, 0): 0.4, (1, 1): 0.3, ’D’: (0,): 0.1, (1,): 0.2, ’E’: (0,): 0.1, (1,): 0.4.
D has the follow parameters: ’parents’: ’A’: [’B’], ’B’: [’C’], ’C’: [’A’], ’D’: [’A’, ’B’], ’E’: [’B’, ’C’], ’windows’: ’A’: [15], ’B’: [30], ’C’: [15], ’D’: [15, 30], ’E’: [30, 15], ’lambdas’: ’A’: (0,): 0.05, (1,): 0.2, ’B’: (0,): 0.1, (1,): 0.3, ’C’: (0,): 0.4, (1,): 0.2, ’D’: (0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.05, (1, 1): 0.2, ’E’: (0, 0): 0.1, (0, 1): 0.02, (1, 0): 0.4, (1, 1): 0.1.
E has the follow parameters: ’parents’: ’A’: [’A’], ’B’: [’A’, ’C’], ’C’: [’C’], ’D’: [’A’, ’E’], ’E’: [’C’, ’D’], ’windows’: ’A’: [15], ’B’: [30, 30], ’C’: [15], ’D’: [15, 30], ’E’: [15, 30], ’lambdas’: ’A’: (0,): 0.1, (1,): 0.3, ’B’: (0, 0): 0.01, (0, 1): 0.05, (1, 0): 0.1, (1, 1): 0.5, ’C’: (0,): 0.2, (1,): 0.4, ’D’: (0, 0): 0.05, (0, 1): 0.02, (1, 0): 0.2, (1, 1): 0.1, ’E’: (0, 0): 0.1, (0, 1): 0.01, (1, 0): 0.3, (1, 1): 0.1.
A.2 REAL DATASETS
Cosmetics and Electronics contain user-level online transactions in a cosmetics and an electronics store, respectively. While the original cosmetics dataset from Kaggle contains multiple months of transactions, we used the one from December 2019; similarly, we used the electronics dataset from December 2019. Both share the same four types of events: 'view', 'cart', 'remove-from-cart' and 'purchase'. The two datasets record transaction events in seconds. To optimize computation for transformer models, we filtered out sequences longer than 300 events or shorter than 30, and scaled the timestamps into [0,1] to avoid numerical issues.
DeFi-Mainnet and DeFi-Polygon. DeFi-Mainnet is built on the more widely used Ethereum blockchain. Polygon is a scalable sidechain of Ethereum that allows for much faster and lower-fee transactions than the original Ethereum blockchain. The difference in fee structure produces quite different dynamics in the two AAVE lending protocols. DeFi-Polygon has many more users and transactions per user, but much less total value locked than DeFi-Mainnet. The Polygon users are much more likely to engage in risky but potentially profitable "yield farming" transactions. Foundation methods would be very useful in modeling the many other lending protocols in AAVE, or in other lending platforms that are new or less popular and thus have fewer transactions. The 6 types of actions a user performs are: 'borrow', 'collateral', 'deposit', 'liquidation', 'redeem' and 'repay'. The original timestamps are recorded as Unix timestamps. We filtered out sequences longer than 300 events or shorter than 30, and scaled the timestamps into [0,1] to avoid numerical issues.
ACLED-India and ACLED-Bangladesh contain sequences in which each sequence tracks an actor involved in some armed conflict (e.g., riots and protests) in the respective country. The former covers events from 2016 to 2022, and the latter from 2010 to 2021. The unit of each timestamp is days. There are 6 types of events in ACLED-India: 'Battles', 'Explosions/Remote violence', 'Riots', 'Violence against civilians', 'Protests' and 'Strategic developments'. The 4 overlapping types in ACLED-Bangladesh are 'Battles', 'Explosions/Remote violence', 'Riots' and 'Violence against civilians'. We filtered out sequences longer than 300 events or shorter than 2, and scaled the timestamps into [0,1] to avoid numerical issues.
A.3 MODEL IMPLEMENTATION AND (PRE)TRAINING
Pretraining. Our pretraining model adapts code from Zuo et al. (2020).9 A full repository will be released upon acceptance. The procedure is fully described by Algorithm 1. We train our model via stochastic gradient descent, using the Adam optimizer (Kingma & Ba, 2014). The default transformer architecture we employed for pretraining is the following: the number of blocks for the multi-headed self-attention module is 4; the dimension of the value vector after attention has been applied is 512; the number of attention heads is 4; the dimension of the hidden layer of the feed-forward neural network is 1024; the dimension of the value vector is 512; the dimension of the key vector is 512; dropout is 0.1. We train for 100 epochs with a learning rate of 0.0001. γ is set to 1 for all experiments other than the one on ACLED-India, where we use 10.
9https://github.com/SimiaoZuo/Transformer-Hawkes-Process/tree/master/transformer
Fine-tuning. The fine-tuning model consists of a feed-forward network with 3 hidden layers of dimension 512. We train with the Adam optimizer with a learning rate of 0.001 for 100 epochs. α is set to 0.01 for all experiments.
PE+TE. The combined TE+PE improves optimization when trained with only 2 samples; using TE alone results in compromised optimization (see Figure 3).
Algorithm 1: Pretraining of Event-former
Given: dataset S with D sequences, each of length d_l with events {(t_i, y_i)}_{i=1}^{d_l}; batch size b
Insert void epochs:
S' = []
for d ← 1 to D do
    seq' = []
    for i ← 1 to d_l − 1 do
        t'_i ∼ Unif(t_i, t_{i+1}); seq'.append((t'_i, null))
    end
    seq_new = Merge(seq, seq'); S'.append(seq_new)
end
Masking to obtain S'_m:
for d ← 1 to D do
    for i ← 1 to d'_l − 1 do
        mask ∼ Bern(0.15)
        if mask == 1 then
            (0, null) ← (t'_i, y'_i)
        end
    end
end
split(S'_m) := S'_{m,tr} = {S'_{m,k}}_{k=1}^{K}, S'_{m,dev}
for epoch ← 1 to N do
    for iteration ← 1 to ⌈K/b⌉ do
        Sample a batch of sequences B' from S'_{m,tr}
        Compute L_event(B') (Eq. 5)
        Back-propagate with gradient ∇_{θ,ϕ} L_event(B') (θ: Transformer model parameters; ϕ: weights and biases for regression and classification)
        Update parameters of network θ, ϕ
    end
    Evaluate L_event(S'_{m,dev}); stop training if not improving in 5 epochs
end
Return: optimal parameters ϕ*, θ*
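To make the data-preparation steps of Algorithm 1 concrete, here is a minimal Python sketch of the void-epoch insertion and random masking; the function names and the NULL_MARK placeholder are our own and are not taken from the released code.

```python
import random

NULL_MARK = None  # hypothetical marker for the "null" (void) event type

def insert_void_events(seq):
    """Insert one void epoch uniformly between consecutive events, then merge."""
    voids = []
    for (t_i, _), (t_next, _) in zip(seq, seq[1:]):
        voids.append((random.uniform(t_i, t_next), NULL_MARK))
    return sorted(seq + voids, key=lambda e: e[0])

def mask_events(seq, p=0.15):
    """Replace each epoch by (0, null) independently with probability p."""
    return [(0.0, NULL_MARK) if random.random() < p else e for e in seq]

# Example usage on a toy sequence of (timestamp, mark) pairs:
seq = [(0.1, "A"), (0.4, "B"), (0.9, "A")]
masked = mask_events(insert_void_events(seq))
```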
A.4 BASELINES MODELS AND IMPLEMENTATION
T-RMTPP and T-ERPP. The original RMTPP and ERPP models are LSTM-based. The transformer architecture we used for T-RMTPP and T-ERPP is adapted from Zuo et al. (2020). We used the recommended set of parameters to train these two models.
T-LNM. The original LNM is RNN/LSTM/GRU-based. The transformer architecture we used for T-LNM is adapted from Zuo et al. (2020). We used 64 mixtures of lognormal components to model the density of the log inter-event times.
THP. We used the recommended set of parameters to train the model as is.
A.5 PROOF OF THEOREM 1: TRANSFORMER WITH COMBINED TEMPORAL ENCODING AND POSITION ENCODING
Consider the general type of transformer architecture described by Yun et al. (2019). Our proof that a transformer with combined temporal encoding and position encoding is a universal approximator for any continuous sequence-to-sequence function with compact support follows similarly to the proof for a transformer with position encoding (Theorem 3 in Yun et al. (2019)). Without loss of generality, consider a sequence with timestamps {t1, t2, ..., tn}, and let ti be integer-valued for i ∈ {1, 2, ..., n} with ti < tj for all i < j. If the ti are decimals, we multiply by a constant to transform each given timestamp into an integer without affecting the dynamics of the event sequence. We choose the d-dimensional temporal encoding for the n event epochs to be the following:
$$\mathbf{T} = \begin{pmatrix} 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 & \cdots & t_n - t_1 \end{pmatrix}$$

Similarly, we choose the d-dimensional position encoding for the n event epochs to be the following:

$$\mathbf{P} = \begin{pmatrix} 0 & 1 & \cdots & n \\ 0 & 1 & \cdots & n \\ \vdots & \vdots & & \vdots \\ 0 & 1 & \cdots & n \end{pmatrix}$$

The combined encoding is:

$$\mathbf{PE} + \mathbf{TE} = \begin{pmatrix} 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \\ \vdots & \vdots & & \vdots \\ 0 & t_2 - t_1 + 1 & \cdots & t_n - t_1 + 1 \end{pmatrix}$$

The strict temporal order of the timestamps guarantees t_i − t_1 + 1 < t_{i+1} − t_1 + 1 for all i ∈ {1, 2, ..., n − 1}. This ensures that for all rows the coordinates are monotonically increasing, so the input values can be partitioned into cubes. The rest of the proof follows directly from the proof of Theorem 3 in Yun et al. (2019) by replacing n with t_n, performing quantization by feed-forward layers, contextual mapping by attention layers, and function value mapping by feed-forward layers. | 1. What is the focus and contribution of the paper regarding transfer learning in multivariate temporal point processes?
2. What are the strengths of the proposed approach, particularly in terms of the masking strategy and void events?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work proposes a novel self-supervised learning paradigm for transfer learning in multivariate temporal point processes. The authors introduce the usage of void events for transformer architectures and propose a masking strategy for predicting masked epochs and the void spaces in between. Empirical studies show significant improvements on prediction tasks over real-world datasets.
Strengths And Weaknesses
Pros:
The paper is well-organized, and the writing is easy to follow. The proposed masking strategy and void events are reasonable for applications of the temporal point process.
Experimental results demonstrate the effectiveness of the proposed paradigm on prediction tasks over real-world datasets.
Cons:
The motivation for using the pre-training + fine-tuning paradigm is unclear. Unlike image and text data, where labelled datasets are scarce, event datasets naturally include the labels. Thus the motivation of leveraging a large amount of unlabeled data to attain better latent representations does not hold in this setting.
The novelty is limited. The idea of void events makes sense, but it is not novel (as the authors mention). I am also not clear on the necessity of masked events in the pre-training phase. Why not follow the same event prediction loss as in the fine-tuning phase, which is a typical loss function for self-supervised learning with sequential data?
For the experiments, are the baselines trained with both source and target datasets? It would be more promising to compare with typical transfer learning methods.
Clarity, Quality, Novelty And Reproducibility
Since event data naturally include labels, it is unclear why one should follow the self-supervised learning paradigm.
ICLR | Title
A Temporal Kernel Approach for Deep Learning with Continuous-time Information
Abstract
Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with the existing deep learning architectures and implementations. In this paper, we provide a principled way to characterize continuous-time systems using deep learning tools. Notably, the proposed approach applies to all the major deep learning architectures and requires only minor modifications to the implementation. The critical insight is to represent the continuous-time system by composing neural networks with a temporal kernel, where we gain our intuition from the recent advancements in understanding deep learning with Gaussian process and neural tangent kernel. To represent the temporal kernel, we introduce the random feature approach and convert the kernel learning problem to spectral density estimation under reparameterization. We further prove the convergence and consistency results even when the temporal kernel is non-stationary and the spectral density is misspecified. The simulations and real-data experiments demonstrate the empirical effectiveness of our temporal kernel approach in a broad range of settings.
1 INTRODUCTION
Deep learning models have achieved remarkable performances in sequence learning tasks leveraging the powerful building blocks from recurrent neural networks (RNN) (Mikolov et al., 2010), long-short term memory (LSTM) (Hochreiter & Schmidhuber, 1997), causal convolution neural network (CausalCNN/WaveNet) (Oord et al., 2016) and attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017). Their applicability to continuous-time data, on the other hand, is less explored due to the complication of incorporating time when the sequence is irregularly sampled (spaced). The widely-adopted workaround is to study the discretized counterpart instead, e.g. the temporal data is aggregated into bins and then treated as equally spaced, with the hope of approximating the temporal signal using the sequence information. It is perhaps without surprise, as we show in Claim 1, that even for regular temporal sequences the discretization modifies the spectral structure. The gap can only be amplified for irregular data, so discretizing the temporal information will almost always
∗The work was done when the author was with Walmart Labs.
introduce intractable noise and perturbations, which emphasizes the importance of characterizing the continuous-time information directly. Previous efforts to incorporate temporal information into deep learning include concatenating the time or timespan to the feature vector (Choi et al., 2016; Lipton et al., 2016; Li et al., 2017b), learning the generative model of time series as a missing data problem (Soleimani et al., 2017; Futoma et al., 2017), characterizing the representation of time (Xu et al., 2019; 2020; Du et al., 2016) and using neural point processes (Mei & Eisner, 2017; Li et al., 2018). While they provide different tools that extend neural networks to cope with time, the underlying continuous-time system and process are involved only explicitly or implicitly. As a consequence, it remains unknown in what way and to what extent the continuous-time signals interact with the original deep learning model. Explicitly characterizing the continuous-time system (via differential equations), on the other hand, is the major pursuit of classical signal processing methods such as smoothing and filtering (Doucet & Johansen, 2009; Särkkä, 2013). The lack of connections is partly due to the compatibility issues between signal processing methods and the auto-differentiation gradient computation framework of modern deep learning. Generally speaking, for continuous-time systems, model learning and parameter estimation often rely on more complicated differential equation solvers (Raissi & Karniadakis, 2018; Raissi et al., 2018a). Although the intersection of neural networks and differential equations is gaining popularity in recent years, the combined neural differential methods often require involved modifications to both the modelling part and the implementation details (Chen et al., 2018; Baydin et al., 2017).
Inspired by the recent advancements in understanding neural networks with Gaussian processes and the neural tangent kernel (Yang, 2019; Jacot et al., 2018), we discover a natural connection between continuous-time systems and the neural Gaussian process after composing with a temporal kernel. The significance of the temporal kernel is that it fills the gap between signal processing and deep learning: we can explicitly characterize the continuous-time systems while maintaining the usual deep learning architectures and optimization procedures. While kernel composition is also known for integrating signals from various domains (Shawe-Taylor et al., 2004), we face the additional complication of characterizing and learning the unknown temporal kernel in a data-adaptive fashion. Unlike the existing kernel learning methods where at least the parametric form of the kernel is given (Wilson et al., 2016), we have little context on the temporal kernel, and aggressively assuming a parametric form risks altering the temporal structures implicitly, just like discretization. Instead, we leverage Bochner's theorem and its extension (Bochner, 1948; Yaglom, 1987) to first convert the kernel learning problem to the more manageable spectral domain, where we can directly characterize the spectral properties with random (Fourier) features. Representing the temporal kernel by random features is favorable since, as we show, they preserve the existing Gaussian process and NTK properties of neural networks. This is desirable from the deep learning perspective since our approach will not violate the current understandings of deep learning. Then we apply the reparametrization trick (Kingma & Welling, 2013), which is a standard tool for generative models and Bayesian deep learning, to jointly optimize the spectral density estimator. Furthermore, we provide theoretical guarantees for the random-feature-based kernel learning approach when the temporal kernel is non-stationary and the spectral density estimator is misspecified. These two scenarios are essential for practical usage but have not been studied in the previous literature. Finally, we conduct simulations and experiments on real-world continuous-time sequence data to show the effectiveness of the temporal kernel approach, which significantly improves the performance of both standard neural architectures and complicated domain-specific models. We summarize our contributions as follows.
• We study a novel connection between the continuous-time system and neural network via the composition with a temporal kernel.
• We propose an efficient kernel learning method based on random feature representation, spectral density estimation and reparameterization, and provide strong theoretical guarantees when the kernel is non-stationary and the spectral density is misspecified.
• We analyze the empirical performance of our temporal kernel approach for both the standard and domain-specific deep learning models through real-data simulation and experiments.
2 NOTATIONS AND BACKGROUND
We use bold-font letters to denote vectors and matrices. We use x_t and (x, t) interchangeably to denote a time-sensitive event occurring at time t, with t ∈ T ≡ [0, t_max]. Neural networks are denoted by f(θ, x), where x ∈ X ⊂ R^d is the input with diameter(X) ≤ l, and the network parameters θ are sampled i.i.d. from the standard normal distribution at initialization. Without loss of generality, we study the standard L-layer feedforward neural network with its output at the h-th hidden layer given by f^(h) ∈ R^{d_h}. We use ε and ε(t) to denote Gaussian noise and a continuous-time Gaussian noise process, respectively. By convention, we use ⊗ and ◦ to represent the tensor and outer products.
2.1 UNDERSTANDING THE STANDARD NEURAL NETWORK
We follow the settings from Jacot et al. (2018); Yang (2019) to briefly illustrate the limiting Gaussian behavior of f(θ, x) at initialization, and its training trajectory under weak optimization. As d_1, ..., d_L → ∞, the f^(h) tend in law to i.i.d. Gaussian processes with covariance Σ^h ∈ R^{d_h×d_h}: f^(h) ∼ N(0, Σ^h), which we refer to as the neural network kernel to distinguish it from the other kernel notions. Also, given a training dataset {x_i, y_i}_{i=1}^n, let f(θ^(s)) = (f(θ^(s), x_1), ..., f(θ^(s), x_n)) be the network outputs at the s-th training step and y = (y_1, ..., y_n). Using the squared loss for example, when training with an infinitesimal learning rate, the outputs follow:

$$df\big(\theta^{(s)}\big)/ds = -\Theta^{(s)} \times \big(f\big(\theta^{(s)}\big) - y\big),$$

where Θ^(s) is the neural tangent kernel (NTK). The detailed formulations of Σ^h and Θ^(s) are provided in Appendix A.2. We introduce the two concepts here because:
1. instead of incorporating time into f(θ, x), which is then subject to its specific structures, can we alternatively consider a universal approach that expands Σ^h to the temporal domain, such as by composing it with a time-aware kernel?
2. When jointly optimizing the unknown temporal kernel and the model parameters, how can we preserve the results on the training trajectory with NTK?
In our paper, we show that both goals are achieved by representing a temporal kernel via random features.
2.2 DIFFERENCE BETWEEN CONTINUOUS-TIME AND ITS DISCRETIZATION
We now discuss the gap between a continuous-time process and its equally-spaced discretization. We study the simple univariate continuous-time system f(t):

$$\frac{d^2 f(t)}{dt^2} + a_0 \frac{df(t)}{dt} + a_1 f(t) = b_0\, \epsilon(t). \qquad (1)$$
A discretization with a fixed interval is then given by: f[i] = f(i × interval) for i = 1, 2, .... Notice that f(t) is a second-order auto-regressive process, so both f(t) and f[i] are stationary. Recall that the covariance function for a stationary process is given by k(t) := cov(f(t_0), f(t_0 + t)), and the spectral density function (SDF) is defined as

$$s(\omega) = \int_{-\infty}^{\infty} \exp(-i\omega t)\, k(t)\, dt.$$
Claim 1. The spectral density functions of f(t) and f[i] are different.
The proof is relegated to Appendix A.2.2. The key takeaway from the example is that the spectral density function, which characterizes the signal on the frequency domain, is altered implicitly even by regular discretization in this simple case. Hence, we should be cautious about the potential impact of the modelling assumption, which eventually motivates us to explicitly model the spectral distribution.
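As a quick numerical illustration of this phenomenon (our own sketch, not part of the paper's analysis), one can compare the continuous-time SDF with the folded spectrum of its discretization, using the folding relation s_a(ω) = (1/a) Σ_k s((ω + 2kπ)/a) derived in Appendix A.2.2; the parameter values below are arbitrary choices satisfying the constraints of that proof.

```python
import numpy as np

def s_cont(w, a1=1.0, a2=-1.0, b1=1.0, b2=2.0):
    # Continuous-time SDF of the form a1/(w^2 + b1^2) + a2/(w^2 + b2^2),
    # with a1 + a2 = 0 and a1*b2^2 + a2*b1^2 != 0, as in the proof of Claim 1.
    return a1 / (w**2 + b1**2) + a2 / (w**2 + b2**2)

def s_disc(w, a=0.5, K=200):
    # Folded (aliased) spectrum of the discretization with interval a.
    ks = np.arange(-K, K + 1)
    return np.sum(s_cont((w + 2 * np.pi * ks) / a)) / a

w_grid = np.linspace(-np.pi, np.pi, 5)
print(s_cont(w_grid))               # spectrum of f(t)
print([s_disc(w) for w in w_grid])  # spectrum of f[i]: a different shape
```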
3 METHODOLOGY
We first explain our intuition using the above example. If we take the Fourier transform of (1) and rearrange terms, it becomes:

$$\tilde{f}(i\omega) = \Big( \frac{b_0}{(i\omega)^2 + a_0 (i\omega) + a_1} \Big)\, \tilde{\epsilon}(i\omega),$$

where $\tilde{f}(i\omega)$ and $\tilde{\epsilon}(i\omega)$ are the Fourier transforms of f(t) and ε(t). Note that the spectral density of a Gaussian noise process is constant, i.e. $|\tilde{\epsilon}(i\omega)|^2 = p_0$, so the spectral density of f(t) is given by: $s_{\theta_T}(\omega) = p_0 \big| b_0 / \big((i\omega)^2 + a_0 (i\omega) + a_1\big) \big|^2$, where we use θ_T = [a_0, a_1, b_0] to denote the parameters of the linear dynamic system defined in (1). The subscript T is added to distinguish them from the parameters of the neural network. The classical Wiener-Khinchin theorem (Wiener et al., 1930) states that the covariance function of f(t), which is a Gaussian process since the linear differential equation is a linear operation on ε(t), is given by the inverse Fourier transform of the spectral density:

$$K_T(t, t') := k_{\theta_T}(t' - t) = \frac{1}{2\pi} \int s_{\theta_T}(\omega) \exp(i\omega t)\, d\omega, \qquad (2)$$
We defer the discussion of the inverse direction, namely that given a kernel k_{θ_T}(t′ − t) we can also construct a continuous-time system, to Appendix A.3.1. Consequently, there is a correspondence between the parameterization of a stochastic ODE and the kernel of a Gaussian process. The mapping is not necessarily one-to-one; however, it may lead to a more convenient way to alternatively parameterize a continuous-time process using deep learning models, especially given the connections between neural networks and Gaussian processes (which we highlighted in Section 2.1).
To connect the neural network kernel Σ(h) (e.g. for the hth layer of the FFN) to a continuous-time system, the critical step is to understand the interplay between the neural network kernel and the temporal kernel (e.g. the kernel in (2)):
• the neural network kernel characterizes the covariance structures among the hidden representation of data (transformed by the neural network) at any fixed time point;
• the temporal kernel, which corresponds to some continuous-time system, tells how each static neural network kernel propagates forward in time. See Figure 1a for a visual illustration.
Continuing with Example 1, it is straightforward to construct the integrated continuous-time system as:

$$a_2(x)\, \frac{d^2 f(x,t)}{dt^2} + a_1(x)\, \frac{df(x,t)}{dt} + a_0(x)\, f(x,t) = b_0(x)\, \epsilon(x,t), \quad \epsilon(x, t = t_0) \sim N\big(0, \Sigma^{(h)}\big),\ \forall t_0 \in \mathcal{T}, \qquad (3)$$
where we use the neural network kernel Σ^(h) to define the Gaussian process ε(x, t) on the feature dimension, so the ODE parameters are now functions of the data as well. To see that (3) generalizes the h-th layer of a FFN to the temporal domain, we first consider a_2(x) = a_1(x) = 0 and a_0(x) = b_0(x). Then the continuous-time process f(x, t) exactly follows f^(h) at any fixed time point t, and its trajectory on the time axis is simply a Gaussian process. When a_1(x), a_2(x) ≠ 0, f(x, t) still matches f^(h) at the initial point, but its propagation on the time axis becomes nontrivial and is now characterized by the constructed continuous-time system. We can easily extend the setting to incorporate higher-order terms:

$$a_n(x)\, \frac{d^n f(x,t)}{dt^n} + \cdots + a_0(x)\, f(x,t) = b_m(x)\, \frac{d^m \epsilon(x,t)}{dt^m} + \cdots + b_0(x)\, \epsilon(x,t). \qquad (4)$$
Keeping the heuristics in mind, an immediate question is what is the structure of the corresponding kernel function after we combine the continuous-time system with the neural network kernel?
Claim 2. The kernel function for f(x, t) in (4) is given by: Σ^(h)_T(x, t; x′, t′) = k_{θ_T}(x, t; x′, t′) · Σ^(h)(x, x′), where θ_T is the underlying parameterization of {a_i(·)}_{i=1}^n and {b_i(·)}_{i=1}^m as functions of x. When {a_i}_{i=1}^n and {b_i}_{i=1}^m are scalars, Σ^(h)_T(x, t; x′, t′) = k_{θ_T}(t, t′) · Σ^(h)(x, x′).
We defer the proof and the discussion of the inverse direction (from temporal kernel to continuous-time system) to Appendix A.3.1. Claim 2 shows that it is possible to expand any layer of a standard neural network to the temporal domain, as part of a continuous-time system, using kernel composition. The composition is flexible and can happen at any hidden layer. In particular, given the temporal kernel K_T and neural network kernel Σ^(h), we obtain the neural-temporal kernel on X × T: Σ^(h)_T = diag(Σ^(h) ⊗ K_T), where diag(·) is the partial diagonalization operation on X:

$$\Sigma^{(h)}_T(x, t; x', t') = \Sigma^{(h)}(x, x') \cdot K_T(x, t; x', t'). \qquad (5)$$

The above argument shows that instead of taking care of both the deep learning and the continuous-time system, which remains challenging for general architectures, we can convert the problem to finding a suitable temporal kernel. We further point out that when using neural networks, we are parameterizing the hidden representation (feature lift) in the feature space rather than the kernel function in the kernel space. Therefore, to give a consistent characterization, we should also study the feature representation of the temporal kernel and then combine it with the hidden representations of the neural network.
3.1 THE RANDOM FEATURE REPRESENTATION FOR TEMPORAL KERNEL
We start by considering the simpler case where the temporal kernel is stationary and independent of features: KT (t, t′) = k(t′ − t), for some properly scaled positive even function k(·). The classical Bochner’s theorem (Bochner, 1948) states that:
$$\psi(t' - t) = \int_{\mathbb{R}} e^{-i(t'-t)\omega}\, ds(\omega), \ \text{for some probability density function } s \text{ on } \mathbb{R}, \qquad (6)$$

where s(·) is the spectral density function we highlighted in Section 2.2. To compute the integral, we may sample (ω_1, ..., ω_m) from s(ω) and use the Monte Carlo method: ψ(t′ − t) ≈ (1/m) Σ_{i=1}^m e^{−i(t′−t)ω_i}. Since e^{−i(t′−t)ω} = cos((t′ − t)ω) − i sin((t′ − t)ω), for the real part we let:

$$\phi(t) = \frac{1}{\sqrt{m}} \big[ \cos(t\omega_1), \sin(t\omega_1), \ldots, \cos(t\omega_m), \sin(t\omega_m) \big], \qquad (7)$$
and it is easy to check that ψ(t′ − t) ≈ ⟨φ(t), φ(t′)⟩. Since φ(t) is constructed from random samples, we refer to it as the random feature representation of K_T. Random features have been extensively studied in the kernel machine literature; here, however, we propose a novel application of random features to parametrize an unknown kernel function. A straightforward idea is to parametrize the spectral density function s(ω), whose pivotal role has been highlighted in Section 2.2 and Example 1.
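As a sanity check of the stationary construction (our own minimal sketch, not taken from the paper's code), sampling ω from a standard normal spectral density should make ⟨φ(t), φ(t′)⟩ approximate exp(−(t′ − t)²/2), the kernel whose SDF is the standard normal density:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2000
omega = rng.standard_normal(m)  # samples from the spectral density s(w) = N(0, 1)

def phi(t):
    # Random feature map of eq. (7): [cos(t w_1), sin(t w_1), ...] / sqrt(m).
    return np.concatenate([np.cos(t * omega), np.sin(t * omega)]) / np.sqrt(m)

t, t_prime = 0.3, 1.1
approx = phi(t) @ phi(t_prime)
exact = np.exp(-0.5 * (t_prime - t) ** 2)
print(approx, exact)  # the two values should be close for large m
```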
Suppose θ_T are the distribution parameters of s(ω). Then φ_{θ_T}(t) is also (implicitly) parameterized by θ_T through the samples {ω_i(θ_T)}_{i=1}^m from s(ω). The idea resembles the reparameterization trick for training variational objectives (Kingma & Welling, 2013), which we formalize in the next section. For now, it remains unknown whether we can also obtain a random feature representation for non-stationary kernels, where Bochner's theorem is not applicable. Note that for a general temporal kernel K_T(x, t; x′, t′), in practice, it is not reasonable to assume stationarity, especially on the feature domain. In Proposition 1, we provide a rigorous result that generalizes the random feature representation to non-stationary kernels with a convergence guarantee.

Proposition 1. For any (scaled) continuous non-stationary PDS kernel K_T on X × T, there exists a joint probability measure with spectral density function s(ω_1, ω_2) such that K_T((x, t), (x′, t′)) = E_{s(ω_1, ω_2)}[φ(x, t)ᵀφ(x′, t′)], where φ(x, t) is given by:

$$\phi(x, t) = \frac{1}{2\sqrt{m}} \Big[ \ldots,\ \cos\big([x, t]^\top \omega_{1,i}\big) + \cos\big([x, t]^\top \omega_{2,i}\big),\ \sin\big([x, t]^\top \omega_{1,i}\big) + \sin\big([x, t]^\top \omega_{2,i}\big),\ \ldots \Big]. \qquad (8)$$

Here, {(ω_{1,i}, ω_{2,i})}_{i=1}^m are the m samples from s(ω_1, ω_2). When the sample size

$$m \geq \frac{8(d+1)}{\epsilon^2} \log\Big( C(d) \big( l^2 t_{\max}^2 \sigma_p / \epsilon \big)^{\frac{2d+2}{d+3}} / \delta \Big),$$

then with probability at least 1 − δ, for any ε > 0,

$$\sup_{(x,t), (x',t')} \Big| K_T\big((x, t), (x', t')\big) - \phi(x, t)^\top \phi(x', t') \Big| \leq \epsilon, \qquad (9)$$

where σ_p² is the second moment of the spectral density function s(ω_1, ω_2) and C(d) is a constant.
We defer the proof to Appendix A.3.2. It is obvious that the new random feature representation in (8) is a generalization of the stationary setting. There are two advantages to using the random feature representation:

• the composition in the kernel space suggested by (5) is equivalent to the computationally efficient operation f^(h)(x) ◦ φ(x, t) in the feature space (Shawe-Taylor et al., 2004);
• we preserve a similar Gaussian process behavior and the neural tangent kernel results that we discussed in Section 2.1, and we defer the discussion and proof to Appendix A.3.3.
In the forward-pass computations, we simply replace the original hidden representation f^(h)(x) by the time-aware representation f^(h)(x) ◦ φ(x, t). Also, the existing methods and results for analyzing neural networks through Gaussian processes and NTK, though not emphasized in this paper, carry over directly to the temporal setting (see Appendix A.3.3).
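A minimal sketch of this composition step follows (our own illustration; the feature map implements eq. (8), the placeholder standard-normal spectral samples and the random vector standing in for f^(h)(x) are assumptions made for the sake of the example, and ◦ is realized as the vectorized outer product used throughout the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, d_h = 64, 4, 32

# Placeholder spectral samples (w_1, w_2) over R^{d+1} (features plus time).
w1 = rng.standard_normal((m, d + 1))
w2 = rng.standard_normal((m, d + 1))

def phi(x, t):
    # Non-stationary random feature map of eq. (8) on the joint input z = [x, t].
    z = np.concatenate([x, [t]])
    return np.concatenate([np.cos(w1 @ z) + np.cos(w2 @ z),
                           np.sin(w1 @ z) + np.sin(w2 @ z)]) / (2 * np.sqrt(m))

x, t = rng.standard_normal(d), 0.7
f_h = rng.standard_normal(d_h)                 # stand-in for f^(h)(x)
time_aware = np.outer(f_h, phi(x, t)).ravel()  # f^(h)(x) ∘ φ(x, t), vectorized
```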
3.2 REPARAMETERIZATION WITH THE SPECTRAL DENSITY FUNCTION
We now present the gradient computation for the parameters of the spectral distribution using only their samples. We start from the well-studied case where s(ω) is given by a normal distribution N(µ, Σ) with parameters θ_T = [µ, Σ]. When computing the gradients with respect to θ_T, instead of sampling from the intractable distribution s(ω), we reparameterize each sample ω_i via: Σ^{1/2} ε + µ, where ε is sampled from a standard multivariate normal distribution. The gradient computations that relied on ω are now carried out using the easy-to-sample ε, and θ_T = [µ, Σ] become tractable parameters in the model given ε. We illustrate reparameterization in our setting in the following example. Example 1. Consider a single-dimension homogeneous linear model: f(θ, x) = f^(0)(x) = θx. Without loss of generality, we use only a single sample ω_1 from s(ω), which corresponds to the feature-independent temporal kernel k_θ(t, t′). Again, we assume s(ω) ∼ N(µ, σ). Then the time-aware hidden representation of this layer for datapoint (x_1, t_1) is given by:
$$f^{(0)}_{\theta,\mu,\sigma}(x_1, t_1) = \frac{1}{\sqrt{2}} \big[ \theta x_1 \cos(t_1 \omega_1),\ \theta x_1 \sin(t_1 \omega_1) \big], \quad \omega_1 \sim N(\mu, \sigma).$$

Using the reparameterization, given a sample ε_1 from the standard normal distribution, we have:

$$f^{(0)}_{\theta,\mu,\sigma}(x_1, t_1) = \frac{1}{\sqrt{2}} \Big[ \theta x_1 \cos\big(t_1 (\sigma^{1/2} \epsilon_1 + \mu)\big),\ \theta x_1 \sin\big(t_1 (\sigma^{1/2} \epsilon_1 + \mu)\big) \Big], \qquad (10)$$
so the gradients with respect to all the parameters (θ, µ, σ) can be computed in the usual way.
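A minimal PyTorch sketch of eq. (10) follows (our own illustration; parameterizing σ through its logarithm is an implementation convenience we add to keep σ positive). It confirms that gradients with respect to (θ, µ, σ) flow through the reparameterized sample:

```python
import torch

torch.manual_seed(0)
theta = torch.tensor(0.5, requires_grad=True)
mu = torch.tensor(0.0, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)  # sigma = exp(log_sigma) > 0

x1, t1 = torch.tensor(2.0), torch.tensor(0.7)
eps1 = torch.randn(())                        # sample from the auxiliary N(0, 1)
omega1 = log_sigma.exp().sqrt() * eps1 + mu   # reparameterized sigma^(1/2) * eps + mu

# Time-aware representation of eq. (10).
f0 = torch.stack([theta * x1 * torch.cos(t1 * omega1),
                  theta * x1 * torch.sin(t1 * omega1)]) / 2 ** 0.5

f0.sum().backward()
print(theta.grad, mu.grad, log_sigma.grad)    # all gradients are well-defined
```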
Despite the computational advantage, the spectral density is now learnt from the data instead of being given, so the convergence result in Proposition 1 does not provide a sample-consistency guarantee. In practice, we may also misspecify the spectral distribution and bring in extra intractable factors.
To provide practical guarantees, we first introduce several notations: let K_T(S) be the temporal kernel represented by random features such that K_T(S) = E[φᵀφ], where the expectation is taken with respect to the data distribution and the random feature vector φ has its samples {ω_i}_{i=1}^m drawn from the spectral distribution S. Without abuse of notation, we use φ ∼ S to denote the dependency of the random feature vector φ on the spectral distribution S provided in (8). Given a neural network kernel Σ^(h), the neural temporal kernel is then denoted by: Σ^(h)_T(S) = Σ^(h) ⊗ K_T(S). The sample version of Σ^(h)_T(S) for the dataset {(x_i, t_i)}_{i=1}^n is given by:

$$\hat{\Sigma}^{(h)}_T(S) = \frac{1}{n(n-1)} \sum_{i \neq j} \Sigma^{(h)}(x_i, x_j)\, \phi(x_i, t_i)^\top \phi(x_j, t_j), \quad \phi \sim S. \qquad (11)$$

If the spectral distribution S is fixed and given, then using standard techniques and Theorem 1 it is straightforward to show lim_{n→∞} Σ̂^(h)_T(S) → E[Σ̂^(h)_T(S)], so the proposed learning schema is sample-consistent.
In our case, the spectral distribution is learnt from the data, so we need some restrictions on the spectral distribution in order to obtain any consistency guarantee. The intuition is that if S_{θ_T} does not diverge from the true S, e.g. d(S_{θ_T} ‖ S) ≤ δ for some divergence measure, then the guarantee on S can transfer to S_{θ_T}, with the rate only suffering a discount that does not depend on n.
Theorem 1. Consider the f-divergence such that d(S_{θ_T} ‖ S) = ∫ c(dS_{θ_T}/dS) dS, with the generator function c(x) = x^k − 1 for any k > 0. Given the neural network kernel Σ^h, let M = ‖Σ^h‖_∞. Then

$$\Pr\Big( \sup_{d(S_{\theta_T} \| S) \leq \delta} \big| \hat{\Sigma}^{(h)}_T(S_{\theta_T}) - \mathbb{E}\big[\hat{\Sigma}^{(h)}_T(S_{\theta_T})\big] \big| \geq \epsilon \Big) \leq \sqrt{2}\, \exp\Big( \frac{-n\epsilon^2}{64 \max\{4, M\}(\delta + 1)} \Big) + C(\epsilon), \qquad (12)$$

where $C(\epsilon) \propto \big( 2 l^2 t_{\max}^2 \sigma_{S_{\theta_T}} / (\epsilon \max\{4, M\}) \big)^{\frac{2d+2}{d+3}} \exp\big( -\frac{d_h \epsilon^2}{32 \max\{16, M^2\}(d+3)} \big)$ does not depend on δ.
The proof is provided in Appendix A.3.4. The key takeaway from (12) is that as long as the divergence between the learnt S_{θ_T} and the true spectral distribution is bounded, we still achieve sample consistency. Therefore, instead of specifying a distribution family, which is more likely to suffer from misspecification, we are motivated to employ a universal distribution approximator such as an invertible neural network (INN) (Ardizzone et al., 2018). An INN consists of a series of invertible operations that transform samples from a known auxiliary distribution (such as a normal distribution) into arbitrarily complex distributions. The Jacobians that characterize the changes of distributions are kept invertible by the INN, so the gradient flow is computationally tractable, similar to the case in Example 1. We defer the detailed discussions to Appendix A.3.5. Remark 1. It is clear at this point that the temporal kernel approach applies to all neural networks that have a Gaussian process behavior with a valid neural network kernel, which includes the major architectures such as CNN, RNN and the attention mechanism (Yang, 2019).
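For illustration, one common building block of such INNs is the affine coupling layer; the sketch below is our own minimal version and makes no claim about the exact architecture used in our experiments. It transforms auxiliary normal samples into spectral samples while keeping the transformation invertible (given the untouched half) and the gradient flow tractable:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Split the input in half; one half predicts an elementwise
    scale/shift applied to the other half, which keeps the map invertible."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))  # outputs [log_scale, shift]

    def forward(self, eps):
        e1, e2 = eps.chunk(2, dim=-1)
        log_s, b = self.net(e1).chunk(2, dim=-1)
        return torch.cat([e1, e2 * log_s.exp() + b], dim=-1)

# Transform auxiliary N(0, I) samples into samples of the learnt spectral density.
flow = AffineCoupling(dim=4)
eps = torch.randn(128, 4)   # one row per spectral sample
omega = flow(eps)           # reparameterized samples; gradients flow into `flow`
```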
For implementation, at each forward and backward computation, we first sample from the auxiliary distribution to construct the random feature representation φ using reparameterization, and then compose it with the selected hidden layer f (h) such as in (10). We illustrate the computation architecture in Figure 2, where we adapt the vanilla RNN to the proposed framework. In Algorithm 1, we provide the detailed forward and backward computations, using the L-layer FFN from the previous sections as an example.
4 RELATED WORK
The earliest works that discuss training continuous-time neural networks date back to LeCun et al. (1988); Pearlmutter (1995), but no feasible solution was proposed at that time. The proposed approach relates to several fields that are under active research.
ODE and neural network. Certain neural architectures, such as the residual network, have been interpreted as approximate ODE solvers (Lu et al., 2018). More direct approaches have been proposed to learn differential equations from data (Raissi & Karniadakis, 2018; Raissi et al., 2018a;
Long et al., 2018), and significant efforts have been spent on developing solvers that combine ODE and the back-propagation framework (Farrell et al., 2013; Carpenter et al., 2015; Chen et al., 2018). The closest literature to our work is from Raissi et al. (2018b) who design numerical Gaussian process resulting from temporal discretization of time-dependent partial differential equations.
Random feature and kernel machine learning. In supervised learning, the kernel trick provides a powerful tool to characterize non-linear data representations (Shawe-Taylor et al., 2004), but the computation complexity is overwhelming for large dataset. The random (Fourier) feature approach proposed by Rahimi & Recht (2008) provides substantial computation benefits. The existing literature on analyzing the random feature approach all assume the kernel function is fixed and stationary (Yang et al., 2012; Sutherland & Schneider, 2015; Sriperumbudur & Szabó, 2015; Avron et al., 2017).
Reparameterization and INN. Computing the gradient for intractable objectives using samples from auxiliary distribution dates back to the policy gradient method in reinforcement learning (Sutton et al., 2000). In recent years, the approach gains popularity for training generative models (Kingma & Welling, 2013), other variational objectives (Blei et al., 2017) and Bayesian neural networks (Snoek et al., 2015). INN are often employed to parameterize the normalizing flow that transforms a simple distribution into a complex one by applying a sequence of invertible transformation functions (Dinh et al., 2014; Ardizzone et al., 2018; Kingma & Dhariwal, 2018; Dinh et al., 2016).
Our approach characterizes the continuous-time ODE through the lens of kernels. It complements the existing neural ODE methods, which are often restricted to specific architectures, rely on ODE solvers and lack theoretical understanding. We also propose a novel deep kernel learning approach by parameterizing the spectral distribution under a random feature representation, which is conceptually different from using a temporal kernel for time-series classification (Li & Marlin, 2015). Our work is an extension of Xu et al. (2019; 2020), which study the case of the self-attention mechanism.
Algorithm 1: Forward pass and parameter update, using the L-layer FFN as an example.
Input: the FFN f(θ, ·) = {f^(1)(θ, ·), ..., f^(L)(θ, ·)}; the invertible neural network g(ψ, ·); the selected hidden layer h; the loss ℓ_i associated with each input (x_i, t_i); the auxiliary distribution P.
for each mini-batch do
    Sample {ε_{1,j}, ε_{2,j}}_{j=1}^m from the auxiliary distribution P;
    Compute the reparameterized samples ω using the INN g(ψ, ·), e.g. ω_{1,j}(ψ) := g(ψ, ε_{1,j});
    for sample i in the batch do
        Construct the random feature representation φ_ψ(x_i, t_i) using the reparameterized samples (so φ is now explicitly parameterized by ψ) according to eq. (8);
        Forward pass: get f^(h)(θ, x_i), let f^(h)((θ, ψ), x_i, t_i) := f^(h)(θ, x_i) ◦ φ_ψ(x_i, t_i), then pass it to the following feedforward layers to obtain the final output ŷ_i;
        Gradient computation: compute the gradients ∇_θ ℓ_i(ŷ_i) and ∇_ψ ℓ_i(ŷ_i) for the FFN and the INN respectively, conditioned on the samples from the auxiliary distribution;
    end
    Update the parameters using the selected optimizer in a standard batch-wise fashion.
end
It is straightforward from Figure 2 and Algorithm 1 that the proposed approach serves as a plug-in module and does not modify the original network structures of the RNN and FFN.
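Putting the pieces together, one training step can be sketched as follows (our own simplified illustration of Algorithm 1: a two-layer FFN stands in for an arbitrary backbone, and a single linear map stands in for the INN g(ψ, ·)):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, d_h, m = 4, 16, 32
backbone = nn.Sequential(nn.Linear(d, d_h), nn.ReLU())   # layers up to f^(h)
head = nn.Linear(d_h * 2 * m, 1)                          # layers after composition
flow = nn.Linear(2 * (d + 1), 2 * (d + 1))                # stand-in for the INN g(psi, .)
params = [*backbone.parameters(), *head.parameters(), *flow.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

x, t, y = torch.randn(8, d), torch.rand(8, 1), torch.randn(8, 1)  # a mini-batch

eps = torch.randn(m, 2 * (d + 1))            # auxiliary samples
w1, w2 = flow(eps).chunk(2, dim=-1)          # reparameterized (w_1, w_2) pairs
z = torch.cat([x, t], dim=-1)                # joint inputs [x, t]
feat = torch.cat([torch.cos(z @ w1.T) + torch.cos(z @ w2.T),
                  torch.sin(z @ w1.T) + torch.sin(z @ w2.T)], dim=-1) / (2 * m ** 0.5)

h = backbone(x)                                              # f^(h)(theta, x)
composed = (h.unsqueeze(-1) * feat.unsqueeze(1)).flatten(1)  # f^(h) ∘ φ, vectorized
loss = ((head(composed) - y) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()                 # update theta and psi jointly
```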
5 EXPERIMENTS AND RESULTS
We focus on revealing the two major advantages of the proposed temporal kernel approach:
• the temporal kernel approach consistently improves the performance of deep learning models, both for the general architectures such as RNN, CausalCNN and attention mechanism as well as the domain-specific architectures, in the presence of continuous-time information;
• the improvement is not at the cost of computation efficiency and stability, and we outperform the alternative approaches that also apply to general deep learning models.
We point out that the neural point process and the ODE neural networks have only been shown to work for certain model architectures, so we are unable to compare with them in all settings.
Time series prediction with standard neural networks (real-data and simulation)
We conduct time series prediction tasks using the vanilla RNN, CausalCNN and self-attention mechanism with our temporal kernel approach (Figure A.1). We choose the classical Jena weather data for temperature prediction, and the Wikipedia traffic data to predict the number of visits of Wikipedia pages. Both datasets have vectorized features and are regularly sampled. To illustrate the advantage of leveraging the temporal information compared with using only sequential information, we first conduct the ordinary next-step prediction on the regular observations, which we refer to as Case1. To fully illustrate our capability of handling irregular continuous-time information, we consider two simulation settings that generate irregular continuous-time sequences for prediction:
Case2. we sample irregularly from the history, i.e. x_{t_1}, ..., x_{t_q}, q ≤ k, to predict x_{t_{k+1}}; Case3. we use the full history to predict a dynamic future point, i.e. x_{t_{k+q}} for a random q.
We provide the complete data description, preprocessing, and implementation in Appendix B. We use the following two widely-adopted time-aware modifications for neural networks (denoted by NN) as baselines, as well as the classical vectorized autoregression model (VAR). NN+time: we directly concatenate the timespan, e.g. t_j − t_i, to the feature vector. NN+trigo: we concatenate learnable sine and cosine features, e.g. [sin(π_1 t), ..., sin(π_k t)], to the feature vector, where {π_i}_{i=1}^k are free model parameters. We denote our temporal kernel approach by T-NN. From Figure 3, we see that the temporal kernel outperforms the baselines in all cases when the time series is irregularly sampled (Case2 and Case3), suggesting the effectiveness of the temporal kernel approach in capturing and utilizing the continuous-time signals. Even for the regular Case1 reported in Table A.1, the temporal kernel approach gives the best results, which again emphasizes the advantage of directly characterizing the temporal information over discretization. We also show in the ablation studies (Appendix B.5) that the INN is necessary for achieving superior performance compared with specifying a distribution family. To demonstrate stability and robustness, we provide sensitivity analysis in Appendix B.6 for model selection and the INN structures.
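For reference, the two time-aware baselines amount to simple feature concatenations; a minimal sketch (our own, with hypothetical tensor names) is:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 10, 4)   # (batch, sequence length, features)
t = torch.rand(8, 10, 1)    # continuous timestamps

# NN+time: concatenate the raw timespan to the feature vector.
x_time = torch.cat([x, t], dim=-1)

# NN+trigo: concatenate learnable trigonometric features of time.
pi = nn.Parameter(torch.randn(1, 1, 8))              # free frequencies {pi_i}
x_trigo = torch.cat([x, torch.sin(t * pi)], dim=-1)
```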
Temporal sequence learning with complex domain models
Now we study the performance of our temporal kernel approach on the sequential recommendation task with more complicated domain-specific two-tower architectures (Appendix B.2). Temporal information is known to be critical for understanding customer intentions, so we choose the two public e-commerce datasets from Alibaba and Walmart.com, and examine next-purchase recommendation. To illustrate our flexibility, we select GRU-based, CNN-based and attention-based recommendation models from the recommender system domain (Hidasi et al., 2015; Li et al., 2017a) and equip them with the temporal kernel. The detailed settings, ablation studies and sensitivity analyses are all in Appendix B. The results are shown in Table A.2. We observe that the temporal kernel approach brings various degrees of improvement to the recommendation models by characterizing
the continuous-time information. The positive results on the recommendation task also suggest the potential of our approach to make an impact in broader domains.
6 DISCUSSION
In this paper, we discuss the insufficiency of existing work on characterizing continuous-time data with deep learning models and describe a principled temporal kernel approach that expands neural networks to characterize continuous-time data. The proposed learning approach has strong theoretical guarantees, and can be easily adapted to a broad range of applications such as deep spatial-temporal modelling, outlier and burst detection, and generative modelling for time series data.
Scope and limitation. Although the temporal kernel approach is motivated by the limiting-width Gaussian behavior of neural networks, in practice it suffices to use regular widths, as we did in our experiments (see Appendix B.2 for the configurations). Therefore, there are still gaps between our theoretical understandings and the observed empirical performance, which require more dedicated analysis. One possible direction is to apply the techniques in Daniely et al. (2016) to characterize the dual kernel view of finite-width neural networks. The technical detail, however, will be more involved. It is also arguably true that we build the connection between the temporal kernel view and continuous-time systems in an indirect fashion, compared with the ODE neural networks. However, our approach is fully compatible with the deep learning subroutines, while the end-to-end ODE neural networks require substantial modifications to the modelling and implementation. Nevertheless, ODE neural networks are (in theory) capable of modelling more complex systems where the continuous-time setting is a special case. Our work, on the other hand, is dedicated to the temporal setting.
A APPENDIX
We provide the omitted proofs, detailed discussions, extensions and complete numerical results.
A.1 NUMERICAL RESULTS FOR SECTION 5
A.2 SUPPLEMENTARY MATERIAL FOR SECTION 2
We discuss the detailed background for the Gaussian process behavior of neural networks and the training trajectory under the neural tangent kernel, as well as the proof for Claim 1.
A.2.1 GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL FOR DEEP LEARNING MODELS
The Gaussian process (GP) view of neural networks at random initialization was originally discussed in (Neal, 2012). Recently, CNN and other standard neural architectures have all been recognized as functions drawn from GP in the limit of infinite network width (Novak et al., 2018; Yang, 2019). When trained by gradient descent under infinitesimal step schedule, the gradient flow of the standard neural architectures can be described by the notion of Neural Tangent Kernel (NTK) whose asymptotic behavior under infinite network width is known (Jacot et al., 2018). The discovery of NTK has led to several papers studying the training and generalization properties of neural networks (Allen-Zhu et al., 2019; Arora et al., 2019a;b).
For an L-layer FFN f(θ, x) = f^(L) with hidden dimensions {d_h}_{h=1}^L, recursively defined via:

$$f^{(L)} = W^{(L)} f^{(L-1)}(x) + b^{(L)}, \quad f^{(h)}(x) = \frac{1}{\sqrt{d_h}}\, W^{(h)} \sigma\big(f^{(h-1)}(x)\big) + b^{(h)}, \quad f^{(0)}(x) = x, \qquad (A.1)$$

for h = 1, 2, ..., L − 1, where σ(·) is the activation function, and the layer weights W^(L) ∈ R^{d_{L−1}}, W^(h) ∈ R^{d_{h−1}×d_h} and intercepts are initialized by sampling independently from N(0, 1) (without loss of generality). As d_1, ..., d_L → ∞, the f^(h) tend in law to i.i.d. Gaussian processes with covariance Σ^h defined recursively, as shown by Neal (2012):

$$\Sigma^{(1)}(x, x') = \frac{1}{d_1}\, x^\top x' + 1, \quad \Sigma^{(h)}(x, x') = \mathbb{E}_{f \sim N(0, \Sigma^{(h-1)})}\big[ \sigma\big(f(x)\big)\, \sigma\big(f(x')\big) \big] + 1. \qquad (A.2)$$
We also refer to Σ(h) as the neural network kernel to distinguish from the other kernel notions. Given a training dataset {xi, yi}ni=1, let f ( θ(s) ) = ( f(θ(s),x1), . . . , f(θ(s),xn) ) be the network outputs at the sth training step and y = (y1, . . . , yn).
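The recursion (A.2) can be approximated numerically by Monte Carlo; the sketch below is our own illustration, using ReLU as the activation and the input dimension as the width normalizer purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda u: np.maximum(u, 0.0)

def next_kernel(K, n_mc=100000):
    # One step of (A.2): draw (f(x), f(x')) ~ N(0, K), average
    # sigma(f(x)) * sigma(f(x')), then add the bias term.
    f = rng.multivariate_normal(np.zeros(2), K, size=n_mc)
    return np.mean(relu(f[:, 0]) * relu(f[:, 1])) + 1.0

x, xp = np.array([1.0, 0.0]), np.array([0.6, 0.8])
d = len(x)
K1 = np.array([[x @ x / d + 1, x @ xp / d + 1],
               [xp @ x / d + 1, xp @ xp / d + 1]])  # Sigma^(1) on the pair (x, x')
sigma2 = next_kernel(K1)  # Monte Carlo estimate of Sigma^(2)(x, x')
```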
When training the network by minimizing the squared loss ℓ(θ) with an infinitesimal learning rate, i.e. dθ^(s)/ds = −∇ℓ(θ^(s)), the network outputs at training step s follow the evolution (Jacot et al., 2018):

$$\frac{df(\theta^{(s)})}{ds} = -\Theta^{(s)} \times \big(f(\theta^{(s)}) - y\big), \quad \big[\Theta^{(s)}\big]_{ij} = \Big\langle \frac{\partial f(\theta^{(s)}, x_i)}{\partial \theta}, \frac{\partial f(\theta^{(s)}, x_j)}{\partial \theta} \Big\rangle. \qquad (A.3)$$

The above Θ^(s) is referred to as the NTK, and recent results show that when the network widths go to infinity (or are sufficiently large), Θ^(s) converges to a fixed Θ_0 almost surely (or with high probability). For a standard L-layer FFN, the NTK Θ_0 = Θ^(L)_0 for the parameters {W^(h), b^(h)} on the h-th layer can also be computed recursively:

$$\Theta^{(h)}_0(x_i, x_j) = \Sigma^{(h)}(x_i, x_j), \quad \dot{\Sigma}^{(k)}(x_i, x_j) = \mathbb{E}_{f \sim N(0, \Sigma^{(k-1)})}\big[ \dot{\sigma}(f(x_i))\, \dot{\sigma}(f(x_j)) \big],$$
$$\text{and} \quad \Theta^{(k)}_0(x_i, x_j) = \Theta^{(k-1)}_0(x_i, x_j)\, \dot{\Sigma}^{(k)}(x_i, x_j) + \Sigma^{(k)}(x_i, x_j), \quad k = h+1, \ldots, L. \qquad (A.4)$$
A number of optimization and generalization properties of neural networks can be studied using NTK, for which we refer the interested reader to (Lee et al., 2019; Allen-Zhu et al., 2019; Arora et al., 2019a;b). We also point out that the above GP and NTK constructions can be carried out on all standard neural architectures including CNN, RNN and the attention mechanism (Yang, 2019).
A.2.2 PROOF FOR CLAIM 1
In this part, we denote the continuous-time system by X(t) in order to introduce the full set of notations needed for our proof, where the length of the discretized interval is explicitly considered. Note that we especially constructed the example in Section 2 so that the derivations are not too cumbersome. However, the techniques we use here can be extended to prove the more complicated settings.
Proof. Consider X(t) to be a second-order continuous-time autoregressive process with covariance function k(t) and spectral density function (SDF) s(ω) such that s(ω) = ∫_{−∞}^{∞} exp(−iωt) k(t) dt. The covariance function of the discretization X_a[n] = X(na), with any fixed interval a > 0, is then given by k_a[n] = k(na). According to standard results in time series, the SDF of X(t) is given in the form of:

$$s(\omega) = \frac{a_1}{\omega^2 + b_1^2} + \frac{a_2}{\omega^2 + b_2^2}, \quad a_1 + a_2 = 0, \quad a_1 b_2^2 + a_2 b_1^2 \neq 0. \qquad (A.5)$$

We assume without loss of generality that b_1, b_2 are positive numbers. Note that the kernel function for X_a[n] can also be given by

$$k_a[n] = \int_{-\infty}^{\infty} \exp(ian\omega)\, s(\omega)\, d\omega = \frac{1}{a} \sum_{k=-\infty}^{\infty} \int_{(2k-1)\pi}^{(2k+1)\pi} \exp(in\omega)\, s(\omega/a)\, d\omega = \frac{1}{a} \int_{-\infty}^{\infty} \exp(in\omega) \sum_{k=-\infty}^{\infty} s\Big(\frac{\omega + 2k\pi}{a}\Big)\, d\omega, \qquad (A.6)$$

which suggests that the SDF of the discretization X_a[n] can be given by:

$$s_a(\omega) = \frac{1}{a} \sum_{k=-\infty}^{\infty} s\Big(\frac{\omega + 2k\pi}{a}\Big) = \frac{a_1}{2} \Big( \frac{e^{2ab_1} - 1}{b_1 |e^{ab_1} - e^{i\omega}|^2} - \frac{e^{2ab_2} - 1}{b_2 |e^{ab_2} - e^{i\omega}|^2} \Big) = \frac{a_1 \big(d_1 - 2 d_2 \cos(\omega)\big)}{2 b_1 b_2 |(e^{ab_1} - e^{i\omega})(e^{ab_2} - e^{i\omega})|^2}, \qquad (A.7)$$

where d_2 = b_2 e^{ab_2}(e^{2ab_1} − 1) − b_1 e^{ab_1}(e^{2ab_2} − 1). By the definition of a discrete-time autoregressive process, X_a[n] is a second-order AR process only if d_2 = 0, which happens if and only if: b_2/b_1 = (e^{ab_2} − e^{−ab_2})/(e^{ab_1} − e^{−ab_1}). However, the function g(x) = exp(ax) − exp(−ax) is strictly convex on [0, ∞) (since the time interval a > 0) with g(0) = 0, so g(x)/x is strictly increasing and the above equality can hold only if b_1 = b_2. However, this contradicts (A.5), since a_1 + a_2 = 0 and a_1 b_2^2 + a_2 b_1^2 ≠ 0 imply a_1 (b_2^2 − b_1^2) ≠ 0, i.e. b_1 ≠ b_2. Hence, X_a[n] cannot be a second-order discrete-time autoregressive process.
A.3 SUPPLEMENTARY MATERIAL FOR SECTION 3
We first present the related discussions and proof for Claim 2 on the connection between continuous-time systems and temporal kernels. Then we prove the convergence result in Theorem 1 regarding the random feature representation for non-stationary kernels. In the sequel, we show the new results for the Gaussian process behavior and neural tangent kernel under random feature representations, and discuss the potential usage of our results. Finally, we prove the sample-consistency result when the spectral distribution is misspecified.
A.3.1 PROOF AND DISCUSSIONS FOR CLAIM 2
Proof. Recall that we study the dynamic system given by:

$$a_n(x)\, \frac{d^n f(x,t)}{dt^n} + \cdots + a_0(x)\, f(x,t) = b_m(x)\, \frac{d^m \epsilon(x,t)}{dt^m} + \cdots + b_0(x)\, \epsilon(x,t), \qquad (A.8)$$

where ε(x, t = t_0) ∼ N(0, Σ^(h)), ∀t_0 ∈ T. The solution process to the above continuous-time system is also a Gaussian process, since ε(x, t) is a Gaussian process and the solution of a linear differential equation is a linear operation on the input. For the sake of notation, we assume b_0(x) = 1 and b_1(x) = 0, ..., b_m(x) = 0, which does not change the arguments in the proof. We apply the Fourier transform on both sides and solve for the Fourier transform f̃(iω_x, iω):

$$\tilde{f}(i\omega_x, i\omega) = \Big( \frac{1}{a_n(x) \cdot (i\omega)^n + \cdots + a_1(x) \cdot i\omega + a_0(x)} \Big)\, W(i\omega; \omega_x), \qquad (A.9)$$

where W(iω; ω_x) is the Fourier transform of ε(x, t). If we do not make the assumption on {b_j(x)}_{j=1}^m, they simply show up in the numerator in the same fashion as the {a_j(x)}_{j=1}^n. Let G_{θ_T}(iω; x) = a_n(x) · (iω)^n + ... + a_1(x) · iω + a_0(x), and let p(ω_x) = |W(iω; ω_x)|² be the spectral density of the Gaussian process corresponding to ε (its spectral density does not depend on ω because ε is a Gaussian white noise process on the time dimension). The dependency of G(·;·) on θ_T is because we defined θ_T to be the underlying parameterization of {a_j(·)}_{j=1}^n in the statement of Claim 2. Then the spectral density of the process f(x, t) is given by

$$p(\omega, \omega_x) = C \cdot \frac{p(\omega_x)}{|G_{\theta_T}(i\omega; x)|^2} \propto p(\omega_x)\, p_{\theta_T}(\omega; x),$$

where C is a constant that corresponds to the spectral density of the random Gaussian noise on the time dimension. Notice that the spectral density function obtained this way is regular, since it has the form of p_{θ_T}(ω; x) = constant/(polynomial of ω²). Therefore, according to the classical Wiener-Khinchin theorem (Brockwell et al., 1991), the covariance function of the solution process is given by the inverse Fourier transform of the spectral density:

$$\psi(x, t) = \frac{1}{2\pi} \int p(\omega, \omega_x) \exp\big(i [\omega, \omega_x]^\top [t, x]\big)\, d(\omega, \omega_x) \propto \int p_{\theta_T}(\omega; x) \exp(i\omega t)\, d\omega \cdot \int p(\omega_x) \exp(i \omega_x^\top x)\, d\omega_x \propto K_{\theta_T}\big((x,t),(x,t)\big) \cdot \Sigma^{(h)}(x, x). \qquad (A.10)$$

And therefore we reach the conclusion in Claim 2 by taking Σ^(h)_T(x, t; x′, t′) = ψ(x − x′, t − t′).
The inverse statement of Claim 2 may not always be true, since not every neural-temporal kernel has an exactly corresponding continuous-time system in the form of (A.8). However, we may construct a continuous-time system that approximates the kernel (arbitrarily well) in the following way, using polynomial approximation tools such as the Taylor expansion.

For a neural-temporal kernel Σ^(h)_T, we first compute its Fourier transform to obtain the spectral density p(ω_x, ω). Note that p(ω_x, ω) should be a rational function of the form (polynomial in ω²)/(polynomial in ω²), or otherwise it does not have a stable spectral factorization that leads to a linear dynamic system. To achieve this, we can always apply Taylor expansions or Padé approximants to approximate p(ω_x, ω) arbitrarily well.

Then we conduct a spectral factorization on p(ω_x, ω) to find G(iω_x, iω_t) and p(ω_x) such that p(ω_x, ω) = G(iω_x, iω_t)\, p(ω_x)\, G(−iω_x, −iω_t). Since p(ω_x, ω) is now a rational function of ω², we can find G(iω_x, iω_t) as:

$$G(i\omega_x, i\omega_t) = \frac{b_k(i\omega_x) \cdot (i\omega)^k + \cdots + b_1(i\omega_x) \cdot (i\omega) + b_0(i\omega_x)}{a_q(i\omega_x) \cdot (i\omega)^q + \cdots + a_1(i\omega_x) \cdot (i\omega) + a_0(i\omega_x)}.$$

Let α_j(x) and β_j(x) be the pseudo-differential operators of a_j(iω_x) and b_j(iω_x), defined in terms of their inverse Fourier transforms (Shubin, 1987); then the corresponding continuous-time system is given by:

$$\alpha_q(x)\, \frac{d^q f(x,t)}{dt^q} + \cdots + \alpha_0(x)\, f(x,t) = \beta_k(x)\, \frac{d^k \epsilon(t)}{dt^k} + \cdots + \beta_0(x)\, \epsilon(t). \qquad (A.11)$$
For a concrete end-to-end example, we consider the simplified setting where the temporal kernel function is given by:
$$K_{\theta_T}(t, t') := k_{\theta_1, \theta_2, \theta_3}(t - t') = \theta_2^2\, \frac{2^{1-\theta_1}}{\Gamma(\theta_1)} \Big( \sqrt{2\theta_1}\, \frac{|t - t'|}{\theta_3} \Big)^{\theta_1} B_{\theta_1}\Big( \sqrt{2\theta_1}\, \frac{|t - t'|}{\theta_3} \Big),$$

where B_{θ_1}(·) is the Bessel function, so K_{θ_T}(t, t′) belongs to the well-known Matern family. It is straightforward to show that the spectral density function is given by:

$$s(\omega) \propto \Big( \frac{2\theta_1}{\theta_3^2} + \omega^2 \Big)^{-(\theta_1 + 1/2)}.$$

As a consequence, we see that s(ω) ∝ (√(2θ_1)/θ_3 + iω)^{−(θ_1+1/2)} (√(2θ_1)/θ_3 − iω)^{−(θ_1+1/2)}, so we directly have G_{θ_T}(ω) = (√(2θ_1)/θ_3 + iω)^{−(θ_1+1/2)} instead of having to seek a polynomial approximation. Now we can easily expand G_{θ_T}(ω) using the binomial formula to find the linear parameters of the continuous-time system. For instance, when θ_1 = 3/2, we have:

$$\frac{d^2 f(t)}{dt^2} + \frac{2\sqrt{2\theta_1}}{\theta_3}\, \frac{df(t)}{dt} + \frac{2\theta_1}{\theta_3^2}\, f(t) = \epsilon(t).$$
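As a quick numerical check (our own sketch), the inverse Fourier transform of s(ω) for θ_1 = 3/2 indeed reproduces the Matern-3/2 covariance shape (1 + λτ)e^{−λτ} with λ = √3/θ_3:

```python
import numpy as np

theta3 = 1.0
lam = np.sqrt(3.0) / theta3          # sqrt(2 * theta1) / theta3 with theta1 = 3/2

w = np.linspace(-200.0, 200.0, 400001)
dw = w[1] - w[0]
s = (lam**2 + w**2) ** (-2.0)        # s(w) up to a constant, for theta1 = 3/2

taus = np.array([0.0, 0.5, 1.0, 2.0])
# Inverse Fourier transform k(tau) = (1/2pi) * integral of s(w) exp(i w tau) dw.
k_num = np.array([(s * np.cos(w * tau)).sum() * dw for tau in taus]) / (2 * np.pi)
k_ref = (1 + lam * taus) * np.exp(-lam * taus)   # Matern-3/2 shape, same constant

print(k_num / k_num[0])   # should match the reference ratios below
print(k_ref / k_ref[0])
```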
A.3.2 PROOF FOR PROPOSITION 1

Proof. We first need to show that the random Fourier features for the non-stationary kernel K_T((x, t), (x′, t′)) can be given by (8), i.e.

$$\phi(x, t) = \frac{1}{2\sqrt{m}} \Big[ \ldots,\ \cos\big([x, t]^\top \omega_{1,i}\big) + \cos\big([x, t]^\top \omega_{2,i}\big),\ \sin\big([x, t]^\top \omega_{1,i}\big) + \sin\big([x, t]^\top \omega_{2,i}\big),\ \ldots \Big].$$
To simplify notations, we let z := [x, t] ∈ Rd+1 and Z = X × T . For non-stationary kernels, their corresponding Fourier transform can be characterized by the following lemma. Assume without loss of generality that KT is differentiable.
Lemma A.1 (Yaglom (1987)). A non-stationary kernel k(z_1, z_2) is positive definite in R^d if and only if, after scaling, it has the form:

$$k(z_1, z_2) = \int \exp\big( i(\omega_1^\top z_1 - \omega_2^\top z_2) \big)\, \mu(d\omega_1, d\omega_2), \qquad (A.12)$$

where µ(dω_1, dω_2) is some positive semi-definite probability measure with bounded variation.
The above lemma can be thought of as the extension of the classical Bochner's theorem that underlies the random Fourier features for stationary kernels. Notice that when the covariance function for the measure µ has only non-zero diagonal elements and ω_1 = ω_2, we recover the spectral representation stated in Bochner's theorem. Therefore, we can also approximate (A.12) with the Monte Carlo integral. However, we need to ensure the positive-semidefiniteness of the spectral density for µ(dω_1, dω_2), which we denote by p(ω_1, ω_2). It has been suggested in Remes et al. (2017) that we consider another density function q(ω_1, ω_2), let p be taken on the product space of q, and then symmetrise:

$$p(\omega_1, \omega_2) = \frac{1}{4} \big( q(\omega_1, \omega_2) + q(\omega_2, \omega_1) + q(\omega_1, \omega_1) + q(\omega_2, \omega_2) \big). \qquad (A.13)$$
Then (A.12) suggests that

$$k(z_1, z_2) = \frac{1}{4} \mathbb{E}_q \Big[ \exp\big( i(\omega_1^\top z_1 - \omega_2^\top z_2) \big) + \exp\big( i(\omega_2^\top z_1 - \omega_1^\top z_2) \big) + \exp\big( i\, \omega_1^\top (z_1 - z_2) \big) + \exp\big( i\, \omega_2^\top (z_1 - z_2) \big) \Big].$$

Recall that the real part of exp(i(ω_1ᵀz_1 − ω_2ᵀz_2)) is given by cos(ω_1ᵀz_1 − ω_2ᵀz_2). So with the trigonometric identities, it is straightforward to verify that k(z_1, z_2) = E_q[φ(z_1)ᵀφ(z_2)]. Hence, the random Fourier features for the non-stationary kernel can be given in the form of (8).
Then we show the uniform convergence result as the number of samples goes to infinity when computing E_q[φ(z_1)ᵀφ(z_2)] by the Monte Carlo integral. Let Z̃ = Z × Z, so Z̃ = {(x, t, x′, t′) | x, x′ ∈ X; t, t′ ∈ T}. Since diam(X) = l and T = [0, t_max], we have diam(Z̃) = l² t²_max. Let the approximation error be ∆(z, z′) = φ(z)ᵀφ(z′) − K_T(z, z′). (A.14) The strategy is to use an ε-net covering of the input space Z̃, which requires N = (2 l² t²_max / r)^{d+1} balls of radius r. Let C = {c_i}_{i=1}^N be the centers of the ε-balls. We first show the bound for |∆(c_i)| and the Lipschitz constant L_∆ of the error function ∆, and then combine them to get the desired result.
Since ∆ is continuous and differentiable w.r.t. z, z′ by the definition of φ, we have L_∆ = ‖∇∆(c*)‖, where c* = arg max_{c ∈ C} ‖∇∆(c)‖. Let c* = (z̃, z̃′). By checking the regularity conditions for exchanging the integral and differential operations, we verify that E[∇φ(z)ᵀφ(z′)] = ∇E[φ(z)ᵀφ(z′)] = ∇E[K_T(z, z′)]. We do not present the details here, since it is easy to check the regularity of φ(z)ᵀφ(z′), as it consists of sine and cosine functions which are continuous, bounded and have continuous bounded derivatives. Hence, we have:
$$\begin{aligned}
\mathbb{E}\big[L_\Delta^2\big] &= \mathbb{E}_{\tilde z,\tilde z'}\Big[\big\|\nabla\phi(\tilde z)^\top\phi(\tilde z') - \nabla K_T(\tilde z,\tilde z')\big\|^2\Big] \\
&= \mathbb{E}_{\tilde z,\tilde z'}\Big[\mathbb{E}\|\nabla\phi(\tilde z)^\top\phi(\tilde z')\|^2 - 2\|\nabla K_T(\tilde z,\tilde z')\|\cdot\|\nabla\phi(\tilde z)^\top\phi(\tilde z')\| + \|\nabla K_T(\tilde z,\tilde z')\|^2\Big] \\
&\le \mathbb{E}_{\tilde z,\tilde z'}\Big[\mathbb{E}\|\nabla\phi(\tilde z)^\top\phi(\tilde z')\|^2 - \|\nabla K_T(\tilde z,\tilde z')\|^2\Big] \quad\text{(by Jensen's inequality)} \\
&\le \mathbb{E}\|\nabla\phi(\tilde z)^\top\phi(\tilde z')\|^2 \\
&= \mathbb{E}\Big\|\nabla\Big(\big(\cos(\tilde z^\top\omega_1) + \cos(\tilde z^\top\omega_2)\big)\big(\cos((\tilde z')^\top\omega_1) + \cos((\tilde z')^\top\omega_2)\big) \\
&\qquad\qquad + \big(\sin(\tilde z^\top\omega_1) + \sin(\tilde z^\top\omega_2)\big)\big(\sin((\tilde z')^\top\omega_1) + \sin((\tilde z')^\top\omega_2)\big)\Big)\Big\|^2 \\
&= 2\,\mathbb{E}\Big\|\omega_1\big(\sin(\tilde z^\top\omega_1 - (\tilde z')^\top\omega_1) + \sin((\tilde z')^\top\omega_2 - \tilde z^\top\omega_1)\big) \\
&\qquad\qquad + \omega_2\big(\sin(\tilde z^\top\omega_1 - (\tilde z')^\top\omega_2) + \sin((\tilde z')^\top\omega_2 - \tilde z^\top\omega_2)\big)\Big\|^2 \\
&\le 8\,\mathbb{E}\big\|[\omega_1,\omega_2]\big\|^2 = 8\sigma_p^2.
\end{aligned}\tag{A.15}$$
Hence, by Markov's inequality, we have
$$\Pr\Big(L_\Delta \ge \frac{\epsilon}{2r}\Big) \le 8\sigma_p^2\Big(\frac{2r}{\epsilon}\Big)^2. \tag{A.16}$$
Then we notice that for all $c\in\mathcal{C}$, $\Delta(c)$ is the mean of $m/2$ terms bounded in $[-1,1]$ with expectation $0$. So applying a union bound and Hoeffding's inequality for bounded random variables, we have:
$$\Pr\Big(\bigcup_{i=1}^N |\Delta(c_i)| \ge \frac{\epsilon}{2}\Big) \le 2N\exp\Big(-\frac{m\epsilon^2}{16}\Big). \tag{A.17}$$
Combining the above results, we get
$$\begin{aligned}
\Pr\Big(\sup_{(z,z')\in\tilde{\mathcal{Z}}}\big|\Delta(z,z')\big| \le \epsilon\Big) &\ge 1 - \frac{32\sigma_p^2 r^2}{\epsilon^2} - 2\Big(\frac{2l^2 t_{\max}^2}{r}\Big)^{d+1}\exp\Big(-\frac{m\epsilon^2}{16}\Big) \\
&\ge 1 - C(d)\Big(\frac{l^2 t_{\max}^2\,\sigma_p}{\epsilon}\Big)^{\frac{2(d+1)}{d+3}}\exp\Big(-\frac{m\epsilon^2}{8(d+3)}\Big),
\end{aligned}\tag{A.18}$$
where in the second inequality we optimize over $r$, which gives $r^* = \big(\frac{(d+1)k_1}{k_2}\big)^{1/(d+3)}$ with $k_1 = 2(2l^2t_{\max}^2)^{d+1}\exp(-m\epsilon^2/16)$ and $k_2 = 32\sigma_p^2\epsilon^{-2}$. The constant term is given by $C(d) = 2^{\frac{7d+9}{d+3}}\Big(\big(\frac{d+1}{2}\big)^{-\frac{d+1}{d+3}} + \big(\frac{d}{2}\big)^{\frac{2}{d+3}}\Big)$.
A.3.3 THE GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL AFTER COMPOSING WITH TEMPORAL KERNEL WITH THE RANDOM FEATURE REPRESENTATION
This section is dedicated to showing the infinite-width Gaussian process behavior and neural tangent kernel properties, similar to what we discussed in Appendix A.2, when composing neural networks in the feature space with the random feature representation of the temporal kernel.
For brevity, we still consider the standard L-layer FFN of (A.1). Suppose we compose the FFN with the random feature representation $\phi(x,t)$ at the $k$th layer. It is easy to see that the neural network kernels for the first $k-1$ layers are unchanged, so we compute them in the usual way as in (A.2). For the $k$th layer, it is straightforward to verify that:
$$\lim_{d_k\to\infty}\mathbb{E}\Big[\frac{1}{d_k}\big\langle \mathbf{W}^{(k)}f^{(k-1)}(\theta,x)\circ\phi(x,t),\, \mathbf{W}^{(k)}f^{(k-1)}(\theta,x')\circ\phi(x',t')\big\rangle \,\Big|\, f^{(k-1)}\Big] \to \Sigma^{(k)}(x,x')\cdot K_T\big((x,t),(x',t')\big).$$
The intuition is that the randomness in $\mathbf{W}$ (and hence in $f(\theta,\cdot)$) and in $\phi(\cdot,\cdot)$ are independent: the former comes from the network parameter initialization and the latter is induced by the random features. The covariance functions for the subsequent layers can be derived by induction; e.g., for the $(k+1)$th layer we have:
$$\Sigma_T^{(k+1)}\big((x,t),(x',t')\big) = \mathbb{E}_{f\sim N(0,\,\Sigma^{(k)}\otimes K_T)}\big[\sigma(f(x,t))\,\sigma(f(x',t'))\big].$$
In summary, composing the FFN, at any given layer, with the temporal kernel via its random feature representation does not change the infinite-width Gaussian process behavior. The statement holds for all deep learning models that have the Gaussian process behavior, which covers most standard neural architectures, including RNN, CNN and the attention mechanism (Yang, 2019).
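A quick numerical check of the above limit (our illustration; the vectors standing in for $f^{(k-1)}$ and $\phi$ are synthetic, and the composition $\circ$ is taken as the vectorised outer product, cf. Proposition A.1 below) shows the factorization emerging as the width $d_k$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
d_prev = 64                                     # width of layer k-1 (stand-in)
u = rng.normal(size=d_prev) / np.sqrt(d_prev)   # plays f^(k-1)(theta, x)
w = rng.normal(size=d_prev) / np.sqrt(d_prev)
v = 0.6 * u + 0.8 * w                           # plays f^(k-1)(theta, x'), correlated with u
phi1 = rng.normal(size=16)
phi1 /= np.linalg.norm(phi1)                    # stand-in for phi(x, t)
phi2 = rng.normal(size=16)
phi2 /= np.linalg.norm(phi2)                    # stand-in for phi(x', t')

target = (u @ v) * (phi1 @ phi2)  # Sigma^(k)(x, x') * K_T((x,t), (x',t')) stand-in

for d_k in [10, 100, 1_000, 10_000]:
    W = rng.normal(size=(d_k, d_prev))          # N(0, 1) weights at initialization
    # For vectorised outer products, <a o b, c o d> = <a, c><b, d>,
    # so the normalised inner product factorises in the limit.
    left = np.outer(W @ u, phi1).ravel()
    right = np.outer(W @ v, phi2).ravel()
    print(f"d_k={d_k:>6d}  estimate={left @ right / d_k:+.4f}  target={target:+.4f}")
```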
The derivation of the NTK, however, is more involved, since the gradients at all layers are affected. We summarize the result for the L-layer FFN in the following proposition and provide the derivations afterwards.
Proposition A.1. Suppose $f^{(k)}\big(\theta,(x,t)\big) = \mathrm{vec}\big(f^{(k)}(\theta,x)\circ\phi(x,t)\big)$ in the standard L-layer FFN. Let $\Sigma_T^{(h)} = \Sigma^{(h)}$ for $h = 1,\ldots,k-1$, $\Sigma_T^{(k)} = \Sigma^{(k)}\otimes K_T$, and $\Sigma_T^{(h)} = \mathbb{E}_{f\sim N(0,\Sigma_T^{(h-1)})}[\sigma(f)\sigma(f)] + 1$ for $h = k+1,\ldots,L$. If the activation functions $\sigma$ have polynomially bounded weak derivatives, then as the network widths $d_1,\ldots,d_L \to \infty$, the neural tangent kernel $\Theta^{(L)}$ converges almost surely to $\Theta_T^{(L)}$, whose partial application to the parameters $\{\mathbf{W}^{(h)}, b^{(h)}\}$ in the $h$th layer is given recursively by:
$$\Theta_T^{(h)} = \Sigma_T^{(h)}, \qquad \Theta_T^{(k)} = \Theta_T^{(k-1)}\otimes\dot\Sigma_T^{(k)} + \Sigma_T^{(k)}, \quad k = h+1,\ldots,L. \tag{A.19}$$
Proof. The strategies for deriving the NTK and showing its convergence have been discussed in Jacot et al. (2018); Yang (2019); Arora et al. (2019a). The key purpose of presenting the derivations here is to show how the convergence results for the neural-temporal Gaussian process (Section 4.2) affect the NTK. To avoid the cumbersome notation induced by the peripheral intercept terms, we omit the intercepts $b$ in the FFN without loss of generality. We let $g^{(h)} = \frac{1}{\sqrt{d_h}}\sigma\big(f^{(h)}(x,t)\big)$, so the FFN can be equivalently defined via the recursion $f^{(h)} = \mathbf{W}^{(h)}g^{(h-1)}(x,t)$. For the final output $f\big(\theta,(x,t)\big) := \mathbf{W}^{(L)}f^{(L)}(x,t)$, the partial derivative with respect to $\mathbf{W}^{(h)}$ is given by:
$$\frac{\partial f\big(\theta,(x,t)\big)}{\partial \mathbf{W}^{(h)}} = z^{(h)}(x,t)\big(g^{(h-1)}(x,t)\big)^\top, \tag{A.20}$$
with $z^{(h)}$ defined by:
$$z^{(h)}(x,t) = \begin{cases} 1, & h = L, \\ \frac{1}{\sqrt{d_h}}\, D^{(h)}(x,t)\big(\mathbf{W}^{(h+1)}\big)^\top z^{(h+1)}(x,t), & h = 1,\ldots,L-1, \end{cases} \tag{A.21}$$
where
$$D^{(h)}(x,t) = \begin{cases} \mathrm{diag}\big(\dot\sigma\big(f^{(h)}(x,t)\big)\big), & h = k,\ldots,L-1, \\ \mathrm{diag}\big(\dot\sigma\big(f^{(h)}(x)\big)\big), & h = 1,\ldots,k-1. \end{cases}$$
Using the above definitions, we have:
$$\Big\langle \frac{\partial f\big(\theta,(x,t)\big)}{\partial \mathbf{W}^{(h)}}, \frac{\partial f\big(\theta,(x',t')\big)}{\partial \mathbf{W}^{(h)}}\Big\rangle = \big\langle z^{(h)}(x,t)\big(g^{(h-1)}(x,t)\big)^\top,\, z^{(h)}(x',t')\big(g^{(h-1)}(x',t')\big)^\top\big\rangle = \big\langle g^{(h-1)}(x,t),\, g^{(h-1)}(x',t')\big\rangle \cdot \big\langle z^{(h)}(x,t),\, z^{(h)}(x',t')\big\rangle.$$
We have established in Section 4.2 that
$$\big\langle g^{(h-1)}(x,t),\, g^{(h-1)}(x',t')\big\rangle \to \Sigma_T^{(h-1)}\big((x,t),(x',t')\big),$$
where
$$\Sigma_T^{(h)}\big((x,t),(x',t')\big) = \begin{cases} \Sigma^{(h)}(x,x'), & h = 1,\ldots,k-1, \\ \Sigma^{(h)}(x,x')\cdot K_T\big((x,t),(x',t')\big), & h = k, \\ \mathbb{E}_{f\sim N(0,\Sigma_T^{(h-1)})}\big[\sigma(f(x,t))\,\sigma(f(x',t'))\big], & h = k+1,\ldots,L. \end{cases} \tag{A.22}$$
By the definition of $z^{(h)}$, we get
$$\begin{aligned}
\big\langle z^{(h)}(x,t), z^{(h)}(x',t')\big\rangle &= \frac{1}{d_h}\big\langle D^{(h)}(x,t)\big(\mathbf{W}^{(h+1)}\big)^\top z^{(h+1)}(x,t),\, D^{(h)}(x',t')\big(\mathbf{W}^{(h+1)}\big)^\top z^{(h+1)}(x',t')\big\rangle \\
&\approx \frac{1}{d_h}\big\langle D^{(h)}(x,t)\big(\mathbf{W}^{(h+1)}\big)^\top z^{(h+1)}(x,t),\, D^{(h)}(x',t')\big(\widetilde{\mathbf{W}}^{(h+1)}\big)^\top z^{(h+1)}(x',t')\big\rangle \\
&\to \frac{1}{d_h}\,\mathrm{tr}\big(D^{(h)}(x,t)D^{(h)}(x',t')\big)\,\big\langle z^{(h+1)}(x,t), z^{(h+1)}(x',t')\big\rangle \\
&\to \dot\Sigma_T^{(h)}\big((x,t),(x',t')\big)\,\big\langle z^{(h+1)}(x,t), z^{(h+1)}(x',t')\big\rangle.
\end{aligned}\tag{A.23}$$
The approximation in the second line replaces the $\mathbf{W}^{(h+1)}$ in the right-hand factor by an i.i.d. copy $\widetilde{\mathbf{W}}^{(h+1)}$ under the Gaussian initialization. This does not change the limit as $d_h \to \infty$ when the activation functions have polynomially bounded weak derivatives (Yang, 2019), e.g. the ReLU activation. Carrying out (A.23) recursively, we see that
$$\big\langle z^{(h)}(x,t), z^{(h)}(x',t')\big\rangle \to \prod_{j=h}^{L-1}\dot\Sigma_T^{(j)}\big((x,t),(x',t')\big).$$
Finally, we have:
$$\Big\langle \frac{\partial f\big(\theta,(x,t)\big)}{\partial\theta}, \frac{\partial f\big(\theta,(x',t')\big)}{\partial\theta}\Big\rangle = \sum_{h=1}^L \Big\langle \frac{\partial f\big(\theta,(x,t)\big)}{\partial \mathbf{W}^{(h)}}, \frac{\partial f\big(\theta,(x',t')\big)}{\partial \mathbf{W}^{(h)}}\Big\rangle = \sum_{h=1}^L \Big(\Sigma_T^{(h)}\big((x,t),(x',t')\big) \cdot \prod_{j=h}^{L}\dot\Sigma_T^{(j)}\big((x,t),(x',t')\big)\Big). \tag{A.24}$$
Notice that we used a more compact recursive formulation to state the result in Proposition A.1. It is easy to verify that, after expansion, we reach the desired result.

Compared with the original NTK before composing with the temporal kernel (given by (A.4)), the result in Proposition A.1 shares a similar recursive structure. As a consequence, previous results for the NTK can be directly adapted to our setting. We list two examples here.
• Following Jacot et al. (2018), given a training dataset $\{x_i, t_i, y_i(t_i)\}_{i=1}^n$, let $f_T\big(\theta^{(s)}\big) = \big(f(\theta^{(s)},x_1,t_1),\ldots,f(\theta^{(s)},x_n,t_n)\big)$ be the network outputs at the $s$th training step and $y_T = \big(y_1(t_1),\ldots,y_n(t_n)\big)$. The optimization trajectory under an infinitesimal learning rate can be analyzed via:
$$\frac{d f_T\big(\theta^{(s)}\big)}{ds} = -\Theta_T(s)\times\big(f_T\big(\theta^{(s)}\big) - y_T\big),$$
where $\Theta_T(s)$ converges almost surely to the NTK $\Theta_T^{(L)}$ in Proposition A.1.
• Following Allen-Zhu et al. (2019) and Arora et al. (2019b), the generalization performance of the composed time-aware neural network can be explicitly characterized according to the properties of $\Theta_T^{(L)}$.
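As a simple illustration of the gradient-flow characterization in the first bullet (our sketch; the PSD matrix standing in for $\Theta_T^{(L)}$ is synthetic), an Euler discretization of the flow drives the outputs to the targets at a rate governed by the spectrum of the temporal NTK:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n))
Theta_T = A @ A.T + n * np.eye(n)    # stand-in PSD temporal NTK matrix
y = rng.normal(size=n)               # targets y_i(t_i)
f = np.zeros(n)                      # network outputs at initialization (stand-in)

eta = 1e-3                           # Euler step approximating the continuous flow
for s in range(5000):
    f = f - eta * Theta_T @ (f - y)  # df/ds = -Theta_T (f - y)
print("residual norm:", np.linalg.norm(f - y))  # decays like exp(-lambda_min * eta * s)
```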
A.3.4 PROOF FOR THEOREM 1
Proof. We first present a technical lemma that is crucial for establishing the duality result under the distributional constraint $d_f(S_{\theta_T}\|S) \le \delta$. Recall that the hidden dimension of the $k$th layer is $d_k$.
Lemma A.2 (Ben-Tal et al. (2013)). Let $f$ be any closed convex function with domain $[0,+\infty)$, whose conjugate is given by $f^*(s) = \sup_{t\ge 0}\{ts - f(t)\}$. Then for any distribution $S$ and any function $g: \mathbb{R}^{d_k+1}\to\mathbb{R}$, we have
$$\sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}\int g(\omega)\,dS_{\theta_T}(\omega) = \inf_{\lambda\ge 0,\,\eta}\Big\{\lambda\int f^*\Big(\frac{g(\omega)-\eta}{\lambda}\Big)\,dS(\omega) + \delta\lambda + \eta\Big\}. \tag{A.25}$$
We work with a scaled version of the f-divergence under $f(t) = \frac{1}{k}(t^k - 1)$ (because its dual function has a cleaner form), where the constraint set is now equivalent to $\{S_{\theta_T}: d_f(S_{\theta_T}\|S) \le \delta/k\}$. It is easy to check that $f^*(s) = \frac{1}{k'}[s]_+^{k'} + \frac{1}{k}$ with $\frac{1}{k'} + \frac{1}{k} = 1$.
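As a quick numerical sanity check of this conjugate pair (our addition; note the closed form requires $k > 1$ so that $k' < \infty$, and we take $k = 2$), we can compare a grid supremum with the stated expression:

```python
import numpy as np

k = 2.0
kp = k / (k - 1.0)                       # conjugate exponent, 1/k + 1/k' = 1

def f(t):                                # generator f(t) = (t^k - 1) / k
    return (t ** k - 1.0) / k

def f_star_closed(s):                    # claimed conjugate: [s]_+^{k'} / k' + 1/k
    return np.maximum(s, 0.0) ** kp / kp + 1.0 / k

t_grid = np.linspace(0.0, 50.0, 2_000_001)
for s in [-1.0, 0.0, 0.5, 2.0, 7.0]:
    f_star_num = np.max(s * t_grid - f(t_grid))  # sup over t >= 0 on a grid
    print(f"s={s:+.1f}  grid sup={f_star_num:.6f}  closed form={f_star_closed(s):.6f}")
```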
Similar to the proof of Proposition 1, we let $z := [x,t]\in\mathbb{R}^{d+1}$ and $\mathcal{Z} = \mathcal{X}\times\mathcal{T}$ to simplify the notation. To explicitly annotate the dependency of the random Fourier features on $\Omega$, the random variable corresponding to $\omega$, we define $\tilde\phi(z,\Omega)$ such that $\tilde\phi(z,\Omega) = \big[\cos(z^\top\Omega_1) + \cos(z^\top\Omega_2),\, \sin(z^\top\Omega_1) + \sin(z^\top\Omega_2)\big]$, where $\Omega = [\Omega_1,\Omega_2]$. Then the approximation error, when replacing the sampled Fourier features $\phi$ by the original random variable $\tilde\phi(z,\Omega)$, is given by:
$$\begin{aligned}
\Delta_n(\Omega) &:= \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(x_i,x_j)\,\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_j,\Omega) - \mathbb{E}\Big[\Sigma^{(k)}(X_i,X_j)\,K_{T,S_{\theta_T}}\big((X_i,T_i),(X_j,T_j)\big)\Big] \\
&= \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(x_i,x_j)\,\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_j,\Omega) - \mathbb{E}\Big[\Sigma^{(k)}(X,X')\,\tilde\phi(Z,\Omega)^\top\tilde\phi(Z',\Omega)\Big].
\end{aligned}\tag{A.26}$$
We first show the sub-Gaussianity of $\Delta_n(\Omega)$. Let $\{x_i'\}_{i=1}^n$ be an i.i.d. copy of the observations, identical except for one element $j$ such that $x_j \ne x_j'$. Without loss of generality, we assume the last element is different, i.e. $x_n \ne x_n'$. Let $\Delta_n'(\Omega)$ be computed by replacing $x$ and $z$ with the above $x'$ and its corresponding $z'$. Note that
$$\begin{aligned}
|\Delta_n(\Omega) - \Delta_n'(\Omega)| &= \frac{1}{n(n-1)}\Big|\sum_{i\ne j}\Sigma^{(k)}(x_i,x_j)\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_j,\Omega) - \Sigma^{(k)}(x_i',x_j')\tilde\phi(z_i',\Omega)^\top\tilde\phi(z_j',\Omega)\Big| \\
&\le \frac{1}{n(n-1)}\Big(\sum_{i<n}\big|\Sigma^{(k)}(x_i,x_n)\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_n,\Omega) - \Sigma^{(k)}(x_i,x_n')\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_n',\Omega)\big| \\
&\qquad + \sum_{j<n}\big|\Sigma^{(k)}(x_n,x_j)\tilde\phi(z_n,\Omega)^\top\tilde\phi(z_j,\Omega) - \Sigma^{(k)}(x_n',x_j)\tilde\phi(z_n',\Omega)^\top\tilde\phi(z_j,\Omega)\big|\Big) \\
&\le \frac{4\max\{1,M\}}{n},
\end{aligned}\tag{A.27}$$
where the last inequality comes from the fact that the random Fourier features $\tilde\phi$ are bounded by 1 and the infinity norm of $\Sigma^{(k)}$ is bounded by $M$. The above bounded-difference property implies that $\Delta_n(\Omega)$ is a $\frac{4\max\{1,M\}}{n}$-sub-Gaussian random variable.
To bound $\Delta_n(\Omega)$, we use:
$$\begin{aligned}
\sup_{S_{\theta_T}:\,d_f(S_{\theta_T}\|S)\le\delta}\Big|\int\Delta_n(\Omega)\,dS_{\theta_T}\Big| &\le \sup_{S_{\theta_T}:\,d_f(S_{\theta_T}\|S)\le\delta}\int|\Delta_n(\Omega)|\,dS_{\theta_T} \\
&\le \inf_{\lambda\ge 0}\Big\{\frac{\lambda^{1-k'}}{k'}\,\mathbb{E}_S\big[|\Delta_n(\Omega)|^{k'}\big] + \frac{\lambda(\delta+1)}{k}\Big\} \quad\text{(using Lemma A.2)} \\
&= (\delta+1)^{1/k}\,\mathbb{E}_S\big[|\Delta_n(\Omega)|^{k'}\big]^{1/k'} \quad\text{(solving for }\lambda^*\text{ from above)} \\
&= \sqrt{\delta+1}\;\mathbb{E}_S\big[|\Delta_n(\Omega)|^2\big]^{1/2} \quad\text{(let }k = k' = 2\text{)}.
\end{aligned}\tag{A.28}$$
Therefore, to bound $\sup_{S_{\theta_T}:d_f(S_{\theta_T}\|S)\le\delta}\big|\int\Delta_n(\Omega)\,dS_{\theta_T}\big|$ we simply need to bound $\mathbb{E}_S\big[|\Delta_n(\Omega)|^2\big]$. Using the classical results for sub-Gaussian random variables (Boucheron et al., 2013), for $\lambda \le n/8$ we have
$$\mathbb{E}\Big[\exp\big(\lambda\,\Delta_n(\Omega)^2\big)\Big] \le \exp\Big(-\frac{1}{2}\log\big(1 - 8\max\{1,M\}\lambda/n\big)\Big).$$
Then we take the integral over $\omega$:
$$\begin{aligned}
\Pr\Big(\int\Delta_n(\omega)^2\,dS(\omega) \ge \frac{\epsilon^2}{\delta+1}\Big) &\le \mathbb{E}\Big[\exp\Big(\lambda\int\Delta_n(\omega)^2\,dS(\omega)\Big)\Big]\exp\Big(-\frac{\lambda\epsilon^2}{\delta+1}\Big) \quad\text{(Chernoff bound)} \\
&\le \exp\Big(-\frac{1}{2}\log\Big(1 - \frac{8\max\{1,M\}\lambda}{n}\Big) - \frac{\lambda\epsilon^2}{\delta+1}\Big) \quad\text{(apply Jensen's inequality)}.
\end{aligned}\tag{A.29}$$
Finally, let the true approximation error be $\hat\Delta_n(\omega) = \hat\Sigma^{(k)}(S_{\theta_T}) - \Sigma^{(k)}(S_{\theta_T})$. Notice that
$$\big|\hat\Delta_n(\omega)\big| \le \big|\Delta_n(\Omega)\big| + \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(x_i,x_j)\,\big|\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_j,\Omega) - \phi(z_i)^\top\phi(z_j)\big|.$$
From (A.28) and (A.29), we are able to bound $\sup_{S_{\theta_T}:d_f(S_{\theta_T}\|S)\le\delta}\Delta_n(\Omega)$. For the second term, recall from Proposition 1 that we have shown a stochastic uniform convergence bound for $\big|\tilde\phi(z_i,\Omega)^\top\tilde\phi(z_j,\Omega) - \phi(z_i)^\top\phi(z_j)\big|$ under any distribution $S_{\theta_T}$. The desired bound for $\Pr\big(\sup_{S_{\theta_T}:d_f(S_{\theta_T}\|S)\le\delta}\big|\hat\Delta_n(\omega)\big| \ge \epsilon\big)$ is obtained by combining all the above results.
A.3.5 REPARAMETRIZATION WITH INVERTIBLE NEURAL NETWORK
In this part, we discuss the idea of constructing and sampling from an arbitrarily complex distribution, starting from a known auxiliary distribution and applying a sequence of invertible transformations. Given an auxiliary random variable z following some known distribution q(z), suppose another random variable x is constructed via a one | 1. What is the main contribution of the paper, and how does it differ from other works in the field?
2. How does the proposed approach incorporate temporal information, and what are the benefits of this approach?
3. Can you provide more details about the algorithm used in the paper, especially regarding prediction and computational efficiency?
4. How does the paper address the issue of scalability for large datasets?
5. Could you explain the connection between the proposed approach and Bochner's theorem?
6. Are there any limitations or potential drawbacks to the proposed method that should be considered? | Review | Review
Post-rebuttal update
I've read the rebuttal and updated my score.
This paper proposes a deep learning model for incorporating temporal information by composing the NN-GP kernel and a temporal stationary kernel through a product. The temporal stationary kernel is represented using its spectral density, which is parameterized by an invertible neural network model. This kernel will be learned from the training data to characterize its temporal dynamics.
Originality & Significance
The modeling approach taken in this paper is original to my best knowledge. Although it is well-known that second-order stationary processes have an SDE correspondence, it is rare that this property is connected to NNs as GPs, and this work finds an application where such ideas can be potentially useful. However, I find it difficult to say anything about the significance of this idea since it is not very clearly described. I encourage the authors to make a substantial revision to clarify the issues listed below.
Clarity
The clarity is low. Although the motivation and the high-level idea is clear, I find it very difficult to understand the actual approach taken by this work. There is no description of the actual algorithm and I can see many algorithmic and computational issues left without discussion:
How is prediction made at a specific (t, x)? Do you use a GP predictive mean conditioned on the training points?
If the prediction is made by GPs, how do you solve the scalability issue? When the training set is large, do you take a sparse approach? The temporal kernel is defined through a random feature representation, do you take advantage of it for fast computation?
Or do you just take a weight-space approach and compose the features (take the pairwise product of the features of k_T and \Sigma to form the new features)?
Are NN-GP or NTK kernels used to compute the kernels? How do you compute them? Do you use a Monte-Carlo estimate or the closed form (computed through a recursion)? I will be happy to raise the score if these questions are properly addressed.
Strengths
The modeling approach is novel.
The proposed method consistently outperforms other baselines in handling irregular continuous-time data.
Weaknesses
The method used is not clearly described.
The non-stationary extension to Bochner's theorem is a known result.
Although the performance is shown to outperform other NN-based approaches in experiments, there might be scalability issues to apply the approach to larger-scale problems with long sequences (assume a non-weight-space approach).
Minor
P16: A.4.1: "We the Fourier transformation transformation on both sides"? |
ICLR | Title
A Temporal Kernel Approach for Deep Learning with Continuous-time Information
Abstract
Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with the existing deep learning architectures and implementations. In this paper, we provide a principled way to characterize continuous-time systems using deep learning tools. Notably, the proposed approach applies to all the major deep learning architectures and requires few modifications to the implementation. The critical insight is to represent the continuous-time system by composing neural networks with a temporal kernel, where we gain our intuition from the recent advancements in understanding deep learning with Gaussian process and neural tangent kernel. To represent the temporal kernel, we introduce the random feature approach and convert the kernel learning problem to spectral density estimation under reparameterization. We further prove the convergence and consistency results even when the temporal kernel is non-stationary and the spectral density is misspecified. The simulations and real-data experiments demonstrate the empirical effectiveness of our temporal kernel approach in a broad range of settings.
1 INTRODUCTION
Deep learning models have achieved remarkable performances in sequence learning tasks leveraging the powerful building blocks from recurrent neural networks (RNN) (Mikolov et al., 2010), longshort term memory (LSTM) (Hochreiter & Schmidhuber, 1997), causal convolution neural network (CausalCNN/WaveNet) (Oord et al., 2016) and attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017). Their applicability to the continuous-time data, on the other hand, is less explored due to the complication of incorporating time when the sequence is irregularly sampled (spaced). The widely-adopted workaround is to study the discretized counterpart instead, e.g. the temporal data is aggregated into bins and then treated as equally-spaced, with the hope to approximate the temporal signal using the sequence information. It is perhaps without surprise, as we show in Claim 1, that even for regular temporal sequence the discretization modifies the spectral structure. The gap can only be amplified for irregular data, so discretizing the temporal information will almost always
∗The work was done when the author was with Walmart Labs.
introduce intractable noise and perturbations, which emphasizes the importance of characterizing the continuous-time information directly. Previous efforts to incorporate temporal information into deep learning include concatenating the time or timespan to the feature vector (Choi et al., 2016; Lipton et al., 2016; Li et al., 2017b), learning the generative model of time series as a missing data problem (Soleimani et al., 2017; Futoma et al., 2017), characterizing the representation of time (Xu et al., 2019; 2020; Du et al., 2016) and using neural point processes (Mei & Eisner, 2017; Li et al., 2018). While they provide different tools to expand neural networks to cope with time, an underlying continuous-time system or process is involved, explicitly or implicitly. As a consequence, it remains unknown in what way and to what extent the continuous-time signals interact with the original deep learning model. Explicitly characterizing the continuous-time system (via differential equations), on the other hand, is the major pursuit of classical signal processing methods such as smoothing and filtering (Doucet & Johansen, 2009; Särkkä, 2013). The lack of connections is partly due to the compatibility issues between the signal processing methods and the auto-differentiation gradient computation framework of modern deep learning. Generally speaking, for continuous-time systems, model learning and parameter estimation often rely on the more complicated differential equation solvers (Raissi & Karniadakis, 2018; Raissi et al., 2018a). Although the intersection of neural networks and differential equations has gained popularity in recent years, the combined neural differential methods often require involved modifications to both the modelling and the implementation details (Chen et al., 2018; Baydin et al., 2017).
Inspired by the recent advancement in understanding neural networks with Gaussian processes and the neural tangent kernel (Yang, 2019; Jacot et al., 2018), we discover a natural connection between the continuous-time system and the neural Gaussian process after composing with a temporal kernel. The significance of the temporal kernel is that it fills in the gap between signal processing and deep learning: we can explicitly characterize the continuous-time systems while maintaining the usual deep learning architectures and optimization procedures. While the kernel composition is also known for integrating signals from various domains (Shawe-Taylor et al., 2004), we face the additional complication of characterizing and learning the unknown temporal kernel in a data-adaptive fashion. Unlike the existing kernel learning methods where at least the parametric form of the kernel is given (Wilson et al., 2016), we have little context on the temporal kernel, and aggressively assuming a parametric form risks altering the temporal structures implicitly, just like discretization. Instead, we leverage Bochner's theorem and its extension (Bochner, 1948; Yaglom, 1987) to first convert the kernel learning problem to the more reasonable spectral domain, where we can directly characterize the spectral properties with random (Fourier) features. Representing the temporal kernel by random features is favorable, as we show they preserve the existing Gaussian process and NTK properties of neural networks. This is desired from the deep learning perspective, since our approach will not violate the current understandings of deep learning. Then we apply the reparameterization trick (Kingma & Welling, 2013), which is a standard tool for generative models and Bayesian deep learning, to jointly optimize the spectral density estimator. Furthermore, we provide theoretical guarantees for the random-feature-based kernel learning approach when the temporal kernel is non-stationary and the spectral density estimator is misspecified. These two scenarios are essential for practical usage but have not been studied in the previous literature. Finally, we conduct simulations and experiments on real-world continuous-time sequence data to show the effectiveness of the temporal kernel approach, which significantly improves the performance of both standard neural architectures and complicated domain-specific models. We summarize our contributions as follows.
• We study a novel connection between the continuous-time system and neural network via the composition with a temporal kernel.
• We propose an efficient kernel learning method based on random feature representation, spectral density estimation and reparameterization, and provide strong theoretical guarantees when the kernel is non-stationary and the spectral density is misspecified.
• We analyze the empirical performance of our temporal kernel approach for both the standard and domain-specific deep learning models through real-data simulation and experiments.
2 NOTATIONS AND BACKGROUND
We use bold-font letters to denote vectors and matrices. We use $x_t$ and $(x,t)$ interchangeably to denote a time-sensitive event occurring at time $t$, with $t\in\mathcal{T}\equiv[0,t_{\max}]$. Neural networks are denoted by $f(\theta,x)$, where $x\in\mathcal{X}\subset\mathbb{R}^d$ is the input with $\mathrm{diameter}(\mathcal{X})\le l$, and the network parameters $\theta$ are sampled i.i.d. from the standard normal distribution at initialization. Without loss of generality, we study the standard L-layer feedforward neural network with its output at the $h$th hidden layer given by $f^{(h)}\in\mathbb{R}^{d_h}$. We use $\epsilon$ and $\epsilon(t)$ to denote Gaussian noise and a continuous-time Gaussian noise process, respectively. By convention, we use $\otimes$ and $\circ$ to represent the tensor and outer products.
2.1 UNDERSTANDING THE STANDARD NEURAL NETWORK
We follow the settings from Jacot et al. (2018); Yang (2019) to briefly illustrate the limiting Gaussian behavior of $f(\theta,x)$ at initialization, and its training trajectory under weak optimization. As $d_1,\ldots,d_L\to\infty$, the $f^{(h)}$ tend in law to i.i.d. Gaussian processes with covariance $\Sigma^h$: $f^{(h)}\sim N(0,\Sigma^h)$, which we refer to as the neural network kernel to distinguish it from the other kernel notions. Also, given a training dataset $\{x_i,y_i\}_{i=1}^n$, let $f\big(\theta^{(s)}\big) = \big(f(\theta^{(s)},x_1),\ldots,f(\theta^{(s)},x_n)\big)$ be the network outputs at the $s$th training step and $y = (y_1,\ldots,y_n)$. Using the squared loss for example, when training with an infinitesimal learning rate, the outputs follow: $df\big(\theta^{(s)}\big)/ds = -\Theta(s)\times\big(f\big(\theta^{(s)}\big) - y\big)$, where $\Theta(s)$ is the neural tangent kernel (NTK). The detailed formulations of $\Sigma^h$ and $\Theta(s)$ are provided in Appendix A.2. We introduce the two concepts here because:
1. instead of incorporating time into $f(\theta,x)$, which is then subject to its specific structures, can we alternatively consider a universal approach that expands $\Sigma^h$ to the temporal domain, such as by composing it with a time-aware kernel?
2. when jointly optimizing the unknown temporal kernel and the model parameters, how can we preserve the results on the training trajectory with the NTK?
In our paper, we show that both goals are achieved by representing a temporal kernel via random features.
2.2 DIFFERENCE BETWEEN CONTINUOUS-TIME AND ITS DISCRETIZATION
We now discuss the gap between a continuous-time process and its equally-spaced discretization. We study the simple univariate continuous-time system $f(t)$:
$$\frac{d^2 f(t)}{dt^2} + a_0\frac{df(t)}{dt} + a_1 f(t) = b_0\,\epsilon(t). \tag{1}$$
A discretization with a fixed interval is then given by: $f[i] = f(i\times\text{interval})$ for $i = 1,2,\ldots$. Notice that $f(t)$ is a second-order auto-regressive process, so both $f(t)$ and $f[i]$ are stationary. Recall that the covariance function for a stationary process is given by $k(t) := \mathrm{cov}\big(f(t_0), f(t_0+t)\big)$, and the spectral density function (SDF) is defined as $s(\omega) = \int_{-\infty}^{\infty}\exp(-i\omega t)k(t)\,dt$.
Claim 1. The spectral density functions of $f(t)$ and $f[i]$ are different.
The proof is relegated to Appendix A.2.2. The key takeaway from the example is that the spectral density function, which characterizes the signal on the frequency domain, is altered implicitly even by regular discretization in this simple case. Hence, we should be cautious about the potential impact of the modelling assumption, which eventually motivates us to explicitly model the spectral distribution.
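The following short numerical sketch (our illustration, using the CAR(2)-type spectral density from the proof in Appendix A.2.2; the specific constants are assumptions for the demo) makes the claim tangible: the regularly sampled process satisfies the second-order autocovariance recursion only from lag 2 onward, so it is ARMA(2,1) rather than AR(2), i.e. discretization has changed the spectral structure:

```python
import numpy as np

# Spectral density s(w) = a1/(w^2 + b1^2) + a2/(w^2 + b2^2) with a1 + a2 = 0,
# as in the proof of Claim 1 (Appendix A.2.2).
b1, b2 = 1.0, 3.0
a1, a2 = 1.0, -1.0

def k_cont(t):
    # Inverse Fourier transform by residue calculus (up to a constant factor):
    # each a/(w^2 + b^2) term contributes (a / (2b)) * exp(-b |t|).
    t = np.abs(t)
    return a1 / (2 * b1) * np.exp(-b1 * t) + a2 / (2 * b2) * np.exp(-b2 * t)

a = 0.5                                  # sampling interval
lags = np.arange(0, 8)
k = k_cont(lags * a)                     # autocovariance of the discretization

# The sampled process has AR roots rho_i = exp(-b_i * a), hence AR coefficients:
r1, r2 = np.exp(-b1 * a), np.exp(-b2 * a)
phi1, phi2 = r1 + r2, -r1 * r2

# The recursion k[n] = phi1 k[n-1] + phi2 k[n-2] holds exactly for n >= 2 ...
print("residuals, n >= 2:", k[2:] - (phi1 * k[1:-1] + phi2 * k[:-2]))
# ... but the lag-1 Yule-Walker equation of a *pure* AR(2),
# k[1] = phi1 k[0] + phi2 k[1], fails, exposing the MA(1) component.
print("lag-1 Yule-Walker residual:", k[1] - (phi1 * k[0] + phi2 * k[1]))
```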
3 METHODOLOGY
We first explain our intuition using the above example. If we take the Fourier transform of (1) and rearrange terms, it becomes:
$$\tilde f(i\omega) = \Big(\frac{b_0}{(i\omega)^2 + a_0(i\omega) + a_1}\Big)\,\tilde\epsilon(i\omega),$$
where $\tilde f(i\omega)$ and $\tilde\epsilon(i\omega)$ are the Fourier transforms of $f(t)$ and $\epsilon(t)$. Note that the spectral density of a Gaussian noise process is constant, i.e. $|\tilde\epsilon(i\omega)|^2 = p_0$, so the spectral density of $f(t)$ is given by $s_{\theta_T}(\omega) = p_0\big|b_0/\big((i\omega)^2 + a_0(i\omega) + a_1\big)\big|^2$, where we use $\theta_T = [a_0,a_1,b_0]$ to denote the parameters of the linear dynamic system defined in (1). The subscript $T$ is added to distinguish them from the parameters of the neural network. The classical Wiener-Khinchin theorem (Wiener et al., 1930) states that the covariance function of $f(t)$, which is a Gaussian process since the linear differential equation is a linear operation on $\epsilon(t)$, is given by the inverse Fourier transform of the spectral density:
$$K_T(t,t') := k_{\theta_T}(t'-t) = \frac{1}{2\pi}\int s_{\theta_T}(\omega)\exp\big(i\omega(t'-t)\big)\,d\omega, \tag{2}$$
We defer the discussion of the inverse direction (that given a kernel $k_{\theta_T}(t'-t)$ we can also construct a continuous-time system) to Appendix A.3.1. Consequently, there is a correspondence between the parameterization of a stochastic ODE and the kernel of a Gaussian process. The mapping is not necessarily one-to-one; however, it may lead to a more convenient way to parameterize a continuous-time process alternatively using deep learning models, especially knowing the connections between neural networks and Gaussian processes (which we highlighted in Section 2.1).
To connect the neural network kernel Σ(h) (e.g. for the hth layer of the FFN) to a continuous-time system, the critical step is to understand the interplay between the neural network kernel and the temporal kernel (e.g. the kernel in (2)):
• the neural network kernel characterizes the covariance structures among the hidden representation of data (transformed by the neural network) at any fixed time point;
• the temporal kernel, which corresponds to some continuous-time system, tells how each static neural network kernel propagates forward in time. See Figure 1a for a visual illustration.
Continuing with Example 1, it is straightforward to construct the integrated continuous-time system as:
$$a_2(x)\frac{d^2 f(x,t)}{dt^2} + a_1(x)\frac{df(x,t)}{dt} + a_0(x)f(x,t) = b_0(x)\,\epsilon(x,t), \quad \epsilon(x, t=t_0)\sim N\big(0,\Sigma^{(h)}\big),\ \forall t_0\in\mathcal{T}, \tag{3}$$
where we use the neural network kernel $\Sigma^{(h)}$ to define the Gaussian process $\epsilon(x,t)$ on the feature dimension, so the ODE parameters are now functions of the data as well. To see that (3) generalizes the $h$th layer of an FFN to the temporal domain, first consider $a_2(x) = a_1(x) = 0$ and $a_0(x) = b_0(x)$. Then the continuous-time process $f(x,t)$ exactly follows $f^{(h)}$ at any fixed time point $t$, and its trajectory on the time axis is simply a Gaussian process. When $a_1(x), a_2(x) \ne 0$, $f(x,t)$ still matches $f^{(h)}$ at the initial point, but its propagation along the time axis becomes nontrivial and is characterized by the constructed continuous-time system. We can easily extend the setting to incorporate higher-order terms:
$$a_n(x)\frac{d^n f(x,t)}{dt^n} + \cdots + a_0(x)f(x,t) = b_m(x)\frac{d^m \epsilon(x,t)}{dt^m} + \cdots + b_0(x)\,\epsilon(x,t). \tag{4}$$
Keeping these heuristics in mind, an immediate question is: what is the structure of the corresponding kernel function after we combine the continuous-time system with the neural network kernel?
Claim 2. The kernel function for $f(x,t)$ in (4) is given by: $\Sigma_T^{(h)}(x,t;x',t') = k_{\theta_T}(x,t;x',t')\cdot\Sigma^{(h)}(x,x')$, where $\theta_T$ is the underlying parameterization of $\{a_i(\cdot)\}_{i=1}^n$ and $\{b_i(\cdot)\}_{i=1}^m$ as functions of $x$. When $\{a_i\}_{i=1}^n$ and $\{b_i\}_{i=1}^m$ are scalars, $\Sigma_T^{(h)}(x,t;x',t') = k_{\theta_T}(t,t')\cdot\Sigma^{(h)}(x,x')$.
We defer the proof and the discussion of the inverse direction (from temporal kernel to continuous-time system) to Appendix A.3.1. Claim 2 shows that it is possible to expand any layer of a standard neural network to the temporal domain, as part of a continuous-time system, using kernel composition. The composition is flexible and can happen at any hidden layer. In particular, given the temporal kernel $K_T$ and the neural network kernel $\Sigma^{(h)}$, we obtain the neural-temporal kernel on $\mathcal{X}\times\mathcal{T}$: $\Sigma_T^{(h)} = \mathrm{diag}\big(\Sigma^{(h)}\otimes K_T\big)$, where $\mathrm{diag}(\cdot)$ is the partial diagonalization operation on $\mathcal{X}$:
$$\Sigma_T^{(h)}(x,t;x',t') = \Sigma^{(h)}(x,x')\cdot K_T(x,t;x',t'). \tag{5}$$
The above argument shows that instead of taking care of both the deep learning model and the continuous-time system, which remains challenging for general architectures, we can convert the problem to finding a suitable temporal kernel. We further point out that when using neural networks, we are parameterizing the hidden representation (feature lift) in the feature space rather than the kernel function in the kernel space. Therefore, to give a consistent characterization, we should also study the feature representation of the temporal kernel and then combine it with the hidden representations of the neural network.
3.1 THE RANDOM FEATURE REPRESENTATION FOR TEMPORAL KERNEL
We start by considering the simpler case where the temporal kernel is stationary and independent of features: KT (t, t′) = k(t′ − t), for some properly scaled positive even function k(·). The classical Bochner’s theorem (Bochner, 1948) states that:
$$\psi(t'-t) = \int_{\mathbb{R}} e^{-i(t'-t)\omega}\,ds(\omega), \quad\text{for some probability density function } s \text{ on } \mathbb{R}, \tag{6}$$
where $s(\cdot)$ is the spectral density function highlighted in Section 2.2. To compute the integral, we may sample $(\omega_1,\ldots,\omega_m)$ from $s(\omega)$ and use the Monte Carlo method: $\psi(t'-t) \approx \frac{1}{m}\sum_{i=1}^m e^{-i(t'-t)\omega_i}$. Since $e^{-i(t'-t)\omega} = \cos\big((t'-t)\omega\big) - i\sin\big((t'-t)\omega\big)$, for the real part we let:
$$\phi(t) = \frac{1}{\sqrt{m}}\big[\cos(t\omega_1), \sin(t\omega_1), \ldots, \cos(t\omega_m), \sin(t\omega_m)\big], \tag{7}$$
and it is easy to check that $\psi(t'-t) \approx \langle\phi(t),\phi(t')\rangle$. Since $\phi(t)$ is constructed from random samples, we refer to it as the random feature representation of $K_T$. Random features have been extensively studied in the kernel machine literature; here, however, we propose a novel application of random features to parameterize an unknown kernel function. A straightforward idea is to parameterize the spectral density function $s(\omega)$, whose pivotal role has been highlighted in Section 2.2 and Example 1.
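As a concrete illustration of (6)-(7), the following minimal Python sketch (our addition; the Gaussian choice of $s(\omega)$ is an assumption for the demo) samples frequencies from $s(\omega) = N(0,\sigma^2)$ and checks that the feature inner product recovers the corresponding kernel, which for a Gaussian spectral density is the RBF kernel in time:

```python
import numpy as np

rng = np.random.default_rng(3)
m, sigma = 2000, 1.5
omega = rng.normal(0.0, sigma, size=m)   # samples from s(w) = N(0, sigma^2)

def phi(t):
    # Random feature map of eq. (7): interleaved cos/sin features.
    feats = np.empty(2 * m)
    feats[0::2] = np.cos(t * omega)
    feats[1::2] = np.sin(t * omega)
    return feats / np.sqrt(m)

# A Gaussian spectral density corresponds to the RBF kernel in time:
# psi(t' - t) = exp(-sigma^2 (t' - t)^2 / 2).
for t1, t2 in [(0.0, 0.1), (0.0, 0.5), (1.0, 2.0)]:
    approx = phi(t1) @ phi(t2)
    exact = np.exp(-sigma**2 * (t2 - t1) ** 2 / 2)
    print(f"t={t1}, t'={t2}: features {approx:.4f} vs exact {exact:.4f}")
```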
Suppose $\theta_T$ denotes the distribution parameters of $s(\omega)$. Then $\phi_{\theta_T}(t)$ is also (implicitly) parameterized by $\theta_T$ through the samples $\{\omega_i(\theta_T)\}_{i=1}^m$ from $s(\omega)$. The idea resembles the reparameterization trick for training variational objectives (Kingma & Welling, 2013), which we formalize in the next section. For now, it remains unknown whether we can also obtain a random feature representation for non-stationary kernels, where Bochner's theorem is not applicable. Note that for a general temporal kernel $K_T(x,t;x',t')$, in practice it is not reasonable to assume stationarity, especially in the feature domain. In Proposition 1, we provide a rigorous result that generalizes the random feature representation to non-stationary kernels with a convergence guarantee.

Proposition 1. For any (scaled) continuous non-stationary PDS kernel $K_T$ on $\mathcal{X}\times\mathcal{T}$, there exists a joint probability measure with spectral density function $s(\omega_1,\omega_2)$ such that $K_T\big((x,t),(x',t')\big) = \mathbb{E}_{s(\omega_1,\omega_2)}\big[\phi(x,t)^\top\phi(x',t')\big]$, where $\phi(x,t)$ is given by:
$$\phi(x,t) = \frac{1}{2\sqrt{m}}\Big[\ldots,\; \cos\big([x,t]^\top\omega_{1,i}\big) + \cos\big([x,t]^\top\omega_{2,i}\big),\; \sin\big([x,t]^\top\omega_{1,i}\big) + \sin\big([x,t]^\top\omega_{2,i}\big),\;\ldots\Big]. \tag{8}$$
Here, $\{(\omega_{1,i},\omega_{2,i})\}_{i=1}^m$ are the $m$ samples from $s(\omega_1,\omega_2)$. When the sample size $m \ge \frac{8(d+1)}{\varepsilon^2}\log\Big(C(d)\big(l^2t_{\max}^2\sigma_p/\varepsilon\big)^{\frac{2d+2}{d+3}}/\delta\Big)$, with probability at least $1-\delta$, for any $\varepsilon > 0$,
$$\sup_{(x,t),(x',t')}\Big|K_T\big((x,t),(x',t')\big) - \phi(x,t)^\top\phi(x',t')\Big| \le \varepsilon, \tag{9}$$
where $\sigma_p^2$ is the second moment of the spectral density function $s(\omega_1,\omega_2)$ and $C(d)$ is a constant.
We defer the proof to Appendix A.3.2. It is obvious that the new random feature representation in (8) is a generalization of the stationary setting. There are two advantages of using the random feature representation:
• the composition in the kernel space suggested by (5) is equivalent to the computationally efficient operation f (h)(x) ◦ φ(x, t) in the feature space (Shawe-Taylor et al., 2004);
• we preserve a similar Gaussian process behavior and the neural tangent kernel results that we discussed in Section 2.1, and we defer the discussion and proof to Appendix A.3.3.
In the forward-passing computations, we simply replace the original hidden representation $f^{(h)}(x)$ by the time-aware representation $f^{(h)}(x)\circ\phi(x,t)$. Also, the existing methods and results on analyzing neural networks through Gaussian processes and the NTK, though not emphasized in this paper, carry over directly to the temporal setting as well (see Appendix A.3.3).
3.2 REPARAMETERIZATION WITH THE SPECTRAL DENSITY FUNCTION
We now present the gradient computation for the parameters of the spectral distribution using only their samples. We start from the well-studied case where $s(\omega)$ is given by a normal distribution $N(\mu,\Sigma)$ with parameters $\theta_T = [\mu,\Sigma]$. When computing the gradients of $\theta_T$, instead of sampling from the intractable distribution $s(\omega)$, we reparameterize each sample $\omega_i$ via $\omega_i = \Sigma^{1/2}\epsilon + \mu$, where $\epsilon$ is sampled from a standard multivariate normal distribution. The gradient computations that relied on $\omega$ are now expressed through the easy-to-sample $\epsilon$, and $\theta_T = [\mu,\Sigma]$ become tractable parameters of the model given $\epsilon$. We illustrate the reparameterization in our setting in the following example.

Example 1. Consider a single-dimension homogeneous linear model: $f(\theta,x) = f^{(0)}(x) = \theta x$. Without loss of generality, we use only a single sample $\omega_1$ from $s(\omega)$, which corresponds to the feature-independent temporal kernel $k_\theta(t,t')$. Again, we assume $s(\omega) \sim N(\mu,\sigma)$. Then the time-aware hidden representation of this layer for the datapoint $(x_1,t_1)$ is given by:
$$f^{(0)}_{\theta,\mu,\sigma}(x_1,t_1) = \frac{1}{\sqrt{2}}\big[\theta x_1\cos(t_1\omega_1),\; \theta x_1\sin(t_1\omega_1)\big], \qquad \omega_1\sim N(\mu,\sigma).$$
Using the reparameterization, given a sample $\epsilon_1$ from the standard normal distribution, we have:
$$f^{(0)}_{\theta,\mu,\sigma}(x_1,t_1) = \frac{1}{\sqrt{2}}\Big[\theta x_1\cos\big(t_1(\sigma^{1/2}\epsilon_1+\mu)\big),\; \theta x_1\sin\big(t_1(\sigma^{1/2}\epsilon_1+\mu)\big)\Big], \tag{10}$$
so the gradients with respect to all the parameters $(\theta,\mu,\sigma)$ can be computed in the usual way.
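A minimal PyTorch sketch of (10) is given below (our illustration; the module name and the diagonal-Gaussian spectral density are assumptions, mirroring Example 1): the spectral parameters $\mu$ and $\sigma$ are registered as trainable parameters, $\epsilon$ is resampled at every forward pass, and gradients flow through the sampled features.

```python
import torch

class ReparamTemporalFeature(torch.nn.Module):
    """Sketch of eq. (10): random features with a learnable spectral N(mu, sigma^2)."""

    def __init__(self, m: int = 64):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(m))         # spectral means
        self.log_sigma = torch.nn.Parameter(torch.zeros(m))  # parameterize sigma > 0

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) timestamps. eps is resampled at every call; gradients
        # w.r.t. (mu, sigma) flow through omega = sigma * eps + mu.
        eps = torch.randn_like(self.mu)
        omega = self.log_sigma.exp() * eps + self.mu
        angles = t.unsqueeze(-1) * omega                     # (batch, m)
        num_freq = omega.shape[0]
        return torch.cat([angles.cos(), angles.sin()], dim=-1) / num_freq ** 0.5

feat = ReparamTemporalFeature(m=8)
t = torch.tensor([0.5, 1.2, 3.0])
loss = feat(t).sum()
loss.backward()                          # both spectral parameters receive gradients
print(feat.mu.grad.norm(), feat.log_sigma.grad.norm())
```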
Despite the computational advantage, the spectral density is now learnt from the data instead of being given, so the convergence result in Proposition 1 does not provide a sample-consistency guarantee. In practice, we may also misspecify the spectral distribution and bring in extra intractable factors.
To provide practical guarantees, we first introduce several notations: let $K_T(S)$ be the temporal kernel represented by random features, such that $K_T(S) = \mathbb{E}\big[\phi^\top\phi\big]$, where the expectation is taken with respect to the data distribution and the random feature vector $\phi$ has its samples $\{\omega_i\}_{i=1}^m$ drawn from the spectral distribution $S$. With slight abuse of notation, we use $\phi\sim S$ to denote the dependency of the random feature vector $\phi$, given in (8), on the spectral distribution $S$. Given a neural network kernel $\Sigma^{(h)}$, the neural-temporal kernel is then denoted by $\Sigma_T^{(h)}(S) = \Sigma^{(h)}\otimes K_T(S)$, and the sample version of $\Sigma_T^{(h)}(S)$ for the dataset $\{(x_i,t_i)\}_{i=1}^n$ is given by:
$$\hat\Sigma_T^{(h)}(S) = \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(h)}(x_i,x_j)\,\phi(x_i,t_i)^\top\phi(x_j,t_j), \qquad \phi\sim S. \tag{11}$$
If the spectral distribution $S$ is fixed and given, then using standard techniques and Theorem 1 it is straightforward to show $\lim_{n\to\infty}\hat\Sigma_T^{(h)}(S) \to \mathbb{E}\big[\hat\Sigma_T^{(h)}(S)\big]$, so the proposed learning scheme is sample-consistent.
In our case, the spectral distribution is learnt from the data, so we need some restrictions on the spectral distribution in order to obtain any consistency guarantee. The intuition is that if $S_{\theta_T}$ does not diverge too much from the true $S$, e.g. $d(S_{\theta_T}\|S)\le\delta$ for some divergence measure, the guarantee on $S$ transfers to $S_{\theta_T}$ at a rate that suffers only a discount independent of $n$.
Theorem 1. Consider the f-divergence such that $d(S_{\theta_T}\|S) = \int c\big(dS_{\theta_T}/dS\big)\,dS$, with the generator function $c(x) = x^k - 1$ for any $k > 0$. Given the neural network kernel $\Sigma^h$, let $M = \|\Sigma^h\|_\infty$; then
$$\Pr\Big(\sup_{d(S_{\theta_T}\|S)\le\delta}\big|\hat\Sigma_T^{(h)}(S_{\theta_T}) - \mathbb{E}\big[\hat\Sigma_T^{(h)}(S_{\theta_T})\big]\big| \ge \epsilon\Big) \le \sqrt{2}\exp\Big(\frac{-n\epsilon^2}{64\max\{4,M\}(\delta+1)}\Big) + C(\epsilon), \tag{12}$$
where $C(\epsilon) \propto \big(2l^2t_{\max}^2\sigma_{S_{\theta_T}}/(\epsilon\max\{4,M\})\big)^{\frac{2d+2}{d+3}}\exp\big(-\frac{d_h\epsilon^2}{32\max\{16,M^2\}(d+3)}\big)$ does not depend on $\delta$.
The proof is provided in Appendix A.3.4. The key takeaway from (12) is that as long as the divergence between the learnt $S_{\theta_T}$ and the true spectral distribution is bounded, we still achieve sample consistency. Therefore, instead of specifying a distribution family, which is more likely to suffer from misspecification, we are motivated to employ a universal distribution approximator such as the invertible neural network (INN) (Ardizzone et al., 2018). An INN consists of a series of invertible operations that transform samples from a known auxiliary distribution (such as a normal distribution) into arbitrarily complex distributions. The Jacobians that characterize the changes of distributions are kept invertible by the INN, so the gradient flow is computationally tractable, similar to the case in Example 1. We defer the detailed discussion to Appendix A.3.5. Remark 1. It is clear at this point that the temporal kernel approach applies to all neural networks that have a Gaussian process behavior with a valid neural network kernel, which includes the major architectures such as CNN, RNN and the attention mechanism (Yang, 2019).
For implementation, at each forward and backward computation, we first sample from the auxiliary distribution to construct the random feature representation φ using reparameterization, and then compose it with the selected hidden layer f (h) such as in (10). We illustrate the computation architecture in Figure 2, where we adapt the vanilla RNN to the proposed framework. In Algorithm 1, we provide the detailed forward and backward computations, using the L-layer FFN from the previous sections as an example.
4 RELATED WORK
The earliest works that discuss training continuous-time neural networks date back to LeCun et al. (1988); Pearlmutter (1995), but no feasible solution was proposed at that time. The proposed approach relates to several fields that are under active research.
ODE and neural network. Certain neural architectures such as the residual network have been interpreted as approximate ODE solvers (Lu et al., 2018). More direct approaches have been proposed to learn differential equations from data (Raissi & Karniadakis, 2018; Raissi et al., 2018a; Long et al., 2018), and significant efforts have been spent on developing solvers that combine ODEs and the back-propagation framework (Farrell et al., 2013; Carpenter et al., 2015; Chen et al., 2018). The closest literature to our work is from Raissi et al. (2018b), who design numerical Gaussian processes resulting from the temporal discretization of time-dependent partial differential equations.
Random feature and kernel machine learning. In supervised learning, the kernel trick provides a powerful tool to characterize non-linear data representations (Shawe-Taylor et al., 2004), but the computational complexity is overwhelming for large datasets. The random (Fourier) feature approach proposed by Rahimi & Recht (2008) provides substantial computation benefits. The existing literature on analyzing the random feature approach all assumes the kernel function is fixed and stationary (Yang et al., 2012; Sutherland & Schneider, 2015; Sriperumbudur & Szabó, 2015; Avron et al., 2017).

Reparameterization and INN. Computing the gradient of intractable objectives using samples from an auxiliary distribution dates back to the policy gradient method in reinforcement learning (Sutton et al., 2000). In recent years, the approach has gained popularity for training generative models (Kingma & Welling, 2013), other variational objectives (Blei et al., 2017) and Bayesian neural networks (Snoek et al., 2015). INNs are often employed to parameterize the normalizing flow that transforms a simple distribution into a complex one by applying a sequence of invertible transformation functions (Dinh et al., 2014; Ardizzone et al., 2018; Kingma & Dhariwal, 2018; Dinh et al., 2016).
Our approach characterizes the continuous-time ODE through the lens of kernels. It complements the existing neural ODE methods, which are often restricted to specific architectures, rely on ODE solvers, and lack theoretical understanding. We also propose a novel deep kernel learning approach by parameterizing the spectral distribution under the random feature representation, which is conceptually different from using a temporal kernel for time-series classification (Li & Marlin, 2015). Our work is an extension of Xu et al. (2019; 2020), which study the case of the self-attention mechanism.
Algorithm 1: Forward pass and parameter update, using the L-layer FFN as an example.
Input: The FFN f(θ, ·) = {f^(1)(θ, ·), ..., f^(L)(θ, ·)}; the invertible neural network g(ψ, ·); the selected hidden layer h; the loss ℓ_i associated with each input (x_i, t_i); the auxiliary distribution P.
for each mini-batch do
    Sample {ε_{1,j}, ε_{2,j}}_{j=1}^m from the auxiliary distribution P;
    Compute the reparameterized samples ω using the INN g(ψ, ·), e.g. ω_{1,j}(ψ) := g(ψ, ε_{1,j});
    for sample i in the batch do
        Construct the random feature representation φ_ψ(x_i, t_i) using the reparameterized samples (so φ is now explicitly parameterized by ψ) according to eq. (8);
        Forward pass: get f^(h)(θ, x_i), let f^(h)((θ, ψ), x_i, t_i) := f^(h)(θ, x_i) ◦ φ_ψ(x_i, t_i), then pass it to the following feedforward layers to obtain the final output ŷ_i;
        Gradient computation: compute the gradients ∇_θ ℓ_i(ŷ_i), ∇_ψ ℓ_i(ŷ_i) for the FFN and INN respectively, conditioned on the samples from the auxiliary distribution;
    end
    Update the parameters using the selected optimizer in a standard batch-wise fashion.
end
It is straightforward from Figure 2 and Algorithm 1 that the proposed approach serves as a plug-in module and does not modify the original network structures of the RNN and FFN.
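To make the plug-in nature concrete, the following sketch (ours; it assumes the composition ◦ is the vectorised outer product, cf. Proposition A.1 in the appendix, and takes the random features as a precomputed input to stay short) wraps the composition step of Algorithm 1 around a small FFN:

```python
import torch

class TemporalFFN(torch.nn.Module):
    """Sketch of Algorithm 1's forward pass: compose hidden layer h with phi(x, t).

    The features `phi_xt` could come from the reparameterised module sketched
    earlier; here they are passed in precomputed.
    """

    def __init__(self, d_in: int, d_hidden: int, d_phi: int):
        super().__init__()
        self.pre = torch.nn.Sequential(torch.nn.Linear(d_in, d_hidden), torch.nn.ReLU())
        self.post = torch.nn.Linear(d_hidden * d_phi, 1)   # layers after the composition

    def forward(self, x: torch.Tensor, phi_xt: torch.Tensor) -> torch.Tensor:
        h = self.pre(x)                                    # f^(h)(theta, x): (batch, d_hidden)
        composed = torch.einsum("bi,bj->bij", h, phi_xt)   # f^(h) o phi: outer product
        return self.post(composed.flatten(1))              # vec(...) then downstream layers

net = TemporalFFN(d_in=4, d_hidden=16, d_phi=8)
x = torch.randn(5, 4)
phi_xt = torch.randn(5, 8)       # stand-in for the random features phi(x, t)
print(net(x, phi_xt).shape)      # torch.Size([5, 1])
```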
5 EXPERIMENTS AND RESULTS
We focus on revealing the two major advantages of the proposed temporal kernel approach:
• the temporal kernel approach consistently improves the performance of deep learning models, both for the general architectures such as RNN, CausalCNN and attention mechanism as well as the domain-specific architectures, in the presence of continuous-time information;
• the improvement is not at the cost of computational efficiency and stability, and we outperform the alternative approaches that also apply to general deep learning models.
We point out that the neural point process and the ODE neural networks have only been shown to work for certain model architectures, so we are unable to compare with them in all the settings.
Time series prediction with standard neural networks (real-data and simulation)
We conduct time series prediction tasks using the vanilla RNN, CausalCNN and self-attention mechanism with our temporal kernel approach (Figure A.1). We choose the classical Jena weather data for temperature prediction, and the Wikipedia traffic data to predict the number of visits to Wikipedia pages. Both datasets have vectorized features and are regularly sampled. To illustrate the advantage of leveraging temporal information over using only sequential information, we first conduct ordinary next-step prediction on the regular observations, which we refer to as Case1. To fully illustrate our capability of handling irregular continuous-time information, we consider two simulation settings that generate irregular continuous-time sequences for prediction:
Case2: we sample irregularly from the history, i.e. $x_{t_1},\ldots,x_{t_q}$, $q\le k$, to predict $x_{t_{k+1}}$; Case3: we use the full history to predict a dynamic future point, i.e. $x_{t_{k+q}}$ for a random $q$.
We provide the complete data description, preprocessing, and implementation in Appendix B. We use the following two widely-adopted time-aware modifications for neural networks (denote by NN) as baselines, as well as the classical vectorized autoregression model (VAR). NN+time: we directly concatenate the timespan, e.g. tj − ti, to the feature vector. NN+trigo: we concatenate the learnable sine and cosine features, e.g. [sin(π1t), . . . , sin(πkt)], to the feature vector, where {πi}ki=1 are free model parameters. We denote our temporal kernel approach by T-NN. From Figure 3, we see that the temporal kernel outperforms the baselines in all cases when the time series is irregularly sampled (Case2 and Case3), suggesting the effectiveness of the temporal kernel approach in capturing and utilizing the continuous-time signals. Even for the regular Case1 reported in Table A.1, the temporal kernel approach gives the best results, which again emphasizes the advantage of directly characterize the temporal information over discretization. We also show in the ablation studies (Appendix B.5) that INN is necessary for achieving superior performance compared with specifying a distribution family. To demonstrate the stability and robustness, we provide sensitivity analysis in Appendix B.6 for model selection and INN structures.
Temporal sequence learning with complex domain models
Now we study the performance of our temporal kernel approach for the sequential recommendation task with more complicated domain-specific two-tower architectures (Appendix B.2). Temporal information is known to be critical for understanding customer intentions, so we choose the two public e-commerce dataset from Alibaba and Walmart.com, and examine the next-purchase recommendation. To illustrate our flexibility, we select the GRU-based, CNN-based and attention-based recommendation models from the recommender system domain (Hidasi et al., 2015; Li et al., 2017a) and equip them with the temporal kernel. The detailed settings, ablation studies and sensitivity analysis are all in Appendix B. The results are shown in Table A.2. We observe that the temporal kernel approach brings various degrees of improvements to the recommendation models by characterizing
the continuous-time information. The positive results from the recommendation task also suggest the potential of our approach to make an impact in broader domains.
6 DISCUSSION
In this paper, we discuss the insufficiency of existing work on characterizing continuous-time data with deep learning models and describe a principled temporal kernel approach that expands neural networks to characterize continuous-time data. The proposed learning approach has strong theoretical guarantees, and can be easily adapted to a broad range of applications such as deep spatial-temporal modelling, outlier and burst detection, and generative modelling for time series data.
Scope and limitation. Although the temporal kernel approach is motivated by the limiting Gaussian behavior of infinitely wide neural networks, in practice it suffices to use regular widths, as we did in our experiments (see Appendix B.2 for the configurations). Therefore, there are still gaps between our theoretical understanding and the observed empirical performance, which require more dedicated analysis. One possible direction is to apply the techniques in Daniely et al. (2016) to characterize the dual kernel view of finite-width neural networks. The technical details, however, will be more involved. It is also arguably true that we build the connection between the temporal kernel view and continuous-time systems in an indirect fashion, compared with the ODE neural networks. However, our approach is fully compatible with the deep learning subroutines, while the end-to-end ODE neural networks require substantial modifications to the modelling and implementation. Nevertheless, ODE neural networks are (in theory) capable of modelling more complex systems, of which the continuous-time setting is a special case. Our work, on the other hand, is dedicated to the temporal setting.
A APPENDIX
We provide the omitted proofs, detailed discussions, extensions and complete numerical results.
A.1 NUMERICAL RESULTS FOR SECTION 5
A.2 SUPPLEMENTARY MATERIAL FOR SECTION 2
We discuss the detailed background for the Gaussian process behavior of neural network and the training trajectory under neural tangent kernel, as well as the proof for Claim 1.
A.2.1 GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL FOR DEEP LEARNING MODELS
The Gaussian process (GP) view of neural networks at random initialization was originally discussed in (Neal, 2012). Recently, CNN and other standard neural architectures have all been recognized as functions drawn from GP in the limit of infinite network width (Novak et al., 2018; Yang, 2019). When trained by gradient descent under infinitesimal step schedule, the gradient flow of the standard neural architectures can be described by the notion of Neural Tangent Kernel (NTK) whose asymptotic behavior under infinite network width is known (Jacot et al., 2018). The discovery of NTK has led to several papers studying the training and generalization properties of neural networks (Allen-Zhu et al., 2019; Arora et al., 2019a;b).
For an L-layer FFN $f(\theta,x) = f^{(L)}$ with hidden dimensions $\{d_h\}_{h=1}^L$, recursively defined via:
$$f^{(L)} = \mathbf{W}^{(L)}f^{(L-1)}(x) + b^{(L)}, \qquad f^{(h)}(x) = \frac{1}{\sqrt{d_h}}\mathbf{W}^{(h)}\sigma\big(f^{(h-1)}(x)\big) + b^{(h)}, \qquad f^{(0)}(x) = x, \tag{A.1}$$
for $h = 1,2,\ldots,L-1$, where $\sigma(\cdot)$ is the activation function and the layer weights $\mathbf{W}^{(L)}\in\mathbb{R}^{d_{L-1}}$, $\mathbf{W}^{(h)}\in\mathbb{R}^{d_h\times d_{h-1}}$ and intercepts are initialized by sampling independently from $\mathcal{N}(0,1)$ (without loss of generality). As $d_1,\ldots,d_L\to\infty$, the $f^{(h)}$ tend in law to i.i.d. Gaussian processes with covariance $\Sigma^{(h)}$, defined recursively as shown by Neal (2012):
$$\Sigma^{(1)}(x,x') = \frac{1}{h_1}x^\top x' + 1, \qquad \Sigma^{(h)}(x,x') = \mathbb{E}_{f\sim\mathcal{N}(0,\Sigma^{(h-1)})}\Big[\sigma\big(f(x)\big)\,\sigma\big(f(x')\big)\Big] + 1. \tag{A.2}$$
We also refer to Σ(h) as the neural network kernel to distinguish from the other kernel notions. Given a training dataset {xi, yi}ni=1, let f ( θ(s) ) = ( f(θ(s),x1), . . . , f(θ(s),xn) ) be the network outputs at the sth training step and y = (y1, . . . , yn).
When training the network by minimizing the squared loss $\ell(\theta)$ with an infinitesimal learning rate, i.e. $\frac{d\theta^{(s)}}{ds} = -\nabla\ell(\theta^{(s)})$, the network outputs at training step $s$ follow the evolution (Jacot et al., 2018):
$$\frac{df\big(\theta^{(s)}\big)}{ds} = -\Theta(s)\times\big(f\big(\theta^{(s)}\big) - y\big), \qquad \big[\Theta(s)\big]_{ij} = \Big\langle\frac{\partial f(\theta^{(s)},x_i)}{\partial\theta}, \frac{\partial f(\theta^{(s)},x_j)}{\partial\theta}\Big\rangle. \tag{A.3}$$
The above $\Theta(s)$ is referred to as the NTK, and recent results show that when the network widths go to infinity (or are sufficiently large), $\Theta(s)$ converges to a fixed $\Theta_0$ almost surely (or with high probability).
For a standard L-layer FFN, the NTK $\Theta_0 = \Theta_0^{(L)}$ for the parameters $\{\mathbf{W}^{(h)},b^{(h)}\}$ on the $h$th layer can also be computed recursively:
$$\Theta_0^{(h)}(x_i,x_j) = \Sigma^{(h)}(x_i,x_j), \qquad \dot\Sigma^{(k)}(x_i,x_j) = \mathbb{E}_{f\sim\mathcal{N}(0,\Sigma^{(k-1)})}\big[\dot\sigma(f(x_i))\,\dot\sigma(f(x_j))\big],$$
$$\text{and}\quad \Theta_0^{(k)}(x_i,x_j) = \Theta_0^{(k-1)}(x_i,x_j)\,\dot\Sigma^{(k)}(x_i,x_j) + \Sigma^{(k)}(x_i,x_j), \qquad k = h+1,\ldots,L. \tag{A.4}$$
A number of optimization and generalization properties of neural networks can be studied using NTK, which we refer the interested readers to (Lee et al., 2019; Allen-Zhu et al., 2019; Arora et al., 2019a;b). We also point out that the above GP and NTK constructions can be carried out on all standard neural architectures including CNN, RNN and the attention mechanism (Yang, 2019).
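As a concrete illustration of the recursions (A.2) and (A.4) (our sketch; the closed-form Gaussian expectations assume $\sigma = \mathrm{ReLU}$, following the arc-cosine kernels of Cho & Saul (2009), whereas the text keeps $\sigma$ generic):

```python
import numpy as np

def relu_expectations(S):
    """Closed-form Gaussian expectations for sigma = ReLU (Cho & Saul, 2009).

    Given a layer covariance matrix S, returns E[sigma(f_i) sigma(f_j)] and
    E[sigma'(f_i) sigma'(f_j)] for f ~ N(0, S), as used in (A.2) and (A.4).
    """
    d = np.sqrt(np.diag(S))
    cos_t = np.clip(S / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos_t)
    E_ss = np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
    E_dd = (np.pi - theta) / (2 * np.pi)
    return E_ss, E_dd

X = np.random.default_rng(4).normal(size=(6, 3))   # n = 6 inputs in R^3
Sigma = X @ X.T / X.shape[1] + 1.0                  # Sigma^(1), cf. (A.2)
Theta = Sigma.copy()                                # NTK partial sum, cf. (A.4)
for layer in range(2, 5):                           # layers 2..4
    E_ss, E_dd = relu_expectations(Sigma)
    Sigma = E_ss + 1.0                              # Sigma^(h), cf. (A.2)
    Theta = Theta * E_dd + Sigma                    # Theta^(h), cf. (A.4)
print(np.round(Theta, 3))
```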
A.2.2 PROOF FOR CLAIM 1
In this part, we denote the continuous-time system by $X(t)$ in order to introduce the full set of notations needed for our proof, where the length of the discretization interval is explicitly considered. Note that we especially constructed the example in Section 2 so that the derivations are not too cumbersome. However, the techniques we use here can be extended to more complicated settings.
Proof. Consider $X(t)$ to be a second-order continuous-time autoregressive process with covariance function $k(t)$ and spectral density function (SDF) $s(\omega)$ such that $s(\omega) = \int_{-\infty}^\infty\exp(-i\omega t)k(t)\,dt$. The covariance function of the discretization $X_a[n] = X(na)$, for any fixed interval $a > 0$, is then given by $k_a[n] = k(na)$. According to standard results in time series, the SDF of $X(t)$ has the form:
$$s(\omega) = \frac{a_1}{\omega^2 + b_1^2} + \frac{a_2}{\omega^2 + b_2^2}, \qquad a_1 + a_2 = 0, \quad a_1 b_2^2 + a_2 b_1^2 \ne 0. \tag{A.5}$$
We assume without loss of generality that $b_1, b_2$ are positive numbers. Note that the kernel function of $X_a[n]$ can also be written as
$$\begin{aligned}
k_a[n] &= \int_{-\infty}^{\infty}\exp(ian\omega)\,s(\omega)\,d\omega \\
&= \frac{1}{a}\sum_{k=-\infty}^{\infty}\int_{(2k-1)\pi}^{(2k+1)\pi}\exp(in\omega)\,s(\omega/a)\,d\omega \\
&= \frac{1}{a}\int_{-\infty}^{\infty}\exp(in\omega)\sum_{k=-\infty}^{\infty}s\Big(\frac{\omega+2k\pi}{a}\Big)\,d\omega,
\end{aligned}\tag{A.6}$$
which suggests that the SDF of the discretization $X_a[n]$ is given by:
$$s_a(\omega) = \frac{1}{a}\sum_{k=-\infty}^{\infty}s\Big(\frac{\omega+2k\pi}{a}\Big) = \frac{a_1}{2}\Big(\frac{e^{2ab_1}-1}{b_1|e^{ab_1}-e^{i\omega}|^2} - \frac{e^{2ab_2}-1}{b_2|e^{ab_2}-e^{i\omega}|^2}\Big) = \frac{a_1\big(d_1 - 2d_2\cos(\omega)\big)}{2b_1b_2\,\big|(e^{ab_1}-e^{i\omega})(e^{ab_2}-e^{i\omega})\big|^2},\tag{A.7}$$
where $d_2 = b_2e^{ab_2}(e^{2ab_1}-1) - b_1e^{ab_1}(e^{2ab_2}-1)$. By the definition of a discrete-time autoregressive process, $X_a[n]$ is a second-order AR process only if $d_2 = 0$, which happens if and only if $b_2/b_1 = (e^{ab_2}-e^{-ab_2})/(e^{ab_1}-e^{-ab_1})$. However, the function $g(x) = \exp(ax) - \exp(-ax)$ is convex on $[0,\infty)$ (since the time interval $a > 0$) with $g(0) = 0$, so $g(x)/x$ is increasing and the above equality holds only if $b_1 = b_2$. This contradicts (A.5), since $a_1 + a_2 = 0$ and $a_1b_2^2 + a_2b_1^2 \ne 0$ imply $a_1(b_2^2 - b_1^2) \ne 0$, i.e. $b_1 \ne b_2$. Hence, $X_a[n]$ cannot be a second-order discrete-time autoregressive process.
A.3 SUPPLEMENTARY MATERIAL FOR SECTION 3
We first present the related discussions and proof for Claim 2 on the connection between continuoustime system and temporal kernel. Then we prove the convergence result in Theorem 1 regarding the random feature representation for non-stationary kernel. In the sequel, we show the new results for the Gaussian process behavior and neural tangent kernel under random feature representations, and discuss the potential usage of our results. Finally, we prove the sample-consistency result when the spectral distribution is misspecified.
A.3.1 PROOF AND DISCUSSIONS FOR CLAIM 2
Proof. Recall that we study the dynamic system given by:
$$a_n(x)\frac{d^n f(x,t)}{dt^n} + \cdots + a_0(x)f(x,t) = b_m(x)\frac{d^m \epsilon(x,t)}{dt^m} + \cdots + b_0(x)\,\epsilon(x,t), \tag{A.8}$$
where $\epsilon(x, t=t_0)\sim N\big(0,\Sigma^{(h)}\big)$, $\forall t_0\in\mathcal{T}$. The solution process of the above continuous-time system is also a Gaussian process, since $\epsilon(x,t)$ is a Gaussian process and the solution of a linear differential equation is a linear operation on the input. For ease of notation, we assume $b_0(x) = 1$ and $b_1(x) = 0, \ldots, b_m(x) = 0$, which does not change the arguments in the proof. We apply the Fourier transform on both sides and solve for the Fourier transform $\tilde f(i\omega_x, i\omega)$:
$$\tilde f(i\omega_x, i\omega) = \Big(\frac{1}{a_n(x)\cdot(i\omega)^n + \cdots + a_1(x)\cdot i\omega + a_0(x)}\Big)\,W(i\omega;\omega_x), \tag{A.9}$$
where $W(i\omega;\omega_x)$ is the Fourier transform of $\epsilon(x,t)$. If we do not make the assumption on $\{b_j(x)\}_{j=1}^m$, they simply show up in the numerator in the same fashion as the $\{a_j(x)\}_{j=1}^n$. Let $G_{\theta_T}(i\omega;x) = a_n(x)\cdot(i\omega)^n + \cdots + a_1(x)\cdot i\omega + a_0(x)$, and let $p(\omega_x) = |W(i\omega;\omega_x)|^2$ be the spectral density of the Gaussian process corresponding to $\epsilon$ (its spectral density does not depend on $\omega$ because $\epsilon$ is a Gaussian white noise process in the time dimension). The dependency of $G(\cdot\,;\cdot)$ on $\theta_T$ is because we defined $\theta_T$ as the underlying parameterization of $\{a_j(\cdot)\}_{j=1}^n$ in the statement of Claim 2. Then the spectral density of the process $f(x,t)$ is given by
$$p(\omega,\omega_x) = C\cdot\frac{p(\omega_x)}{|G_{\theta_T}(i\omega;x)|^2} \propto p(\omega_x)\,p_{\theta_T}(\omega;x),$$
where $C$ is a constant corresponding to the spectral density of the Gaussian white noise in the time dimension. Notice that the spectral density function obtained this way is regular, since it has the form $p_{\theta_T}(\omega;x) = \text{constant}/(\text{polynomial in }\omega^2)$.
Therefore, according to the classical Wiener-Khinchin theorem (Brockwell et al., 1991), the covariance function of the solution process is given by the inverse Fourier transform of the spectral density:
$$\begin{aligned}
\psi(x,t) &= \frac{1}{2\pi}\int p(\omega,\omega_x)\exp\big(i[\omega,\omega_x]^\top[t,x]\big)\,d(\omega,\omega_x) \\
&\propto \int p_{\theta_T}(\omega;x)\exp(i\omega t)\,d\omega \cdot \int p(\omega_x)\exp(i\omega_x^\top x)\,d\omega_x \\
&\propto K_{\theta_T}\big((x,t),(x,t)\big)\cdot\Sigma^{(h)}(x,x).
\end{aligned}\tag{A.10}$$
Therefore, we reach the conclusion in Claim 2 by taking $\Sigma_T^{(h)}(x,t;x',t') = \psi(x-x', t-t')$.
The inverse statement of Claim 2 may not always be true, since not every neural-temporal kernel corresponds exactly to a continuous-time system in the form of (A.8). However, we may construct a continuous-time system that approximates the kernel arbitrarily well in the following way, using polynomial approximation tools such as Taylor expansion.
For a neural-temporal kernel $\Sigma_T^{(h)}$, we first compute its Fourier transform to obtain the spectral density p(ωx, ω). Note that p(ωx, ω) should be a rational function of the form (polynomial in ω^2)/(polynomial in ω^2), since otherwise it does not have a stable spectral factorization that leads to a linear dynamic system. To achieve this, we can always apply Taylor expansion or Padé approximants that recover p(ωx, ω) arbitrarily well.
Then we conduct a spectral factorization on p(ωx, ω) to find G(iωx, iω) and p(ωx) such that p(ωx, ω) = G(iωx, iω)p(ωx)G(−iωx, −iω). Since p(ωx, ω) is now a rational function of ω^2, we can find G(iωx, iω) as:
$$G(i\boldsymbol{\omega}_x, i\omega) = \frac{b_k(i\boldsymbol{\omega}_x)(i\omega)^k + \cdots + b_1(i\boldsymbol{\omega}_x)(i\omega) + b_0(i\boldsymbol{\omega}_x)}{a_q(i\boldsymbol{\omega}_x)(i\omega)^q + \cdots + a_1(i\boldsymbol{\omega}_x)(i\omega) + a_0(i\boldsymbol{\omega}_x)}.$$
Let αj(x) and βj(x) be the pseudo-differential operators of aj(iωx) and bj(iωx), defined in terms of their inverse Fourier transforms (Shubin, 1987); then the corresponding continuous-time system is given by:
$$\alpha_q(\mathbf{x})\frac{d^qf(\mathbf{x},t)}{dt^q} + \cdots + \alpha_0(\mathbf{x})f(\mathbf{x},t) = \beta_k(\mathbf{x})\frac{d^k\epsilon(t)}{dt^k} + \cdots + \beta_0(\mathbf{x})\epsilon(t). \quad (A.11)$$
For a concrete end-to-end example, we consider the simplified setting where the temporal kernel function is given by:
$$K_{\theta_T}(t, t') := k_{\theta_1,\theta_2,\theta_3}(t - t') = \theta_2^2\,\frac{2^{1-\theta_1}}{\Gamma(\theta_1)}\Big(\sqrt{2\theta_1}\,\frac{|t - t'|}{\theta_3}\Big)^{\theta_1}B_{\theta_1}\Big(\sqrt{2\theta_1}\,\frac{|t - t'|}{\theta_3}\Big),$$
where $B_{\theta_1}(\cdot)$ is the modified Bessel function of the second kind, so $K_{\theta_T}(t, t')$ belongs to the well-known Matérn family. It is straightforward to show that the spectral density function is given by:
$$s(\omega) \propto \Big(\frac{2\theta_1}{\theta_3^2} + \omega^2\Big)^{-(\theta_1 + 1/2)}.$$
As a consequence, we see that $s(\omega) \propto \big(\frac{\sqrt{2\theta_1}}{\theta_3} + i\omega\big)^{-(\theta_1+1/2)}\big(\frac{\sqrt{2\theta_1}}{\theta_3} - i\omega\big)^{-(\theta_1+1/2)}$, so we directly have $G_{\theta_T}(\omega) = \big(\frac{\sqrt{2\theta_1}}{\theta_3} + i\omega\big)^{-(\theta_1+1/2)}$ instead of having to seek a polynomial approximation. Now we can easily expand $G_{\theta_T}(\omega)$ using the binomial formula to find the linear parameters of the continuous-time system. For instance, when θ1 = 3/2, we have:
$$\frac{d^2f(t)}{dt^2} + \frac{2\sqrt{2\theta_1}}{\theta_3}\frac{df(t)}{dt} + \frac{2\theta_1}{\theta_3^2}f(t) = \epsilon(t).$$
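As an illustrative sanity check on this correspondence (not part of the paper's experiments), the sketch below simulates the θ1 = 3/2 system above in state-space form with an Euler–Maruyama scheme and compares the empirical autocovariance of the sample path with the Matérn-3/2 kernel; the lengthscale, scale, and step size are all hypothetical choices, and q = 4σ²λ³ is the white-noise intensity matching the kernel scale.

```python
import numpy as np

rng = np.random.default_rng(0)
ell, sigma2 = 1.0, 1.0
lam = np.sqrt(3.0) / ell            # sqrt(2 * theta_1) / theta_3 for theta_1 = 3/2
q = 4.0 * sigma2 * lam**3           # white-noise intensity matching the kernel scale

dt, n = 1e-2, 500_000
F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
x = np.array([rng.normal(0.0, np.sqrt(sigma2)),
              rng.normal(0.0, lam * np.sqrt(sigma2))])   # stationary initialization
f = np.empty(n)
for i in range(n):                   # Euler-Maruyama discretization of the SDE
    f[i] = x[0]
    x = x + F @ x * dt + np.array([0.0, np.sqrt(q * dt)]) * rng.normal()

def matern32(tau):
    """Matern-3/2 kernel with lengthscale ell and variance sigma2."""
    r = np.sqrt(3.0) * abs(tau) / ell
    return sigma2 * (1.0 + r) * np.exp(-r)

lag = int(0.5 / dt)
print("empirical cov:", np.mean(f[:-lag] * f[lag:]), " kernel:", matern32(0.5))
```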
A.3.2 PROOF FOR PROPOSITION 1
Proof. We first need to show that the random Fourier features for the non-stationary kernel $K_T\big((\mathbf{x}, t), (\mathbf{x}', t')\big)$ can be given by (11), i.e.
$$\boldsymbol{\phi}(\mathbf{x}, t) = \frac{1}{2\sqrt{m}}\Big[\ldots, \cos\big([\mathbf{x}, t]^{\top}\boldsymbol{\omega}_{1,i}\big) + \cos\big([\mathbf{x}, t]^{\top}\boldsymbol{\omega}_{2,i}\big), \sin\big([\mathbf{x}, t]^{\top}\boldsymbol{\omega}_{1,i}\big) + \sin\big([\mathbf{x}, t]^{\top}\boldsymbol{\omega}_{2,i}\big), \ldots\Big].$$
To simplify notations, we let z := [x, t] ∈ Rd+1 and Z = X × T . For non-stationary kernels, their corresponding Fourier transform can be characterized by the following lemma. Assume without loss of generality that KT is differentiable.
Lemma A.1 (Yaglom (1987)). A non-stationary kernel k(z1, z2) is positive definite in R^d if and only if, after scaling, it has the form:
$$k(\mathbf{z}_1, \mathbf{z}_2) = \int \exp\big(i(\boldsymbol{\omega}_1^{\top}\mathbf{z}_1 - \boldsymbol{\omega}_2^{\top}\mathbf{z}_2)\big)\mu(d\boldsymbol{\omega}_1, d\boldsymbol{\omega}_2), \quad (A.12)$$
where µ(dω1, dω2) is some positive-semidefinite probability measure with bounded variation.
The above lemma can be thought of as an extension of the classical Bochner's theorem that underlies the random Fourier features for stationary kernels. Notice that when the covariance function of the measure µ only has non-zero diagonal elements and ω1 = ω2, we recover the spectral representation stated in Bochner's theorem. Therefore, we can also approximate (A.12) with a Monte Carlo integral. However, we need to ensure the positive-semidefiniteness of the spectral density for µ(dω1, dω2), which we denote by p(ω1, ω2). It has been suggested in Remes et al. (2017) that we consider another density function q(ω1, ω2), let p be taken on the product space of q, and then symmetrize:
$$p(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2) = \frac{1}{4}\big(q(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2) + q(\boldsymbol{\omega}_2, \boldsymbol{\omega}_1) + q(\boldsymbol{\omega}_1, \boldsymbol{\omega}_1) + q(\boldsymbol{\omega}_2, \boldsymbol{\omega}_2)\big). \quad (A.13)$$
Then (A.12) suggests that
$$k(\mathbf{z}_1, \mathbf{z}_2) = \frac{1}{4}\mathbb{E}_q\Big[\exp\big(i(\boldsymbol{\omega}_1^{\top}\mathbf{z}_1 - \boldsymbol{\omega}_2^{\top}\mathbf{z}_2)\big) + \exp\big(i(\boldsymbol{\omega}_2^{\top}\mathbf{z}_1 - \boldsymbol{\omega}_1^{\top}\mathbf{z}_2)\big) + \exp\big(i\boldsymbol{\omega}_1^{\top}(\mathbf{z}_1 - \mathbf{z}_2)\big) + \exp\big(i\boldsymbol{\omega}_2^{\top}(\mathbf{z}_1 - \mathbf{z}_2)\big)\Big].$$
Recall that the real part of $\exp\big(i(\boldsymbol{\omega}_1^{\top}\mathbf{z}_1 - \boldsymbol{\omega}_2^{\top}\mathbf{z}_2)\big)$ is $\cos(\boldsymbol{\omega}_1^{\top}\mathbf{z}_1 - \boldsymbol{\omega}_2^{\top}\mathbf{z}_2)$, so with the trigonometric identities it is straightforward to verify that $k(\mathbf{z}_1, \mathbf{z}_2) = \mathbb{E}_q\big[\boldsymbol{\phi}(\mathbf{z}_1)^{\top}\boldsymbol{\phi}(\mathbf{z}_2)\big]$. Hence, the random Fourier features for the non-stationary kernel can be given in the form of (11).
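This equality can be checked mechanically. The following self-contained sketch (with a hypothetical product density q given by two Gaussians with made-up means) confirms that the feature-map estimator and a direct Monte Carlo of the symmetrized spectral representation coincide, since both reduce to the same trigonometric sum over shared samples.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 100_000                       # input dim (features + time), #samples
mu1, mu2 = np.full(d, 0.5), np.full(d, -0.3)
w1 = rng.normal(mu1, 1.0, size=(m, d))  # omega_{1,i} samples
w2 = rng.normal(mu2, 1.0, size=(m, d))  # omega_{2,i} samples

def phi(z):
    """Random feature map of the display above; returns a vector of length 2m."""
    c = np.cos(w1 @ z) + np.cos(w2 @ z)
    s = np.sin(w1 @ z) + np.sin(w2 @ z)
    return np.concatenate([c, s]) / (2.0 * np.sqrt(m))

def k_direct(z1, z2):
    """Direct Monte Carlo of the symmetrized spectral representation (A.13)."""
    return np.mean(np.cos(w1 @ z1 - w2 @ z2) + np.cos(w2 @ z1 - w1 @ z2)
                   + np.cos(w1 @ (z1 - z2)) + np.cos(w2 @ (z1 - z2))) / 4.0

z1, z2 = rng.normal(size=d), rng.normal(size=d)
print(phi(z1) @ phi(z2), k_direct(z1, z2))   # the two estimates coincide
```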
Then we show the uniform convergence result as the number of samples goes to infinity when computing $\mathbb{E}_q\big[\boldsymbol{\phi}(\mathbf{z})^{\top}\boldsymbol{\phi}(\mathbf{z}')\big]$ by the Monte Carlo integral. Let $\tilde{\mathcal{Z}} = \mathcal{Z} \times \mathcal{Z}$, so $\tilde{\mathcal{Z}} = \{(\mathbf{x}, t, \mathbf{x}', t') : \mathbf{x}, \mathbf{x}' \in \mathcal{X};\ t, t' \in \mathcal{T}\}$. Since diam(X) = l and T = [0, tmax], we have $\mathrm{diam}(\tilde{\mathcal{Z}}) = l^2t_{\max}^2$. Let the approximation error be
$$\Delta(\mathbf{z}, \mathbf{z}') = \boldsymbol{\phi}(\mathbf{z})^{\top}\boldsymbol{\phi}(\mathbf{z}') - K_T(\mathbf{z}, \mathbf{z}'). \quad (A.14)$$
The strategy is to use an ε-net covering of the input space $\tilde{\mathcal{Z}}$, which requires $N = \big(2l^2t_{\max}^2/r\big)^{d+1}$ balls of radius r. Let $C = \{\mathbf{c}_i\}_{i=1}^N$ be the centers of the balls. We first bound $|\Delta(\mathbf{c}_i)|$ and the Lipschitz constant $L_{\Delta}$ of the error function ∆, and then combine them to get the desired result.
Since ∆ is continuous and differentiable w.r.t. z, z′ according to the definition of φ, we have $L_{\Delta} = \|\nabla\Delta(\mathbf{c}^*)\|$, where $\mathbf{c}^* = \arg\max_{\mathbf{c}\in C}\|\nabla\Delta(\mathbf{c})\|$. Let $\mathbf{c}^* = (\tilde{\mathbf{z}}, \tilde{\mathbf{z}}')$. By checking the regularity conditions for exchanging the integral and differential operations, we verify that $\mathbb{E}\big[\nabla\boldsymbol{\phi}(\mathbf{z})^{\top}\boldsymbol{\phi}(\mathbf{z}')\big] = \nabla\mathbb{E}\big[\boldsymbol{\phi}(\mathbf{z})^{\top}\boldsymbol{\phi}(\mathbf{z}')\big] = \nabla K_T(\mathbf{z}, \mathbf{z}')$. We do not present the details here, since it is easy to check the regularity of $\boldsymbol{\phi}(\mathbf{z})^{\top}\boldsymbol{\phi}(\mathbf{z}')$: it consists of sine and cosine functions, which are continuous, bounded, and have continuous bounded derivatives. Hence, we have:
$$\begin{aligned}
\mathbb{E}\big[L_{\Delta}^2\big] &= \mathbb{E}_{\tilde{\mathbf{z}},\tilde{\mathbf{z}}'}\Big[\big\|\nabla\boldsymbol{\phi}(\tilde{\mathbf{z}})^{\top}\boldsymbol{\phi}(\tilde{\mathbf{z}}') - \nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\big\|^2\Big] \\
&= \mathbb{E}_{\tilde{\mathbf{z}},\tilde{\mathbf{z}}'}\Big[\mathbb{E}\|\nabla\boldsymbol{\phi}(\tilde{\mathbf{z}})^{\top}\boldsymbol{\phi}(\tilde{\mathbf{z}}')\|^2 - 2\|\nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\|\cdot\|\nabla\boldsymbol{\phi}(\tilde{\mathbf{z}})^{\top}\boldsymbol{\phi}(\tilde{\mathbf{z}}')\| + \|\nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\|^2\Big] \\
&\le \mathbb{E}_{\tilde{\mathbf{z}},\tilde{\mathbf{z}}'}\Big[\mathbb{E}\|\nabla\boldsymbol{\phi}(\tilde{\mathbf{z}})^{\top}\boldsymbol{\phi}(\tilde{\mathbf{z}}')\|^2 - \|\nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\|^2\Big] \quad \text{(by Jensen's inequality)} \\
&\le \mathbb{E}\|\nabla\boldsymbol{\phi}(\tilde{\mathbf{z}})^{\top}\boldsymbol{\phi}(\tilde{\mathbf{z}}')\|^2 \\
&= \mathbb{E}\Big\|\nabla\Big(\big(\cos(\tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_1) + \cos(\tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_2)\big)\big(\cos((\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_1) + \cos((\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_2)\big) + \big(\sin(\tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_1) + \sin(\tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_2)\big)\big(\sin((\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_1) + \sin((\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_2)\big)\Big)\Big\|^2 \\
&= 2\mathbb{E}\Big\|\boldsymbol{\omega}_1\big(\sin(\tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_1 - (\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_1) + \sin((\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_2 - \tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_1)\big) + \boldsymbol{\omega}_2\big(\sin(\tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_1 - (\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_2) + \sin((\tilde{\mathbf{z}}')^{\top}\boldsymbol{\omega}_2 - \tilde{\mathbf{z}}^{\top}\boldsymbol{\omega}_2)\big)\Big\|^2 \\
&\le 8\mathbb{E}\big\|[\boldsymbol{\omega}_1, \boldsymbol{\omega}_2]\big\|^2 = 8\sigma_p^2.
\end{aligned} \quad (A.15)$$
Hence, by Markov's inequality, we have
$$p\Big(L_{\Delta} \ge \frac{\epsilon}{2r}\Big) \le 8\sigma_p^2\Big(\frac{2r}{\epsilon}\Big)^2. \quad (A.16)$$
Then we notice that for all c ∈ C, ∆(c) is the mean of m/2 terms bounded in [−1, 1] with expectation 0. Applying a union bound and Hoeffding's inequality for bounded random variables, we have:
$$p\Big(\cup_{i=1}^N |\Delta(\mathbf{c}_i)| \ge \frac{\epsilon}{2}\Big) \le 2N\exp\Big(-\frac{m\epsilon^2}{16}\Big). \quad (A.17)$$
Combining the above results, we get
$$\begin{aligned}
p\Big(\sup_{(\mathbf{z},\mathbf{z}')\in C}\big|\Delta(\mathbf{z}, \mathbf{z}')\big| \le \epsilon\Big) &\ge 1 - \frac{32\sigma_p^2r^2}{\epsilon^2} - 2\Big(\frac{2l^2t_{\max}^2}{r}\Big)^{d+1}\exp\Big(-\frac{m\epsilon^2}{16}\Big) \\
&\ge 1 - C(d)\Big(\frac{l^2t_{\max}^2\sigma_p}{\epsilon}\Big)^{2(d+1)/(d+3)}\exp\Big(-\frac{m\epsilon^2}{8(d+3)}\Big),
\end{aligned} \quad (A.18)$$
where in the second inequality we optimize over r such that $r^* = \big(\frac{(d+1)k_1}{k_2}\big)^{1/(d+3)}$, with $k_1 = 2(2l^2t_{\max}^2)^{d+1}\exp(-m\epsilon^2/16)$ and $k_2 = 32\sigma_p^2\epsilon^{-2}$. The constant term is given by $C(d) = 2^{\frac{7d+9}{d+3}}\Big(\big(\frac{d+1}{2}\big)^{\frac{-d-1}{d+3}} + \big(\frac{d+1}{2}\big)^{\frac{2}{d+3}}\Big)$.
A.3.3 THE GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL AFTER COMPOSING WITH THE TEMPORAL KERNEL UNDER THE RANDOM FEATURE REPRESENTATION
This section is dedicated to showing the infinite-width Gaussian process behavior and neural tangent kernel properties, similar to what we discussed in Appendix A.2, when composing neural networks in the feature space with the random feature representation of the temporal kernel.
For brevity, we still consider the standard L-layer FFN of (A.1). Suppose we compose the FFN with the random feature representation φ(x, t) at the kth layer. It is easy to see that the neural network kernels for the first k − 1 layers are unchanged, so we compute them in the usual way as in (A.2). For the kth layer, it is straightforward to verify that:
$$\lim_{d_k\to\infty}\mathbb{E}\Big[\frac{1}{d_k}\big\langle \mathbf{W}^{(k)}f^{(k-1)}(\theta,\mathbf{x})\circ\boldsymbol{\phi}(\mathbf{x},t),\ \mathbf{W}^{(k)}f^{(k-1)}(\theta,\mathbf{x}')\circ\boldsymbol{\phi}(\mathbf{x}',t')\big\rangle\,\Big|\,f^{(k-1)}\Big] \to \Sigma^{(k)}(\mathbf{x},\mathbf{x}')\cdot K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big).$$
The intuition is that the sources of randomness in W (and thus in f(θ, ·)) and in φ(·, ·) are independent: the former is caused by the network parameter initialization and the latter is induced by the random features. The covariance functions for the subsequent layers can be derived by induction, e.g. for the (k + 1)th layer we have:
$$\Sigma_T^{(k+1)}\big((\mathbf{x},t),(\mathbf{x}',t')\big) = \mathbb{E}_{f\sim N(0,\Sigma^{(k)}\otimes K_T)}\big[\sigma(f(\mathbf{x},t))\sigma(f(\mathbf{x}',t'))\big].$$
In summary, composing the FFN, at any given layer, with the temporal kernel using its random feature representation does not change the infinite-width Gaussian process behavior. The statement holds for all deep learning models that have the Gaussian process behavior, which includes most standard neural architectures such as RNN, CNN and the attention mechanism (Yang, 2019).
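A minimal numerical sketch of this limit (dimensions illustrative): since the composition uses the outer product, the inner product of the composed representations factorizes into the product of the feature-space and temporal inner products, which converges to Σ^(k) · KT as the width d_k grows.

```python
import numpy as np

rng = np.random.default_rng(2)
d_prev, d_k, two_m = 64, 4096, 32
f_prev  = rng.normal(size=d_prev)        # stand-in for f^{(k-1)}(theta, x)
f_prevp = rng.normal(size=d_prev)        # stand-in for f^{(k-1)}(theta, x')
phi, phip = rng.normal(size=two_m), rng.normal(size=two_m)  # temporal features

W = rng.normal(size=(d_k, d_prev))       # Gaussian init of W^{(k)}
g, gp = W @ f_prev, W @ f_prevp
lhs = np.outer(g, phi).ravel() @ np.outer(gp, phip).ravel() / d_k
rhs = (f_prev @ f_prevp) * (phi @ phip)  # Sigma^{(k)} * K_T (up to MC error)
print(lhs, rhs)                          # close for large d_k
```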
The derivation for the NTK, however, is more involved, since the gradients on all the layers are affected. We summarize the result for the L-layer FFN in the following proposition and provide the derivations afterwards.
Proposition A.1. Suppose $f^{(k)}\big(\theta,(\mathbf{x},t)\big) = \mathrm{vec}\big(f^{(k)}(\theta,\mathbf{x})\circ\boldsymbol{\phi}(\mathbf{x},t)\big)$ in the standard L-layer FFN. Let $\Sigma_T^{(h)} = \Sigma^{(h)}$ for h = 1, . . . , k − 1, $\Sigma_T^{(k)} = \Sigma^{(k)}\otimes K_T$, and $\Sigma_T^{(h)} = \mathbb{E}_{f\sim N(0,\Sigma_T^{(h-1)})}[\sigma(f)\sigma(f)] + 1$ for h = k + 1, . . . , L. If the activation functions σ have polynomially bounded weak derivatives, then as the network widths d1, . . . , dL → ∞, the neural tangent kernel Θ^{(L)} converges almost surely to $\Theta_T^{(L)}$, whose partial application on the parameters {W^{(h)}, b^{(h)}} of the hth layer is given recursively by:
$$\Theta_T^{(h)} = \Sigma_T^{(h)}, \qquad \Theta_T^{(k)} = \Theta_T^{(k-1)}\otimes\dot{\Sigma}_T^{(k)} + \Sigma_T^{(k)}, \quad k = h+1, \ldots, L. \quad (A.19)$$
Proof. The strategies for deriving the NTK and showing its convergence have been discussed in Jacot et al. (2018); Yang (2019); Arora et al. (2019a). The key purpose of presenting the derivations here is to show how the convergence results for the neural-temporal Gaussian process (Section 4.2) affect the NTK. To avoid the cumbersome notation induced by the peripheral intercept terms, we omit the intercepts b in the FFN without loss of generality. We let $g^{(h)} = \frac{1}{\sqrt{d_h}}\sigma\big(f^{(h)}(\mathbf{x},t)\big)$, so the FFN can be equivalently defined via the recursion: $f^{(h)} = \mathbf{W}^{(h)}g^{(h-1)}(\mathbf{x},t)$. For the final output $f\big(\theta,(\mathbf{x},t)\big) := \mathbf{W}^{(L)}f^{(L)}(\mathbf{x},t)$, the partial derivative with respect to W^{(h)} is given by:
$$\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial \mathbf{W}^{(h)}} = \mathbf{z}^{(h)}(\mathbf{x},t)\big(g^{(h-1)}(\mathbf{x},t)\big)^{\top}, \quad (A.20)$$
with z^{(h)} defined by:
$$\mathbf{z}^{(h)}(\mathbf{x},t) = \begin{cases} 1, & h = L, \\ \frac{1}{\sqrt{d_h}}\mathbf{D}^{(h)}(\mathbf{x},t)\big(\mathbf{W}^{(h+1)}\big)^{\top}\mathbf{z}^{(h+1)}(\mathbf{x},t), & h = 1, \ldots, L-1, \end{cases} \quad (A.21)$$
where
$$\mathbf{D}^{(h)}(\mathbf{x},t) = \begin{cases} \mathrm{diag}\Big(\dot{\sigma}\big(f^{(h)}(\mathbf{x},t)\big)\Big), & h = k, \ldots, L-1, \\ \mathrm{diag}\Big(\dot{\sigma}\big(f^{(h)}(\mathbf{x})\big)\Big), & h = 1, \ldots, k-1. \end{cases}$$
Using the above definitions, we have:
$$\Big\langle\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial \mathbf{W}^{(h)}}, \frac{\partial f\big(\theta,(\mathbf{x}',t')\big)}{\partial \mathbf{W}^{(h)}}\Big\rangle = \Big\langle \mathbf{z}^{(h)}(\mathbf{x},t)\big(g^{(h-1)}(\mathbf{x},t)\big)^{\top}, \mathbf{z}^{(h)}(\mathbf{x}',t')\big(g^{(h-1)}(\mathbf{x}',t')\big)^{\top}\Big\rangle = \big\langle g^{(h-1)}(\mathbf{x},t), g^{(h-1)}(\mathbf{x}',t')\big\rangle\cdot\big\langle \mathbf{z}^{(h)}(\mathbf{x},t), \mathbf{z}^{(h)}(\mathbf{x}',t')\big\rangle.$$
We have established in Section 4.2 that
$$\big\langle g^{(h-1)}(\mathbf{x},t), g^{(h-1)}(\mathbf{x}',t')\big\rangle \to \Sigma_T^{(h-1)}\big((\mathbf{x},t),(\mathbf{x}',t')\big),$$
where
$$\Sigma_T^{(h)}\big((\mathbf{x},t),(\mathbf{x}',t')\big) = \begin{cases} \Sigma^{(h)}(\mathbf{x},\mathbf{x}'), & h = 1, \ldots, k-1, \\ \Sigma^{(h)}(\mathbf{x},\mathbf{x}')\cdot K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big), & h = k, \\ \mathbb{E}_{f\sim N(0,\Sigma_T^{(h-1)})}\big[\sigma(f(\mathbf{x},t))\sigma(f(\mathbf{x}',t'))\big], & h = k+1, \ldots, L. \end{cases} \quad (A.22)$$
By the definition of z^{(h)}, we get
$$\begin{aligned}
\big\langle \mathbf{z}^{(h)}(\mathbf{x},t), \mathbf{z}^{(h)}(\mathbf{x}',t')\big\rangle &= \frac{1}{d_h}\Big\langle \mathbf{D}^{(h)}(\mathbf{x},t)\big(\mathbf{W}^{(h+1)}\big)^{\top}\mathbf{z}^{(h+1)}(\mathbf{x},t),\ \mathbf{D}^{(h)}(\mathbf{x}',t')\big(\mathbf{W}^{(h+1)}\big)^{\top}\mathbf{z}^{(h+1)}(\mathbf{x}',t')\Big\rangle \\
&\approx \frac{1}{d_h}\Big\langle \mathbf{D}^{(h)}(\mathbf{x},t)\big(\mathbf{W}^{(h+1)}\big)^{\top}\mathbf{z}^{(h+1)}(\mathbf{x},t),\ \mathbf{D}^{(h)}(\mathbf{x}',t')\big(\widetilde{\mathbf{W}}^{(h+1)}\big)^{\top}\mathbf{z}^{(h+1)}(\mathbf{x}',t')\Big\rangle \\
&\to \frac{1}{d_h}\mathrm{tr}\Big(\mathbf{D}^{(h)}(\mathbf{x},t)\mathbf{D}^{(h)}(\mathbf{x}',t')\Big)\big\langle \mathbf{z}^{(h+1)}(\mathbf{x},t), \mathbf{z}^{(h+1)}(\mathbf{x}',t')\big\rangle \\
&\to \dot{\Sigma}_T^{(h)}\big((\mathbf{x},t),(\mathbf{x}',t')\big)\big\langle \mathbf{z}^{(h+1)}(\mathbf{x},t), \mathbf{z}^{(h+1)}(\mathbf{x}',t')\big\rangle.
\end{aligned} \quad (A.23)$$
The approximation in the second line replaces the W^{(h+1)} in the right half by an i.i.d. copy under Gaussian initialization. This does not change the limit as d_h → ∞ when the activation functions have polynomially bounded weak derivatives (Yang, 2019), such as the ReLU activation. Carrying out (A.23) recursively, we see that
$$\big\langle \mathbf{z}^{(h)}(\mathbf{x},t), \mathbf{z}^{(h)}(\mathbf{x}',t')\big\rangle \to \prod_{j=h}^{L-1}\dot{\Sigma}_T^{(j)}\big((\mathbf{x},t),(\mathbf{x}',t')\big).$$
Finally, we have:
$$\Big\langle\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial\theta}, \frac{\partial f\big(\theta,(\mathbf{x}',t')\big)}{\partial\theta}\Big\rangle = \sum_{h=1}^{L}\Big\langle\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial \mathbf{W}^{(h)}}, \frac{\partial f\big(\theta,(\mathbf{x}',t')\big)}{\partial \mathbf{W}^{(h)}}\Big\rangle = \sum_{h=1}^{L}\Big(\Sigma_T^{(h)}\big((\mathbf{x},t),(\mathbf{x}',t')\big)\cdot\prod_{j=h}^{L}\dot{\Sigma}_T^{(j)}\big((\mathbf{x},t),(\mathbf{x}',t')\big)\Big). \quad (A.24)$$
Notice that we use a more compact recursive formulation to state the results in Proposition A.1. It is easy to verify that after expansion we reach the desired result.
Compared with the original NTK before composing with the temporal kernel (given by (A.4)), the results in Proposition A.1 share a similar recursion structure. As a consequence, the previous results for the NTK can be directly adapted to our setting. We list two examples here.
• Following Jacot et al. (2018), given a training dataset $\{\mathbf{x}_i, t_i, y_i(t_i)\}_{i=1}^n$, let $f_T\big(\theta(s)\big) = \big(f(\theta(s), \mathbf{x}_1, t_1), \ldots, f(\theta(s), \mathbf{x}_n, t_n)\big)$ be the network outputs at the sth training step and $\mathbf{y}_T = \big(y_1(t_1), \ldots, y_n(t_n)\big)$. The analysis of the optimization trajectory under an infinitesimal learning rate can be conducted via:
$$\frac{df_T\big(\theta(s)\big)}{ds} = -\boldsymbol{\Theta}_T(s)\times\big(f_T\big(\theta(s)\big) - \mathbf{y}_T\big),$$
where $\boldsymbol{\Theta}_T(s)$ converges almost surely to the NTK $\Theta_T^{(L)}$ of Proposition A.1 (a toy numerical sketch follows this list).
• Following Allen-Zhu et al. (2019) and Arora et al. (2019b), the generalization performance of the composed time-aware neural network can be explicitly characterized according to the properties of $\Theta_T^{(L)}$.
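As a toy illustration of the first bullet (all data hypothetical), the linear ODE with a fixed kernel has the closed-form solution $f(s) = \mathbf{y}_T + e^{-s\Theta_T}\big(f(0) - \mathbf{y}_T\big)$, so the training residual decays along the eigendirections of the NTK:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = rng.normal(size=(n, n))
Theta_T = A @ A.T / n + 1e-3 * np.eye(n)   # PSD stand-in for the limiting NTK
y = rng.normal(size=n)                     # targets y_T (hypothetical)
f0 = rng.normal(size=n)                    # network outputs at initialization

evals, evecs = np.linalg.eigh(Theta_T)

def f_at(s):
    """Closed-form solution of df/ds = -Theta_T (f - y)."""
    decay = evecs @ np.diag(np.exp(-s * evals)) @ evecs.T
    return y + decay @ (f0 - y)

for s in [0.0, 1.0, 10.0, 100.0]:
    print(f"s = {s:6.1f}  residual = {np.linalg.norm(f_at(s) - y):.4f}")
```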
A.3.4 PROOF FOR THEOREM 1
Proof. We first present a technical lemma that is crucial for establishing the duality result under the distributional constraint df (SθT ‖S) ≤ δ. Recall that the hidden dimension for the kth layer is dk.
Lemma A.2 (Ben-Tal et al. (2013)). Let f be any closed convex function with domain [0, +∞), whose conjugate is given by $f^*(s) = \sup_{t\ge 0}\{ts - f(t)\}$. Then for any distribution S and any function $g: \mathbb{R}^{d_k+1}\to\mathbb{R}$, we have
$$\sup_{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta}\int g(\boldsymbol{\omega})dS_{\theta_T}(\boldsymbol{\omega}) = \inf_{\lambda\ge 0,\eta}\Big\{\lambda\int f^*\Big(\frac{g(\boldsymbol{\omega}) - \eta}{\lambda}\Big)dS(\boldsymbol{\omega}) + \delta\lambda + \eta\Big\}. \quad (A.25)$$
We work with a scaled version of the f-divergence under $f(t) = \frac{1}{k}(t^k - 1)$ (because its dual function has a cleaner form), where the constraint set is now equivalent to $\{S_{\theta_T}: d_f(S_{\theta_T}\|S) \le \delta/k\}$. It is easy to check that $f^*(s) = \frac{1}{k'}[s]_+^{k'} + \frac{1}{k}$ with $\frac{1}{k'} + \frac{1}{k} = 1$.
Similar to the proof for Proposition 1, we let z := [x, t] ∈ R^{d+1} and Z = X × T to simplify the notation. To explicitly annotate the dependency of the random Fourier features on Ω, the random variable corresponding to ω, we define $\tilde{\boldsymbol{\phi}}(\mathbf{z},\boldsymbol{\Omega})$ such that $\tilde{\boldsymbol{\phi}}(\mathbf{z},\boldsymbol{\Omega}) = \big[\cos(\mathbf{z}^{\top}\boldsymbol{\Omega}_1) + \cos(\mathbf{z}^{\top}\boldsymbol{\Omega}_2),\ \sin(\mathbf{z}^{\top}\boldsymbol{\Omega}_1) + \sin(\mathbf{z}^{\top}\boldsymbol{\Omega}_2)\big]$, where Ω = [Ω1, Ω2]. Then the approximation error, when replacing the sampled Fourier features φ by the original random variable $\tilde{\boldsymbol{\phi}}(\mathbf{z},\boldsymbol{\Omega})$, is given by:
$$\begin{aligned}
\Delta_n(\boldsymbol{\Omega}) &:= \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega}) - \mathbb{E}\Big[\Sigma^{(k)}(\mathbf{X}_i,\mathbf{X}_j)K_{T,S_{\theta_T}}\big((\mathbf{X}_i,T_i),(\mathbf{X}_j,T_j)\big)\Big] \\
&= \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega}) - \mathbb{E}\big[\Sigma^{(k)}(\mathbf{X},\mathbf{X}')\tilde{\boldsymbol{\phi}}(\mathbf{Z},\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{Z}',\boldsymbol{\Omega})\big].
\end{aligned} \quad (A.26)$$
We first show the sub-Gaussianity of ∆n(Ω). Let $\{\mathbf{x}'_i\}_{i=1}^n$ be an i.i.d. copy of the observations, identical except for one element j such that $\mathbf{x}_j \ne \mathbf{x}'_j$. Without loss of generality, we assume the last element differs, i.e. $\mathbf{x}_n \ne \mathbf{x}'_n$. Let ∆′n(Ω) be computed by replacing x and z with the above x′ and its corresponding z′. Note that
$$\begin{aligned}
|\Delta_n(\boldsymbol{\Omega}) - \Delta'_n(\boldsymbol{\Omega})| &= \frac{1}{n(n-1)}\Big|\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega}) - \Sigma^{(k)}(\mathbf{x}'_i,\mathbf{x}'_j)\tilde{\boldsymbol{\phi}}(\mathbf{z}'_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}'_j,\boldsymbol{\Omega})\Big| \\
&\le \frac{1}{n(n-1)}\Big(\sum_{i<n}\big|\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_n)\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_n,\boldsymbol{\Omega}) - \Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}'_n)\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}'_n,\boldsymbol{\Omega})\big| \\
&\qquad + \sum_{j<n}\big|\Sigma^{(k)}(\mathbf{x}_n,\mathbf{x}_j)\tilde{\boldsymbol{\phi}}(\mathbf{z}_n,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega}) - \Sigma^{(k)}(\mathbf{x}'_n,\mathbf{x}_j)\tilde{\boldsymbol{\phi}}(\mathbf{z}'_n,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega})\big|\Big) \\
&\le \frac{4\max\{1, M\}}{n},
\end{aligned} \quad (A.27)$$
where the last inequality comes from the fact that the random Fourier features $\tilde{\boldsymbol{\phi}}$ are bounded by 1 and the infinity norm of Σ^{(k)} is bounded by M. The above bounded-difference property suggests that ∆n(Ω) is a $\frac{4\max\{1,M\}}{n}$-sub-Gaussian random variable.
To bound ∆n(Ω), we use:
$$\begin{aligned}
\sup_{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta}\Big|\int\Delta_n(\boldsymbol{\Omega})dS_{\theta_T}\Big| &\le \sup_{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta}\int|\Delta_n(\boldsymbol{\Omega})|dS_{\theta_T} \\
&\le \inf_{\lambda\ge 0}\Big\{\frac{\lambda^{1-k'}}{k'}\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^{k'}\big] + \frac{\lambda(\delta+1)}{k}\Big\} \quad \text{(using Lemma A.2)} \\
&= (\delta+1)^{1/k}\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^{k'}\big]^{1/k'} \quad \text{(solving for } \lambda^* \text{ from above)} \\
&= \sqrt{\delta+1}\,\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^2\big]^{1/2} \quad (\text{let } k = k' = 2).
\end{aligned} \quad (A.28)$$
Therefore, to bound $\sup_{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta}\big|\int\Delta_n(\boldsymbol{\Omega})dS_{\theta_T}\big|$ we simply need to bound $\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^2\big]$. Using the classical results for sub-Gaussian random variables (Boucheron et al., 2013), for λ ≤ n/8, we have
$$\mathbb{E}\Big[\exp\big(\lambda\Delta_n(\boldsymbol{\Omega})^2\big)\Big] \le \exp\Big(-\frac{1}{2}\log\big(1 - 8\max\{1, M\}\lambda/n\big)\Big).$$
Then we take the integral over ω:
$$\begin{aligned}
p\Big(\int\Delta_n(\boldsymbol{\omega})^2dS(\boldsymbol{\omega}) \ge \frac{\epsilon^2}{\delta+1}\Big) &\le \mathbb{E}\Big[\exp\Big(\lambda\int\Delta_n(\boldsymbol{\omega})^2dS(\boldsymbol{\omega})\Big)\Big]\exp\Big(-\frac{\lambda\epsilon^2}{\delta+1}\Big) \quad \text{(Chernoff bound)} \\
&\le \exp\Big(-\frac{1}{2}\log\Big(1 - \frac{8\max\{1,M\}\lambda}{n}\Big) - \frac{\lambda\epsilon^2}{\delta+1}\Big) \quad \text{(apply Jensen's inequality)}.
\end{aligned} \quad (A.29)$$
Finally, let the true approximation error be $\hat{\Delta}_n(\boldsymbol{\omega}) = \hat{\Sigma}^{(k)}(S_{\theta_T}) - \Sigma^{(k)}(S_{\theta_T})$. Notice that
$$\big|\hat{\Delta}_n(\boldsymbol{\omega})\big| \le \big|\Delta_n(\boldsymbol{\Omega})\big| + \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\big|\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega}) - \boldsymbol{\phi}(\mathbf{z}_i)^{\top}\boldsymbol{\phi}(\mathbf{z}_j)\big|.$$
From (A.28) and (A.29), we are able to bound $\sup_{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta}\Delta_n(\boldsymbol{\Omega})$. For the second term, recall from Proposition 1 that we have shown the stochastic uniform convergence bound for $\big|\tilde{\boldsymbol{\phi}}(\mathbf{z}_i,\boldsymbol{\Omega})^{\top}\tilde{\boldsymbol{\phi}}(\mathbf{z}_j,\boldsymbol{\Omega}) - \boldsymbol{\phi}(\mathbf{z}_i)^{\top}\boldsymbol{\phi}(\mathbf{z}_j)\big|$ under any distribution $S_{\theta_T}$. The desired bound for $p\big(\sup_{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta}|\hat{\Delta}_n(\boldsymbol{\omega})| \ge \epsilon\big)$ is obtained after combining all the above results.
A.3.5 REPARAMETRIZATION WITH INVERTIBLE NEURAL NETWORK
In this part, we discuss the idea of constructing and sampling from an arbitrarily complex distribution from a known auxiliary distribution by a sequence of invertible transformations. Given an auxiliary random variable z following some known distribution q(z), suppose another random variable x is constructed via a one-to-one mapping.

1. What is the main contribution of the paper in terms of adapting neural networks to continuous-time data?
2. What are the strengths of the paper regarding its clarity, illustration, and connection to existing works?
3. What are the weaknesses of the paper, particularly in terms of its reliance on well-known concepts and potential confusion for some readers?
4. How does the reviewer suggest improving the paper, such as adding diagrams and clarifying the difference from existing literature?

Review
This article proposes a methodology to adapt NNs to continuous-time data through the use of a (temporal) reproducing kernel. I enjoyed reading the paper; the message is clear and illustrative, and the connection with other existing works is to the point. Although I am unfortunately unable to assess the theoretical novelty of the paper (I am unaware of the details of the state of the art in the subject), the contribution of the paper relates to the study of a kernel, given by an ODE, attached to the input of the NN. This kernel is also represented using Fourier feature expansions.
Though the paper heavily relies on well-known concepts (standard NN, GPs, Fourier features), I see that it has a contribution.
I suggest the following amendments:
- For some readers, the general proposed architecture might be confusing. Perhaps a diagram (similar to that in Fig A.1) would be useful in the first pages of the paper. How does the kernel turn the continuous-time data into an NN-ready form?
- Much useful material is relegated to the appendix; if key results, scope and more are only in the appendix, they might not receive the deserved attention.
- Please better clarify how different your work is from the existing literature: NTK, deep kernel learning, neural ODEs, etc.
ICLR | Title
A Temporal Kernel Approach for Deep Learning with Continuous-time Information
Abstract
Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with the existing deep learning architectures and implementations. In this paper, we provide a principled way to characterize continuous-time systems using deep learning tools. Notably, the proposed approach applies to all the major deep learning architectures and requires little modifications to the implementation. The critical insight is to represent the continuous-time system by composing neural networks with a temporal kernel, where we gain our intuition from the recent advancements in understanding deep learning with Gaussian process and neural tangent kernel. To represent the temporal kernel, we introduce the random feature approach and convert the kernel learning problem to spectral density estimation under reparameterization. We further prove the convergence and consistency results even when the temporal kernel is non-stationary, and the spectral density is misspecified. The simulations and real-data experiments demonstrate the empirical effectiveness of our temporal kernel approach in a broad range of settings.
1 INTRODUCTION
Deep learning models have achieved remarkable performances in sequence learning tasks leveraging the powerful building blocks from recurrent neural networks (RNN) (Mikolov et al., 2010), long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), causal convolution neural network (CausalCNN/WaveNet) (Oord et al., 2016) and attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017). Their applicability to continuous-time data, on the other hand, is less explored due to the complication of incorporating time when the sequence is irregularly sampled (spaced). The widely-adopted workaround is to study the discretized counterpart instead, e.g. the temporal data is aggregated into bins and then treated as equally-spaced, with the hope to approximate the temporal signal using the sequence information. It is perhaps without surprise, as we show in Claim 1, that even for regular temporal sequences the discretization modifies the spectral structure. The gap can only be amplified for irregular data, so discretizing the temporal information will almost always
∗The work was done when the author was with Walmart Labs.
introduce intractable noise and perturbations, which emphasizes the importance of characterizing the continuous-time information directly. Previous efforts to incorporate temporal information for deep learning include concatenating the time or timespan to the feature vector (Choi et al., 2016; Lipton et al., 2016; Li et al., 2017b), learning the generative model of time series as missing data problems (Soleimani et al., 2017; Futoma et al., 2017), characterizing the representation of time (Xu et al., 2019; 2020; Du et al., 2016) and using neural point processes (Mei & Eisner, 2017; Li et al., 2018). While they provide different tools to expand neural networks to cope with time, the underlying continuous-time system and process are only involved implicitly. As a consequence, it remains unknown in what way and to what extent the continuous-time signals interact with the original deep learning model. Explicitly characterizing the continuous-time system (via differential equations), on the other hand, is the major pursuit of classical signal processing methods such as smoothing and filtering (Doucet & Johansen, 2009; Särkkä, 2013). The lack of connections is partly due to the compatibility issues between the signal processing methods and the auto-differential gradient computation framework of modern deep learning. Generally speaking, for continuous-time systems, model learning and parameter estimation often rely on the more complicated differential equation solvers (Raissi & Karniadakis, 2018; Raissi et al., 2018a). Although the intersection of neural networks and differential equations is gaining popularity in recent years, the combined neural differential methods often require involved modifications to both the modelling part and the implementation details (Chen et al., 2018; Baydin et al., 2017).
Inspired by the recent advancement in understanding neural networks with Gaussian process and the neural tangent kernel (Yang, 2019; Jacot et al., 2018), we discover a natural connection between the continuous-time system and the neural Gaussian process after composing with a temporal kernel. The significance of the temporal kernel is that it fills in the gap between signal processing and deep learning: we can explicitly characterize the continuous-time systems while maintaining the usual deep learning architectures and optimization procedures. While the kernel composition is also known for integrating signals from various domains (Shawe-Taylor et al., 2004), we face the additional complication of characterizing and learning the unknown temporal kernel in a data-adaptive fashion. Unlike the existing kernel learning methods where at least the parametric form of the kernel is given (Wilson et al., 2016), we have little context on the temporal kernel, and aggressively assuming the parametric form will risk altering the temporal structures implicitly, just like discretization. Instead, we leverage Bochner's theorem and its extension (Bochner, 1948; Yaglom, 1987) to first convert the kernel learning problem to the more reasonable spectral domain, where we can directly characterize the spectral properties with random (Fourier) features. Representing the temporal kernel by random features is favorable, as we show they preserve the existing Gaussian process and NTK properties of neural networks. This is desired from the deep learning perspective since our approach will not violate the current understandings of deep learning. Then we apply the reparametrization trick (Kingma & Welling, 2013), which is a standard tool for generative models and Bayesian deep learning, to jointly optimize the spectral density estimator. Furthermore, we provide theoretical guarantees for the random-feature-based kernel learning approach when the temporal kernel is non-stationary, and the spectral density estimator is misspecified. These two scenarios are essential for practical usage but have not been studied in the previous literature. Finally, we conduct simulations and experiments on real-world continuous-time sequence data to show the effectiveness of the temporal kernel approach, which significantly improves the performance of both standard neural architectures and complicated domain-specific models. We summarize our contributions as follows.
• We study a novel connection between the continuous-time system and neural network via the composition with a temporal kernel.
• We propose an efficient kernel learning method based on random feature representation, spectral density estimation and reparameterization, and provide strong theoretical guarantees when the kernel is nonstationary and the spectral density is misspecified.
• We analyze the empirical performance of our temporal kernel approach for both the standard and domain-specific deep learning models through real-data simulation and experiments.
2 NOTATIONS AND BACKGROUND
We use bold-font letters to denote vectors and matrices. We use xt and (x, t) interchangeably to denote a time-sensitive event occurred at time t, with t ∈ T ≡ [0, tmax]. Neural networks are denoted
by f(θ, x), where x ∈ X ⊂ R^d is the input with diameter(X) ≤ l, and the network parameters θ are sampled i.i.d. from the standard normal distribution at initialization. Without loss of generality, we study the standard L-layer feedforward neural network with its output at the hth hidden layer given by f^{(h)} ∈ R^{d_h}. We use ε and ε(t) to denote Gaussian noise and a continuous-time Gaussian noise process. By convention, we use ⊗ and ◦ to represent the tensor and outer product.
2.1 UNDERSTANDING THE STANDARD NEURAL NETWORK
We follow the settings from Jacot et al. (2018); Yang (2019) to briefly illustrate the limiting Gaussian behavior of f(θ, x) at initialization, and its training trajectory under weak optimization. As d1, . . . , dL → ∞, the f^{(h)} tend in law to i.i.d. Gaussian processes with covariance Σ^{(h)}: f^{(h)} ∼ N(0, Σ^{(h)}), which we refer to as the neural network kernel to distinguish it from the other kernel notions. Also, given a training dataset $\{\mathbf{x}_i, y_i\}_{i=1}^n$, let $f\big(\theta(s)\big) = \big(f(\theta(s),\mathbf{x}_1), \ldots, f(\theta(s),\mathbf{x}_n)\big)$ be the network outputs at the sth training step and y = (y1, . . . , yn). Using the squared loss for example, when training with an infinitesimal learning rate, the outputs follow:
$$\frac{df\big(\theta(s)\big)}{ds} = -\boldsymbol{\Theta}(s)\times\big(f\big(\theta(s)\big) - \mathbf{y}\big),$$
where Θ(s) is the neural tangent kernel (NTK). The detailed formulations of Σ^{(h)} and Θ(s) are provided in Appendix A.2. We introduce the two concepts here because:
1. instead of incorporating time into f(θ, x), which is then subject to its specific structure, can we alternatively consider a universal approach that expands Σ^{(h)} to the temporal domain, such as by composing it with a time-aware kernel?
2. When jointly optimizing the unknown temporal kernel and the model parameters, how can we preserve the results on the training trajectory with the NTK?
In our paper, we show that both goals are achieved by representing a temporal kernel via random features.
2.2 DIFFERENCE BETWEEN CONTINUOUS-TIME AND ITS DISCRETIZATION
We now discuss the gap between continuous-time process and its equally-spaced discretization. We study the simple univariate continuous-time system f(t):
$$\frac{d^2f(t)}{dt^2} + a_0\frac{df(t)}{dt} + a_1f(t) = b_0\epsilon(t). \quad (1)$$
A discretization with a fixed interval is then given by: f[i] = f(i × interval) for i = 1, 2, . . .. Notice that f(t) is a second-order autoregressive process, so both f(t) and f[i] are stationary. Recall that the covariance function for a stationary process is given by $k(t) := \mathrm{cov}\big(f(t_0), f(t_0+t)\big)$, and the spectral density function (SDF) is defined as $s(\omega) = \int_{-\infty}^{\infty}\exp(-i\omega t)k(t)dt$.
Claim 1. The spectral density function for f(t) and f[i] are different.
The proof is relegated to Appendix A.2.2. The key takeaway from the example is that the spectral density function, which characterizes the signal on the frequency domain, is altered implicitly even by regular discretization in this simple case. Hence, we should be cautious about the potential impact of the modelling assumption, which eventually motivates us to explicitly model the spectral distribution.
3 METHODOLOGY
We first explain our intuition using the above example. If we take the Fourier transform of (1) and rearrange terms, it becomes: $\tilde{f}(i\omega) = \Big(\frac{b_0}{(i\omega)^2 + a_0(i\omega) + a_1}\Big)\tilde{\epsilon}(i\omega)$, where $\tilde{f}(i\omega)$ and $\tilde{\epsilon}(i\omega)$ are the Fourier transforms of f(t) and ε(t). Note that the spectral density of a Gaussian noise process is constant, i.e. $|\tilde{\epsilon}(i\omega)|^2 = p_0$, so the spectral density of f(t) is given by: $s_{\theta_T}(\omega) = p_0\big|b_0/\big((i\omega)^2 + a_0(i\omega) + a_1\big)\big|^2$, where we use θT = [a0, a1, b0] to denote the parameters of the linear dynamic system defined in (1). The subscript T is added to distinguish them from the parameters of the neural network. The classical Wiener-Khinchin theorem (Wiener et al., 1930) states that the
covariance function of f(t), which is a Gaussian process since the linear differential equation is a linear operation on ε(t), is given by the inverse Fourier transform of the spectral density:
$$K_T(t, t') := k_{\theta_T}(t' - t) = \frac{1}{2\pi}\int s_{\theta_T}(\omega)\exp\big(i\omega(t'-t)\big)d\omega. \quad (2)$$
We defer the discussions on the inverse direction, that given a kernel kθT(t′ − t) we can also construct a continuous-time system, to Appendix A.3.1. Consequently, there is a correspondence between the parameterization of a stochastic ODE and the kernel of a Gaussian process. The mapping is not necessarily one-to-one; however, it may lead to a more convenient way to parameterize a continuous-time process alternatively using deep learning models, especially knowing the connections between neural networks and Gaussian processes (which we highlighted in Section 2.1).
To connect the neural network kernel Σ(h) (e.g. for the hth layer of the FFN) to a continuous-time system, the critical step is to understand the interplay between the neural network kernel and the temporal kernel (e.g. the kernel in (2)):
• the neural network kernel characterizes the covariance structures among the hidden representation of data (transformed by the neural network) at any fixed time point;
• the temporal kernel, which corresponds to some continuous-time system, tells how each static neural network kernel propagates forward in time. See Figure 1a for a visual illustration.
Continuing with the example in Section 2.2, it is straightforward to construct the integrated continuous-time system as:
$$a_2(\mathbf{x})\frac{d^2f(\mathbf{x},t)}{dt^2} + a_1(\mathbf{x})\frac{df(\mathbf{x},t)}{dt} + a_0(\mathbf{x})f(\mathbf{x},t) = b_0(\mathbf{x})\epsilon(\mathbf{x},t), \quad \epsilon(\mathbf{x}, t=t_0) \sim N(0, \Sigma^{(h)}),\ \forall t_0\in\mathcal{T}, \quad (3)$$
where we use the neural network kernel Σ^{(h)} to define the Gaussian process ε(x, t) on the feature dimension, so the ODE parameters are now functions of the data as well. To see that (3) generalizes the hth layer of an FFN to the temporal domain, we first consider a2(x) = a1(x) = 0 and a0(x) = b0(x). Then the continuous-time process f(x, t) exactly follows f^{(h)} at any fixed time point t, and its trajectory on the time axis is simply a Gaussian process. When a1(x), a2(x) ≠ 0, f(x, t) still matches f^{(h)} at the initial point, but its propagation on the time axis becomes nontrivial and is now characterized by the constructed continuous-time system. We can easily extend the setting to incorporate higher-order terms:
$$a_n(\mathbf{x})\frac{d^nf(\mathbf{x},t)}{dt^n} + \cdots + a_0(\mathbf{x})f(\mathbf{x},t) = b_m(\mathbf{x})\frac{d^m\epsilon(\mathbf{x},t)}{dt^m} + \cdots + b_0(\mathbf{x})\epsilon(\mathbf{x},t). \quad (4)$$
Keeping the heuristics in mind, an immediate question is: what is the structure of the corresponding kernel function after we combine the continuous-time system with the neural network kernel?
Claim 2. The kernel function for f(x, t) in (4) is given by: $\Sigma_T^{(h)}(\mathbf{x},t;\mathbf{x}',t') = k_{\theta_T}(\mathbf{x},t;\mathbf{x}',t')\cdot\Sigma^{(h)}(\mathbf{x},\mathbf{x}')$, where θT is the underlying parameterization of $\{a_i(\cdot)\}_{i=1}^n$ and $\{b_i(\cdot)\}_{i=1}^m$ as functions of x. When $\{a_i\}_{i=1}^n$ and $\{b_i\}_{i=1}^m$ are scalars, $\Sigma_T^{(h)}(\mathbf{x},t;\mathbf{x}',t') = k_{\theta_T}(t,t')\cdot\Sigma^{(h)}(\mathbf{x},\mathbf{x}')$.
We defer the proof and the discussion on the inverse direction, from temporal kernel to continuous-time system, to Appendix A.3.1. Claim 2 shows that it is possible to expand any layer of a standard neural network to the temporal domain, as part of a continuous-time system, using kernel composition. The composition is flexible and can happen at any hidden layer. In particular, given the temporal kernel KT and the neural network kernel Σ^{(h)}, we obtain the neural-temporal kernel on X × T: $\Sigma_T^{(h)} = \mathrm{diag}\big(\Sigma^{(h)}\otimes K_T\big)$, where diag(·) is the partial diagonalization operation on X:
$$\Sigma_T^{(h)}(\mathbf{x},t;\mathbf{x}',t') = \Sigma^{(h)}(\mathbf{x},\mathbf{x}')\cdot K_T(\mathbf{x},t;\mathbf{x}',t'). \quad (5)$$
The above argument shows that instead of taking care of both the deep learning model and the continuous-time system, which remains challenging for general architectures, we can convert the problem to finding a suitable temporal kernel. We further point out that when using neural networks, we are parameterizing the hidden representation (feature lift) in the feature space rather than the kernel function in the kernel space. Therefore, to give a consistent characterization, we should also study the feature representation of the temporal kernel and then combine it with the hidden representations of the neural network.
3.1 THE RANDOM FEATURE REPRESENTATION FOR TEMPORAL KERNEL
We start by considering the simpler case where the temporal kernel is stationary and independent of features: KT (t, t′) = k(t′ − t), for some properly scaled positive even function k(·). The classical Bochner’s theorem (Bochner, 1948) states that:
$$\psi(t' - t) = \int_{\mathbb{R}} e^{-i(t'-t)\omega}ds(\omega), \text{ for some probability density function } s \text{ on } \mathbb{R}, \quad (6)$$
where s(·) is the spectral density function we highlighted in Section 2.2. To compute the integral, we may sample (ω1, . . . , ωm) from s(ω) and use the Monte Carlo method: $\psi(t'-t) \approx \frac{1}{m}\sum_{i=1}^m e^{-i(t'-t)\omega_i}$. Since $e^{-i(t'-t)\omega} = \cos\big((t'-t)\omega\big) - i\sin\big((t'-t)\omega\big)$, for the real part, we let:
$$\boldsymbol{\phi}(t) = \frac{1}{\sqrt{m}}\big[\cos(t\omega_1), \sin(t\omega_1), \ldots, \cos(t\omega_m), \sin(t\omega_m)\big], \quad (7)$$
and it is easy to check that ψ(t′ − t) ≈ ⟨φ(t), φ(t′)⟩. Since φ(t) is constructed from random samples, we refer to it as the random feature representation of KT. Random features have been extensively studied in the kernel machine literature; here, however, we propose a novel application of random features: parameterizing an unknown kernel function. A straightforward idea is to parameterize the spectral density function s(ω), whose pivotal role has been highlighted in Section 2.2 and Example 1.
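A minimal sketch of (6)–(7), assuming for illustration that s is the standard normal density, in which case the resulting kernel has the closed form exp(−(t′ − t)²/2):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 5000
omega = rng.normal(size=m)             # samples from s(omega)

def phi(t):
    """Random feature representation of (7)."""
    feats = np.concatenate([np.cos(t * omega), np.sin(t * omega)])
    return feats / np.sqrt(m)

t1, t2 = 0.3, 1.1
print(phi(t1) @ phi(t2), np.exp(-0.5 * (t2 - t1) ** 2))  # close for large m
```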
Suppose θT are the distribution parameters for s(ω). Then $\boldsymbol{\phi}_{\theta_T}(t)$ is also (implicitly) parameterized by θT through the samples $\{\omega_i(\theta_T)\}_{i=1}^m$ from s(ω). The idea resembles the reparameterization trick for training variational objectives (Kingma & Welling, 2013), which we formalize in the next section. For now, it remains unknown whether we can also obtain the random feature representation for nonstationary kernels, where Bochner's theorem is not applicable. Note that for a general temporal kernel KT(x, t; x′, t′), in practice it is not reasonable to assume stationarity, especially on the feature domain. In Proposition 1, we provide a rigorous result that generalizes the random feature representation to nonstationary kernels with a convergence guarantee.
Proposition 1. For any (scaled) continuous non-stationary PDS kernel KT on X × T, there exists a joint probability measure with spectral density function s(ω1, ω2) such that $K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big) = \mathbb{E}_{s(\boldsymbol{\omega}_1,\boldsymbol{\omega}_2)}\big[\boldsymbol{\phi}(\mathbf{x},t)^{\top}\boldsymbol{\phi}(\mathbf{x}',t')\big]$, where φ(x, t) is given by:
$$\frac{1}{2\sqrt{m}}\Big[\ldots, \cos\big([\mathbf{x},t]^{\top}\boldsymbol{\omega}_{1,i}\big) + \cos\big([\mathbf{x},t]^{\top}\boldsymbol{\omega}_{2,i}\big), \sin\big([\mathbf{x},t]^{\top}\boldsymbol{\omega}_{1,i}\big) + \sin\big([\mathbf{x},t]^{\top}\boldsymbol{\omega}_{2,i}\big), \ldots\Big]. \quad (8)$$
Here, $\{(\boldsymbol{\omega}_{1,i},\boldsymbol{\omega}_{2,i})\}_{i=1}^m$ are the m samples from s(ω1, ω2). When the sample size $m \ge \frac{8(d+1)}{\varepsilon^2}\log\Big(C(d)\big(l^2t_{\max}^2\sigma_p/\varepsilon\big)^{\frac{2d+2}{d+3}}/\delta\Big)$, with probability at least 1 − δ, for any ε > 0,
$$\sup_{(\mathbf{x},t),(\mathbf{x}',t')}\Big|K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big) - \boldsymbol{\phi}(\mathbf{x},t)^{\top}\boldsymbol{\phi}(\mathbf{x}',t')\Big| \le \varepsilon, \quad (9)$$
where $\sigma_p^2$ is the second moment of the spectral density function s(ω1, ω2) and C(d) is a constant.
We defer the proof to Appendix A.3.2. It is obvious that the new random feature representation in (8) is a generalization of the stationary setting. There are two advantages of using the random feature representation:
• the composition in the kernel space suggested by (5) is equivalent to the computationally efficient operation f (h)(x) ◦ φ(x, t) in the feature space (Shawe-Taylor et al., 2004);
• we preserve a similar Gaussian process behavior and the neural tangent kernel results that we discussed in Section 2.1, and we defer the discussion and proof to Appendix A.3.3.
In the forward-passing computations, we simply replace the original hidden representation f^{(h)}(x) by the time-aware representation f^{(h)}(x) ◦ φ(x, t). Also, the existing methods and results on analyzing neural networks through Gaussian processes and NTK, though not emphasized in this paper, can be directly carried over to the temporal setting as well (see Appendix A.3.3).
3.2 REPARAMETERIZATION WITH THE SPECTRAL DENSITY FUNCTION
We now present the gradient computation for the parameters of the spectral distribution using only their samples. We start from the well-studied case where s(ω) is given by a normal distribution N(µ, Σ) with parameters θT = [µ, Σ]. When computing the gradients of θT, instead of sampling from the intractable distribution s(ω), we reparameterize each sample ωi via: $\boldsymbol{\omega}_i = \boldsymbol{\Sigma}^{1/2}\boldsymbol{\epsilon}_i + \boldsymbol{\mu}$, where εi is sampled from a standard multivariate normal distribution. The gradient computations that relied on ω are now carried out through the easy-to-sample ε, and θT = [µ, Σ] become tractable parameters in the model given ε. We illustrate the reparameterization in our setting in the following example. Example 1. Consider a single-dimension homogeneous linear model: f(θ, x) = f^{(0)}(x) = θx. Without loss of generality, we use only a single sample ω1 from s(ω), which corresponds to the feature-independent temporal kernel kθ(t, t′). Again, we assume s(ω) ∼ N(µ, σ). Then the time-aware hidden representation of this layer for a datapoint (x1, t1) is given by:
$$f^{(0)}_{\theta,\mu,\sigma}(x_1, t_1) = \frac{1}{\sqrt{2}}\big[\theta x_1\cos(t_1\omega_1),\ \theta x_1\sin(t_1\omega_1)\big], \quad \omega_1 \sim N(\mu, \sigma).$$
Using the reparameterization, given a sample ε1 from the standard normal distribution, we have:
$$f^{(0)}_{\theta,\mu,\sigma}(x_1, t_1) = \frac{1}{\sqrt{2}}\Big[\theta x_1\cos\big(t_1(\sigma^{1/2}\epsilon_1 + \mu)\big),\ \theta x_1\sin\big(t_1(\sigma^{1/2}\epsilon_1 + \mu)\big)\Big], \quad (10)$$
so the gradients with respect to all the parameters (θ, µ, σ) can be computed in the usual way.
Despite the computational advantage, the spectral density is now learnt from the data instead of being given, so the convergence result in Proposition 1 does not provide a sample-consistency guarantee. In practice, we may also misspecify the spectral distribution and bring in extra intractable factors.
To provide practical guarantees, we first introduce several notations: let KT(S) be the temporal kernel represented by random features such that $K_T(S) = \mathbb{E}\big[\boldsymbol{\phi}^{\top}\boldsymbol{\phi}\big]$, where the expectation is taken with respect to the data distribution and the random feature vector φ has its samples $\{\boldsymbol{\omega}_i\}_{i=1}^m$ drawn from the spectral distribution S. Without abuse of notation, we use φ ∼ S to denote the dependency of the random feature vector φ of (8) on the spectral distribution S. Given a neural network kernel Σ^{(h)}, the neural-temporal kernel is then denoted by: $\Sigma_T^{(h)}(S) = \Sigma^{(h)}\otimes K_T(S)$. The sample version of $\Sigma_T^{(h)}(S)$ for the dataset $\{(\mathbf{x}_i,t_i)\}_{i=1}^n$ is given by:
$$\hat{\Sigma}_T^{(h)}(S) = \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(h)}(\mathbf{x}_i,\mathbf{x}_j)\boldsymbol{\phi}(\mathbf{x}_i,t_i)^{\top}\boldsymbol{\phi}(\mathbf{x}_j,t_j), \quad \boldsymbol{\phi}\sim S. \quad (11)$$
If the spectral distribution S is fixed and given, then using standard techniques and Theorem 1 it is straightforward to show $\lim_{n\to\infty}\hat{\Sigma}_T^{(h)}(S) \to \mathbb{E}\big[\hat{\Sigma}_T^{(h)}(S)\big]$, so the proposed learning scheme is sample-consistent.
In our case, the spectral distribution is learnt from the data, so we need some restrictions on the spectral distribution in order to obtain any consistency guarantee. The intuition is that if SθT does not diverge from the true S, e.g. d(SθT‖S) ≤ δ for some divergence measure, the guarantee on S can transfer to SθT, with the rate only suffering a discount that does not depend on n.
Theorem 1. Consider the f-divergence such that $d(S_{\theta_T}\|S) = \int c\big(dS_{\theta_T}/dS\big)dS$, with the generator function c(x) = x^k − 1 for any k > 0. Given the neural network kernel Σ^{(h)}, let $M = \|\Sigma^{(h)}\|_\infty$; then
$$\Pr\Big(\sup_{d(S_{\theta_T}\|S)\le\delta}\big|\hat{\Sigma}_T^{(h)}(S_{\theta_T}) - \mathbb{E}\big[\hat{\Sigma}_T^{(h)}(S_{\theta_T})\big]\big| \ge \epsilon\Big) \le \sqrt{2}\exp\Big(\frac{-n\epsilon^2}{64\max\{4,M\}(\delta+1)}\Big) + C(\epsilon), \quad (12)$$
where $C(\epsilon) \propto \big(2l^2t_{\max}^2\sigma_{S_{\theta_T}}/(\epsilon\max\{4,M\})\big)^{\frac{2d+2}{d+3}}\exp\Big(-\frac{d_h\epsilon^2}{32\max\{16,M^2\}(d+3)}\Big)$ does not depend on δ.
The proof is provided in Appendix A.3.4. The key takeaway from (12) is that as long as the divergence between the learnt SθT and the true spectral distribution is bounded, we still achieve sample consistency. Therefore, instead of specifying a distribution family, which is more likely to suffer from misspecification, we are motivated to employ a universal distribution approximator such as the invertible neural network (INN) (Ardizzone et al., 2018). An INN consists of a series of invertible operations that transform samples from a known auxiliary distribution (such as a normal distribution) to arbitrarily complex distributions. The Jacobians that characterize the changes of distributions are made invertible by the INN, so the gradient flow is computationally tractable, similar to the case in Example 1. We defer the detailed discussions to Appendix A.3.5. Remark 1. It is clear at this point that the temporal kernel approach applies to all neural networks that have a Gaussian process behavior with a valid neural network kernel, which includes the major architectures such as CNN, RNN and the attention mechanism (Yang, 2019).
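As a sketch of this strategy (module names hypothetical; a RealNVP-style coupling block is one of several possible invertible designs), the flow below maps auxiliary Gaussian samples to frequency samples ω = g(ψ, ε) while keeping all gradients tractable:

```python
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """One affine coupling block: invertible given the untouched half."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d // 2, 16), nn.Tanh(),
                                 nn.Linear(16, d - d // 2))
        self.log_scale = nn.Parameter(torch.zeros(d - d // 2))

    def forward(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        z2 = z2 * self.log_scale.exp() + self.net(z1)
        return torch.cat([z2, z1], dim=-1)   # swap halves between blocks

class SpectralINN(nn.Module):
    """g(psi, .): auxiliary samples -> samples from the learned spectral density."""
    def __init__(self, d, n_blocks=2):
        super().__init__()
        self.blocks = nn.ModuleList(Coupling(d) for _ in range(n_blocks))

    def forward(self, eps):
        for b in self.blocks:
            eps = b(eps)
        return eps

g = SpectralINN(d=4)
eps = torch.randn(128, 4)
omega = g(eps)                 # reparameterized frequency samples
loss = omega.pow(2).mean()     # any downstream loss built from omega
loss.backward()                # gradients flow into the INN parameters psi
```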
For implementation, at each forward and backward computation, we first sample from the auxiliary distribution to construct the random feature representation φ using reparameterization, and then compose it with the selected hidden layer f (h) such as in (10). We illustrate the computation architecture in Figure 2, where we adapt the vanilla RNN to the proposed framework. In Algorithm 1, we provide the detailed forward and backward computations, using the L-layer FFN from the previous sections as an example.
4 RELATED WORK
The earliest work that discuss training continuous-time neural network dates back to LeCun et al. (1988); Pearlmutter (1995), but no feasible solution was proposed at that time. The proposed approach relates to several fields that are under active research.
ODE and neural network. Certain neural architecture such as the residual network has been interpreted as approximate ODE solvers (Lu et al., 2018). More direct approaches have been proposed to learn differential equations from data (Raissi & Karniadakis, 2018; Raissi et al., 2018a;
Long et al., 2018), and significant efforts have been spent on developing solvers that combine ODE and the back-propagation framework (Farrell et al., 2013; Carpenter et al., 2015; Chen et al., 2018). The closest literature to our work is from Raissi et al. (2018b), who design numerical Gaussian processes resulting from the temporal discretization of time-dependent partial differential equations.
Random feature and kernel machine learning. In supervised learning, the kernel trick provides a powerful tool to characterize non-linear data representations (Shawe-Taylor et al., 2004), but the computation complexity is overwhelming for large datasets. The random (Fourier) feature approach proposed by Rahimi & Recht (2008) provides substantial computation benefits. The existing analyses of the random feature approach all assume the kernel function is fixed and stationary (Yang et al., 2012; Sutherland & Schneider, 2015; Sriperumbudur & Szabó, 2015; Avron et al., 2017).
Reparameterization and INN. Computing the gradient for intractable objectives using samples from an auxiliary distribution dates back to the policy gradient method in reinforcement learning (Sutton et al., 2000). In recent years, the approach gained popularity for training generative models (Kingma & Welling, 2013), other variational objectives (Blei et al., 2017) and Bayesian neural networks (Snoek et al., 2015). INNs are often employed to parameterize the normalizing flow that transforms a simple distribution into a complex one by applying a sequence of invertible transformation functions (Dinh et al., 2014; Ardizzone et al., 2018; Kingma & Dhariwal, 2018; Dinh et al., 2016).
Our approach characterizes the continuous-time ODE through the lens of kernels. It complements the existing neural ODE methods, which are often restricted to specific architectures, rely on ODE solvers, and lack theoretical understanding. We also propose a novel deep kernel learning approach by parameterizing the spectral distribution under the random feature representation, which is conceptually different from using a temporal kernel for time-series classification (Li & Marlin, 2015). Our work is an extension of Xu et al. (2019; 2020), which study the case of the self-attention mechanism.
Algorithm 1: Forward pass and parameter update, using the L-layer FFN as an example.
Input: The FFN f(θ, ·) = {f^{(1)}(θ, ·), . . . , f^{(L)}(θ, ·)}; the invertible neural network g(ψ, ·); the selected hidden layer h; the loss ℓi associated with each input (xi, ti); the auxiliary distribution P.
for each mini-batch do
    Sample $\{\boldsymbol{\epsilon}_{1,j}, \boldsymbol{\epsilon}_{2,j}\}_{j=1}^m$ from the auxiliary distribution P;
    Compute the reparameterized samples ω using the INN g(ψ, ·), e.g. $\boldsymbol{\omega}_{1,j}(\psi) := g(\psi, \boldsymbol{\epsilon}_{1,j})$;
    for sample i in the batch do
        Construct the random feature representation $\boldsymbol{\phi}_{\psi}(\mathbf{x}_i, t_i)$ using the reparameterized samples (so φ is now explicitly parameterized by ψ) according to eq. (8);
        Forward pass: get f^{(h)}(θ, xi), let $f^{(h)}\big((\theta,\psi), \mathbf{x}_i, t_i\big) := f^{(h)}(\theta,\mathbf{x}_i) \circ \boldsymbol{\phi}_{\psi}(\mathbf{x}_i, t_i)$, then pass it to the following feedforward layers to obtain the final output ŷi;
        Gradient computation: compute the gradients $\nabla_{\theta}\ell_i(\hat{y}_i)\big|_{\boldsymbol{\epsilon}}, \nabla_{\psi}\ell_i(\hat{y}_i)\big|_{\boldsymbol{\epsilon}}$ for the FFN and INN respectively, conditioned on the samples from the auxiliary distribution;
    end
    Update the parameters using the selected optimizer in a standard batch-wise fashion.
end
It is straightforward from Figure 2 and Algorithm 1 that the proposed approach serves as a plug-in module and does not modify the original network structures of the RNN and FFN.
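A condensed, runnable sketch of the plug-in idea from Algorithm 1 (all module names hypothetical; a diagonal Gaussian spectral density stands in for the INN g(ψ, ·) to keep the sketch short):

```python
import torch
import torch.nn as nn

class TemporalFeature(nn.Module):
    """Reparameterized random features phi_psi(x, t) from eq. (8)."""
    def __init__(self, in_dim, m):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(2, m, in_dim + 1))
        self.log_sigma = nn.Parameter(torch.zeros(2, m, in_dim + 1))
        self.m = m

    def forward(self, x, t):
        z = torch.cat([x, t.unsqueeze(-1)], dim=-1)          # [x, t]
        eps = torch.randn_like(self.mu)                      # auxiliary samples
        w = self.mu + self.log_sigma.exp() * eps             # omega_{1,.}, omega_{2,.}
        proj1, proj2 = z @ w[0].T, z @ w[1].T                # [batch, m] each
        feats = torch.cat([torch.cos(proj1) + torch.cos(proj2),
                           torch.sin(proj1) + torch.sin(proj2)], dim=-1)
        return feats / (2 * self.m ** 0.5)

class TimeAwareFFN(nn.Module):
    """One hidden layer composed (outer product) with the temporal features."""
    def __init__(self, d_in, d_h, m):
        super().__init__()
        self.f1 = nn.Linear(d_in, d_h)
        self.phi = TemporalFeature(d_in, m)
        self.out = nn.Linear(d_h * 2 * m, 1)

    def forward(self, x, t):
        h = torch.relu(self.f1(x))                           # f^{(h)}(theta, x)
        ht = torch.einsum('bi,bj->bij', h, self.phi(x, t))   # f^{(h)} outer phi
        return self.out(ht.flatten(1))

model = TimeAwareFFN(d_in=8, d_h=16, m=4)
x, t = torch.randn(5, 8), torch.rand(5)
print(model(x, t).shape)                                     # torch.Size([5, 1])
```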
5 EXPERIMENTS AND RESULTS
We focus on revealing the two major advantages of the proposed temporal kernel approach:
• the temporal kernel approach consistently improves the performance of deep learning models, both for the general architectures such as RNN, CausalCNN and attention mechanism as well as the domain-specific architectures, in the presence of continuous-time information;
• the improvement is not at the cost of computation efficiency and stability, and we outperform the alternative approaches who also applies to general deep learning models.
We point out that the neural point process and the ODE neural networks have only been shown to work for certain model architectures so we are unable to compare with them for all the settings.
Time series prediction with standard neural networks (real-data and simulation)
We conduct the time series prediction task using the vanilla RNN, CausalCNN and self-attention mechanism with our temporal kernel approach (Figure A.1). We choose the classical Jena weather data for temperature prediction, and the Wikipedia traffic data to predict the number of visits of Wikipedia pages. Both datasets have vectorized features and are regularly sampled. To illustrate the advantage of leveraging the temporal information compared with using only sequential information, we first conduct the ordinary next-step prediction on the regular observations, which we refer to as Case1. To fully illustrate our capability of handling irregular continuous-time information, we consider two simulation settings that generate irregular continuous-time sequences for prediction:
Case2. we sample irregularly from the history, i.e. $\mathbf{x}_{t_1}, \ldots, \mathbf{x}_{t_q}$, q ≤ k, to predict $\mathbf{x}_{t_{k+1}}$; Case3. we use the full history to predict a dynamic future point, i.e. $\mathbf{x}_{t_{k+q}}$ for a random q.
We provide the complete data description, preprocessing, and implementation in Appendix B. We use the following two widely-adopted time-aware modifications for neural networks (denote by NN) as baselines, as well as the classical vectorized autoregression model (VAR). NN+time: we directly concatenate the timespan, e.g. tj − ti, to the feature vector. NN+trigo: we concatenate the learnable sine and cosine features, e.g. [sin(π1t), . . . , sin(πkt)], to the feature vector, where {πi}ki=1 are free model parameters. We denote our temporal kernel approach by T-NN. From Figure 3, we see that the temporal kernel outperforms the baselines in all cases when the time series is irregularly sampled (Case2 and Case3), suggesting the effectiveness of the temporal kernel approach in capturing and utilizing the continuous-time signals. Even for the regular Case1 reported in Table A.1, the temporal kernel approach gives the best results, which again emphasizes the advantage of directly characterize the temporal information over discretization. We also show in the ablation studies (Appendix B.5) that INN is necessary for achieving superior performance compared with specifying a distribution family. To demonstrate the stability and robustness, we provide sensitivity analysis in Appendix B.6 for model selection and INN structures.
Temporal sequence learning with complex domain models
Now we study the performance of our temporal kernel approach for the sequential recommendation task with more complicated domain-specific two-tower architectures (Appendix B.2). Temporal information is known to be critical for understanding customer intentions, so we choose the two public e-commerce dataset from Alibaba and Walmart.com, and examine the next-purchase recommendation. To illustrate our flexibility, we select the GRU-based, CNN-based and attention-based recommendation models from the recommender system domain (Hidasi et al., 2015; Li et al., 2017a) and equip them with the temporal kernel. The detailed settings, ablation studies and sensitivity analysis are all in Appendix B. The results are shown in Table A.2. We observe that the temporal kernel approach brings various degrees of improvements to the recommendation models by characterizing
the continuous-time information. The positive results from the recommendation task also suggest the potential of our approach to make an impact in broader domains.
6 DISCUSSION
In this paper, we discuss the insufficiency of existing work on characterizing continuous-time data with deep learning models and describe a principled temporal kernel approach that expands neural networks to characterize continuous-time data. The proposed learning approach has strong theoretical guarantees, and can be easily adapted to a broad range of applications such as deep spatial-temporal modelling, outlier and burst detection, and generative modelling for time series data.
Scope and limitation. Although the temporal kernel approach is motivated by the limiting-width Gaussian behavior of neural networks, in practice it suffices to use regular widths, as we did in our experiments (see Appendix B.2 for the configurations). Therefore, there are still gaps between our theoretical understandings and the observed empirical performance, which require more dedicated analysis. One possible direction is to apply the techniques in Daniely et al. (2016) to characterize the dual kernel view of finite-width neural networks. The technical detail, however, will be more involved. It is also arguably true that we build the connection between the temporal kernel view and the continuous-time system in an indirect fashion, compared with the ODE neural networks. However, our approach is fully compatible with the deep learning subroutines, while the end-to-end ODE neural networks require substantial modifications to the modelling and implementation. Nevertheless, ODE neural networks are (in theory) capable of modelling more complex systems where the continuous-time setting is a special case. Our work, on the other hand, is dedicated to the temporal setting.
A APPENDIX
We provide the omitted proofs, detailed discussions, extensions and complete numerical results.
A.1 NUMERICAL RESULTS FOR SECTION 5
A.2 SUPPLEMENTARY MATERIAL FOR SECTION 2
We discuss the detailed background for the Gaussian process behavior of neural network and the training trajectory under neural tangent kernel, as well as the proof for Claim 1.
A.2.1 GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL FOR DEEP LEARNING MODELS
The Gaussian process (GP) view of neural networks at random initialization was originally discussed in (Neal, 2012). Recently, CNN and other standard neural architectures have all been recognized as functions drawn from GP in the limit of infinite network width (Novak et al., 2018; Yang, 2019). When trained by gradient descent under infinitesimal step schedule, the gradient flow of the standard neural architectures can be described by the notion of Neural Tangent Kernel (NTK) whose asymptotic behavior under infinite network width is known (Jacot et al., 2018). The discovery of NTK has led to several papers studying the training and generalization properties of neural networks (Allen-Zhu et al., 2019; Arora et al., 2019a;b).
For an L-layer FFN f(θ, x) = f^{(L)} with hidden dimensions $\{d_h\}_{h=1}^L$, recursively defined via:
$$f^{(L)} = \mathbf{W}^{(L)}f^{(L-1)}(\mathbf{x}) + \mathbf{b}^{(L)}, \quad f^{(h)}(\mathbf{x}) = \frac{1}{\sqrt{d_h}}\mathbf{W}^{(h)}\sigma\big(f^{(h-1)}(\mathbf{x})\big) + \mathbf{b}^{(h)}, \quad f^{(0)}(\mathbf{x}) = \mathbf{x}, \quad (A.1)$$
for h = 1, 2, . . . , L − 1, where σ(·) is the activation function and the layer weights $\mathbf{W}^{(L)} \in \mathbb{R}^{d_{L-1}}$, $\mathbf{W}^{(h)} \in \mathbb{R}^{d_{h-1}\times d_h}$ and the intercepts are initialized by sampling independently from N(0, 1) (without loss of generality). As d1, . . . , dL → ∞, the f^{(h)} tend in law to i.i.d. Gaussian processes with covariance Σ^{(h)} defined recursively, as shown by Neal (2012):
$$\Sigma^{(1)}(\mathbf{x},\mathbf{x}') = \frac{1}{d_1}\mathbf{x}^{\top}\mathbf{x}' + 1, \qquad \Sigma^{(h)}(\mathbf{x},\mathbf{x}') = \mathbb{E}_{f\sim N(0,\Sigma^{(h-1)})}\big[\sigma\big(f(\mathbf{x})\big)\sigma\big(f(\mathbf{x}')\big)\big] + 1. \quad (A.2)$$
We also refer to Σ(h) as the neural network kernel to distinguish from the other kernel notions. Given a training dataset {xi, yi}ni=1, let f ( θ(s) ) = ( f(θ(s),x1), . . . , f(θ(s),xn) ) be the network outputs at the sth training step and y = (y1, . . . , yn).
When training the network by minimizing the squared loss ℓ(θ) with an infinitesimal learning rate, i.e. $\frac{d\theta(s)}{ds} = -\nabla\ell(\theta(s))$, the network outputs at training step s follow the evolution (Jacot et al., 2018):
$$\frac{df\big(\theta(s)\big)}{ds} = -\boldsymbol{\Theta}(s)\times\big(f\big(\theta(s)\big) - \mathbf{y}\big), \qquad \big[\boldsymbol{\Theta}(s)\big]_{ij} = \Big\langle\frac{\partial f(\theta(s), \mathbf{x}_i)}{\partial\theta}, \frac{\partial f(\theta(s), \mathbf{x}_j)}{\partial\theta}\Big\rangle. \quad (A.3)$$
The above Θ(s) is referred to as the NTK, and recent results show that when the network widths go to infinity (or are sufficiently large), Θ(s) converges to a fixed Θ0 almost surely (or with high probability).
For a standard L-layer FFN, the NTK $\Theta_0 = \Theta_0^{(L)}$ for the parameters {W^{(h)}, b^{(h)}} on the hth layer can also be computed recursively:
$$\Theta_0^{(h)}(\mathbf{x}_i,\mathbf{x}_j) = \Sigma^{(h)}(\mathbf{x}_i,\mathbf{x}_j), \qquad \dot{\Sigma}^{(k)}(\mathbf{x}_i,\mathbf{x}_j) = \mathbb{E}_{f\sim N(0,\Sigma^{(k-1)})}\big[\dot{\sigma}(f(\mathbf{x}_i))\dot{\sigma}(f(\mathbf{x}_j))\big],$$
$$\text{and } \Theta_0^{(k)}(\mathbf{x}_i,\mathbf{x}_j) = \Theta_0^{(k-1)}(\mathbf{x}_i,\mathbf{x}_j)\dot{\Sigma}^{(k)}(\mathbf{x}_i,\mathbf{x}_j) + \Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j), \quad k = h+1, \ldots, L. \quad (A.4)$$
A number of optimization and generalization properties of neural networks can be studied using NTK, which we refer the interested readers to (Lee et al., 2019; Allen-Zhu et al., 2019; Arora et al., 2019a;b). We also point out that the above GP and NTK constructions can be carried out on all standard neural architectures including CNN, RNN and the attention mechanism (Yang, 2019).
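For concreteness, a small Monte Carlo sketch of the recursion (A.2) for ReLU activations on a single (hypothetical) pair of inputs: draw f ∼ N(0, Σ^{(h−1)}) restricted to (x, x′) and average σ(f(x))σ(f(x′)).

```python
import numpy as np

rng = np.random.default_rng(5)
relu = lambda u: np.maximum(u, 0.0)

def next_sigma(S, n_mc=200_000):
    """One step of (A.2) on a pair of inputs, estimated by Monte Carlo."""
    L = np.linalg.cholesky(S + 1e-12 * np.eye(2))
    f = rng.normal(size=(n_mc, 2)) @ L.T
    E = np.mean(relu(f[:, 0]) * relu(f[:, 1])) + 1.0
    V = np.mean(relu(f[:, 0]) ** 2) + 1.0     # diagonal entries
    Vp = np.mean(relu(f[:, 1]) ** 2) + 1.0
    return np.array([[V, E], [E, Vp]])

x, xp = np.array([1.0, 0.0]), np.array([0.6, 0.8])
S = np.array([[x @ x / 2 + 1, x @ xp / 2 + 1],
              [x @ xp / 2 + 1, xp @ xp / 2 + 1]])   # Sigma^{(1)} on the pair
for h in range(2, 5):
    S = next_sigma(S)
    print(h, S[0, 1])   # estimate of Sigma^{(h)}(x, x')
```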
A.2.2 PROOF FOR CLAIM 1
In this part, we denote the continuous-time system by X(t) in order to introduce the full set of notations that are needed for our proof, where the length of the discretized interval is explicitly considered. Note that we especially construct the example in Section 2 so that the derivations are not too cumbersome. However, the techniques that we use here can be extended to prove the more complicated settings.
Proof. Consider X(t) to be the second-order continuous-time autoregressive process with covariance function $k(t)$ and spectral density function (SDF) $s(\omega)$ such that $s(\omega) = \int_{-\infty}^{\infty} \exp(-i\omega t)k(t)\,dt$. The covariance function of the discretization $X_a[n] = X(na)$ with any fixed interval $a > 0$ is then given by $k_a[n] = k(na)$. According to standard results in time series, the SDF of X(t) is given in the form of:

$$s(\omega) = \frac{a_1}{\omega^2 + b_1^2} + \frac{a_2}{\omega^2 + b_2^2}, \qquad a_1 + a_2 = 0, \quad a_1 b_2^2 + a_2 b_1^2 \neq 0. \tag{A.5}$$
We assume without loss of generality that $b_1, b_2$ are positive numbers. Note that the kernel function for $X_a[n]$ can also be given by

$$\begin{aligned} k_a[n] &= \int_{-\infty}^{\infty} \exp(ian\omega)\, s(\omega)\, d\omega \\ &= \frac{1}{a}\sum_{k=-\infty}^{\infty}\int_{(2k-1)\pi}^{(2k+1)\pi} \exp(in\omega)\, s(\omega/a)\, d\omega \\ &= \frac{1}{a}\int_{-\infty}^{\infty}\exp(in\omega)\sum_{k=-\infty}^{\infty} s\Big(\frac{\omega + 2k\pi}{a}\Big)\, d\omega, \end{aligned} \tag{A.6}$$
which suggests that the SDF for the discretization Xa[n] can be given by:
$$\begin{aligned} s_a(\omega) &= \frac{1}{a}\sum_{k=-\infty}^{\infty} s\Big(\frac{\omega + 2k\pi}{a}\Big) = \frac{a_1}{2}\Big(\frac{e^{2ab_1}-1}{b_1\,|e^{ab_1}-e^{i\omega}|^2} - \frac{e^{2ab_2}-1}{b_2\,|e^{ab_2}-e^{i\omega}|^2}\Big) \\ &= \frac{a_1\big(d_1 - 2d_2\cos(\omega)\big)}{2b_1b_2\,\big|(e^{ab_1}-e^{i\omega})(e^{ab_2}-e^{i\omega})\big|^2}, \end{aligned} \tag{A.7}$$

where $d_2 = b_2 e^{ab_2}(e^{2ab_1} - 1) - b_1 e^{ab_1}(e^{2ab_2} - 1)$. By the definition of a discrete-time autoregressive process, $X_a[n]$ is a second-order AR process only if $d_2 = 0$, which happens if and only if $b_2/b_1 = (e^{ab_2} - e^{-ab_2})/(e^{ab_1} - e^{-ab_1})$. However, since the function $g(x) = \exp(ax) - \exp(-ax)$ is convex on $[0,\infty)$ (as the time interval $a > 0$) and $g(0) = 0$, the above equality holds only if $b_1 = b_2$. This contradicts (A.5), since $a_1 + a_2 = 0$ and $a_1b_2^2 + a_2b_1^2 \neq 0$ imply $a_1(b_1 - b_2)^2 \neq 0$. Hence, $X_a[n]$ cannot be a second-order discrete-time autoregressive process.
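As a numerical illustration of the argument above, the following short sketch evaluates the expression for $d_2$ and confirms that it vanishes only when $b_1 = b_2$, so the discretization fails to be a second-order AR process whenever $b_1 \ne b_2$; the particular values of $a, b_1, b_2$ are arbitrary choices.

```python
import numpy as np

def d2(a, b1, b2):
    # d2 from (A.7); it is zero iff b2/b1 = (e^{a b2} - e^{-a b2}) / (e^{a b1} - e^{-a b1}).
    return b2 * np.exp(a * b2) * (np.exp(2 * a * b1) - 1) \
         - b1 * np.exp(a * b1) * (np.exp(2 * a * b2) - 1)

for a in (0.1, 0.5, 1.0):
    print(a, d2(a, b1=1.0, b2=2.0), d2(a, b1=1.5, b2=1.5))  # nonzero vs. exactly zero
```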
A.3 SUPPLEMENTARY MATERIAL FOR SECTION 3
We first present the related discussions and the proof for Claim 2 on the connection between the continuous-time system and the temporal kernel. Then we prove the convergence result in Proposition 1 regarding the random feature representation for non-stationary kernels. In the sequel, we show the new results for the Gaussian process behavior and the neural tangent kernel under random feature representations, and discuss the potential usage of our results. Finally, we prove the sample-consistency result of Theorem 1 for the case where the spectral distribution is misspecified.
A.3.1 PROOF AND DISCUSSIONS FOR CLAIM 2
Proof. Recall that we study the dynamic system given by:

$$a_n(\mathbf{x})\frac{d^n f(\mathbf{x},t)}{dt^n} + \cdots + a_0(\mathbf{x})f(\mathbf{x},t) = b_m(\mathbf{x})\frac{d^m \epsilon(\mathbf{x},t)}{dt^m} + \cdots + b_0\,\epsilon(\mathbf{x},t), \tag{A.8}$$

where $\epsilon(\mathbf{x}, t = t_0) \sim \mathcal{N}(0, \Sigma^{(h)})$, $\forall t_0 \in \mathcal{T}$. The solution process of the above continuous-time system is also a Gaussian process, since $\epsilon(\mathbf{x},t)$ is a Gaussian process and the solution of a linear differential equation is a linear operation on the input. For the sake of notation, we assume $b_0(\mathbf{x}) = 1$ and $b_1(\mathbf{x}) = 0, \ldots, b_m(\mathbf{x}) = 0$, which does not change the arguments in the proof. We apply the Fourier transform on both sides and solve for the Fourier transform $\tilde{f}(i\boldsymbol{\omega}_x, i\omega)$:
$$\tilde{f}(i\boldsymbol{\omega}_x, i\omega) = \Big(\frac{1}{a_n(\mathbf{x})\cdot(i\omega)^n + \cdots + a_1(\mathbf{x})\cdot i\omega + a_0(\mathbf{x})}\Big)\,W(i\omega; \boldsymbol{\omega}_x), \tag{A.9}$$
where $W(i\omega; \boldsymbol{\omega}_x)$ is the Fourier transform of $\epsilon(\mathbf{x}, t)$. If we do not make the above assumption on $\{b_j(\mathbf{x})\}_{j=1}^m$, they simply show up in the numerator in the same fashion as the $\{a_j(\mathbf{x})\}_{j=1}^n$. Let $G_{\theta_T}(i\omega; \mathbf{x}) = a_n(\mathbf{x})\cdot(i\omega)^n + \cdots + a_1(\mathbf{x})\cdot i\omega + a_0(\mathbf{x})$, and let $p(\boldsymbol{\omega}_x) = |W(i\omega; \boldsymbol{\omega}_x)|^2$ be the spectral density of the Gaussian process corresponding to $\epsilon$ (its spectral density does not depend on $\omega$ because $\epsilon$ is a Gaussian white noise process on the time dimension). The dependency of $G(\cdot\,;\cdot)$ on $\theta_T$ arises because we defined $\theta_T$ to be the underlying parameterization of $\{a_j(\cdot)\}_{j=1}^n$ in the statement of Claim 2. Then the spectral density of the process $f(\mathbf{x},t)$ is given by

$$p(\omega, \boldsymbol{\omega}_x) = C\cdot\frac{p(\boldsymbol{\omega}_x)}{|G_{\theta_T}(i\omega; \mathbf{x})|^2} \propto p(\boldsymbol{\omega}_x)\,p_{\theta_T}(\omega; \mathbf{x}),$$

where $C$ is a constant that corresponds to the spectral density of the random Gaussian noise on the time dimension. Notice that the spectral density function obtained this way is regular, since it has the form $p_{\theta_T}(\omega; \mathbf{x}) = \text{constant}/(\text{polynomial of } \omega^2)$.
Therefore, according to the classical Wiener-Khinchin theorem (Brockwell et al., 1991), the covariance function of the solution process is given by the inverse Fourier transform of the spectral density:

$$\begin{aligned} \psi(\mathbf{x}, t) &= \frac{1}{2\pi}\int p(\omega, \boldsymbol{\omega}_x)\exp\big(i[\omega, \boldsymbol{\omega}_x]^\top[t, \mathbf{x}]\big)\, d(\omega, \boldsymbol{\omega}_x) \\ &\propto \int p_{\theta_T}(\omega; \mathbf{x})\exp(i\omega t)\, d\omega \cdot \int p(\boldsymbol{\omega}_x)\exp(i\boldsymbol{\omega}_x^\top \mathbf{x})\, d\boldsymbol{\omega}_x \\ &\propto K_{\theta_T}\big((\mathbf{x},t),(\mathbf{x},t)\big)\cdot \Sigma^{(h)}(\mathbf{x},\mathbf{x}). \end{aligned} \tag{A.10}$$

Therefore we reach the conclusion in Claim 2 by taking $\Sigma^{(h)}_T(\mathbf{x},t;\mathbf{x}',t') = \psi(\mathbf{x}-\mathbf{x}', t-t')$.
The converse of Claim 2 does not always hold, since not every neural-temporal kernel corresponds exactly to a continuous-time system of the form (A.8). However, we may construct a continuous-time system that approximates the kernel arbitrarily well in the following way, using polynomial approximation tools such as the Taylor expansion.
For a neural-temporal kernel $\Sigma^{(h)}_T$, we first compute its Fourier transform to obtain the spectral density $p(\boldsymbol{\omega}_x, \omega)$. Note that $p(\boldsymbol{\omega}_x, \omega)$ should be a rational function of the form (polynomial in $\omega^2$)/(polynomial in $\omega^2$), since otherwise it does not admit a stable spectral factorization that leads to a linear dynamic system. To achieve this form, we can always apply a Taylor expansion or Padé approximants that recover $p(\boldsymbol{\omega}_x, \omega)$ arbitrarily well.
Then we conduct a spectral factorization on $p(\boldsymbol{\omega}_x, \omega)$ to find $G(i\boldsymbol{\omega}_x, i\omega_t)$ and $p(\boldsymbol{\omega}_x)$ such that $p(\boldsymbol{\omega}_x, \omega) = G(i\boldsymbol{\omega}_x, i\omega_t)\,p(\boldsymbol{\omega}_x)\,G(-i\boldsymbol{\omega}_x, -i\omega_t)$. Since $p(\boldsymbol{\omega}_x, \omega)$ is now a rational function of $\omega^2$, we can find $G(i\boldsymbol{\omega}_x, i\omega_t)$ as:

$$G(i\boldsymbol{\omega}_x, i\omega_t) = \frac{b_k(i\boldsymbol{\omega}_x)\cdot(i\omega)^k + \cdots + b_1(i\boldsymbol{\omega}_x)\cdot(i\omega) + b_0(i\boldsymbol{\omega}_x)}{a_q(i\boldsymbol{\omega}_x)\cdot(i\omega)^q + \cdots + a_1(i\boldsymbol{\omega}_x)\cdot(i\omega) + a_0(i\boldsymbol{\omega}_x)}.$$
Let $\alpha_j(\mathbf{x})$ and $\beta_j(\mathbf{x})$ be the pseudo-differential operators of $a_j(i\boldsymbol{\omega}_x)$ and $b_j(i\boldsymbol{\omega}_x)$, defined in terms of their inverse Fourier transforms (Shubin, 1987). Then the corresponding continuous-time system is given by:

$$\alpha_q(\mathbf{x})\frac{d^q f(\mathbf{x},t)}{dt^q} + \cdots + \alpha_0(\mathbf{x})f(\mathbf{x},t) = \beta_k(\mathbf{x})\frac{d^k\epsilon(t)}{dt^k} + \cdots + \beta_0(\mathbf{x})\epsilon(t). \tag{A.11}$$
For a concrete end-to-end example, we consider the simplified setting where the temporal kernel function is given by:

$$K_{\theta_T}(t, t') := k_{\theta_1,\theta_2,\theta_3}(t-t') = \theta_2^2\,\frac{2^{1-\theta_1}}{\Gamma(\theta_1)}\Big(\sqrt{2\theta_1}\,\frac{|t-t'|}{\theta_3}\Big)^{\theta_1} B_{\theta_1}\Big(\sqrt{2\theta_1}\,\frac{|t-t'|}{\theta_3}\Big),$$

where $B_{\theta_1}(\cdot)$ is the modified Bessel function, so $K_{\theta_T}(t,t')$ belongs to the well-known Matérn family. It is straightforward to show that the spectral density function is given by:

$$s(\omega) \propto \Big(\frac{2\theta_1}{\theta_3^2} + \omega^2\Big)^{-(\theta_1 + 1/2)}.$$
As a consequence, we see that $s(\omega) \propto \big(\frac{\sqrt{2\theta_1}}{\theta_3} + i\omega\big)^{-(\theta_1+1/2)}\big(\frac{\sqrt{2\theta_1}}{\theta_3} - i\omega\big)^{-(\theta_1+1/2)}$, so we directly have $G_{\theta_T}(\omega) = \big(\frac{\sqrt{2\theta_1}}{\theta_3} + i\omega\big)^{-(\theta_1+1/2)}$ instead of having to seek a polynomial approximation. Now we can easily expand $G_{\theta_T}(\omega)$ using the binomial formula to find the linear parameters of the continuous-time system. For instance, when $\theta_1 = 3/2$, we have:

$$\frac{d^2f(t)}{dt^2} + \frac{2\sqrt{2\theta_1}}{\theta_3}\frac{df(t)}{dt} + \frac{2\theta_1}{\theta_3^2}f(t) = \epsilon(t).$$
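The Matérn family also gives a convenient end-to-end test of the random feature recipe of Section 3.1, since its spectral density is a scaled Student-t. The following sketch, under the assumption $\theta_2 = 1$ and with an arbitrary $\theta_3$, samples frequencies from $s(\omega)$ and checks that the feature inner products recover the closed-form Matérn-3/2 kernel.

```python
import numpy as np

theta1, theta3, m = 1.5, 2.0, 20000
# s(omega) above is a Student-t density with 2 * theta1 degrees of freedom and scale 1 / theta3.
omega = np.random.standard_t(df=2 * theta1, size=m) / theta3

def phi(t):
    # Random feature map of eq. (7) for a scalar time t.
    return np.concatenate([np.cos(t * omega), np.sin(t * omega)]) / np.sqrt(m)

def matern32(r):
    # Closed-form Matern kernel for theta1 = 3/2 and theta2 = 1.
    u = np.sqrt(3.0) * np.abs(r) / theta3
    return (1.0 + u) * np.exp(-u)

for r in (0.0, 0.5, 1.0, 2.0):
    print(r, phi(0.0) @ phi(r), matern32(r))  # Monte Carlo estimate vs. exact value
```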
A.3.2 PROOF FOR PROPOSITION 1

Proof. We first need to show that the random Fourier features for the non-stationary kernel $K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big)$ can be given by (11), i.e.

$$\phi(\mathbf{x}, t) = \frac{1}{2\sqrt{m}}\Big[\ldots,\; \cos\big([\mathbf{x},t]^\top\boldsymbol{\omega}_{1,i}\big) + \cos\big([\mathbf{x},t]^\top\boldsymbol{\omega}_{2,i}\big),\; \sin\big([\mathbf{x},t]^\top\boldsymbol{\omega}_{1,i}\big) + \sin\big([\mathbf{x},t]^\top\boldsymbol{\omega}_{2,i}\big),\; \ldots\Big].$$
To simplify notation, we let $\mathbf{z} := [\mathbf{x}, t] \in \mathbb{R}^{d+1}$ and $\mathcal{Z} = \mathcal{X}\times\mathcal{T}$. For non-stationary kernels, the corresponding Fourier transform can be characterized by the following lemma. Assume without loss of generality that $K_T$ is differentiable.
Lemma A.1 (Yaglom (1987)). A non-stationary kernel $k(\mathbf{z}_1, \mathbf{z}_2)$ is positive definite in $\mathbb{R}^d$ if and only if, after scaling, it has the form:

$$k(\mathbf{z}_1, \mathbf{z}_2) = \int \exp\big(i(\boldsymbol{\omega}_1^\top\mathbf{z}_1 - \boldsymbol{\omega}_2^\top\mathbf{z}_2)\big)\,\mu(d\boldsymbol{\omega}_1, d\boldsymbol{\omega}_2), \tag{A.12}$$

where $\mu(d\boldsymbol{\omega}_1, d\boldsymbol{\omega}_2)$ is some positive-semidefinite probability measure with bounded variation.
The above lemma can be thought of as an extension of the classical Bochner's theorem that underlies the random Fourier features for stationary kernels. Notice that when the covariance function of the measure $\mu$ has only non-zero diagonal elements and $\boldsymbol{\omega}_1 = \boldsymbol{\omega}_2$, we recover the spectral representation stated in Bochner's theorem. Therefore, we can also approximate (A.12) with a Monte Carlo integral. However, we need to ensure the positive-semidefiniteness of the spectral density for $\mu(d\boldsymbol{\omega}_1, d\boldsymbol{\omega}_2)$, which we denote by $p(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2)$. It has been suggested in Remes et al. (2017) that we consider another density function $q(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2)$, let $p$ be taken on the product space of $q$, and then symmetrize:
$$p(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2) = \frac{1}{4}\Big(q(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2) + q(\boldsymbol{\omega}_2, \boldsymbol{\omega}_1) + q(\boldsymbol{\omega}_1, \boldsymbol{\omega}_1) + q(\boldsymbol{\omega}_2, \boldsymbol{\omega}_2)\Big). \tag{A.13}$$
Then (A.12) suggests that

$$k(\mathbf{z}_1, \mathbf{z}_2) = \frac{1}{4}\mathbb{E}_q\Big[\exp\big(i(\boldsymbol{\omega}_1^\top\mathbf{z}_1 - \boldsymbol{\omega}_2^\top\mathbf{z}_2)\big) + \exp\big(i(\boldsymbol{\omega}_2^\top\mathbf{z}_1 - \boldsymbol{\omega}_1^\top\mathbf{z}_2)\big) + \exp\big(i(\boldsymbol{\omega}_1^\top\mathbf{z}_1 - \boldsymbol{\omega}_1^\top\mathbf{z}_2)\big) + \exp\big(i(\boldsymbol{\omega}_2^\top\mathbf{z}_1 - \boldsymbol{\omega}_2^\top\mathbf{z}_2)\big)\Big].$$

Recall that the real part of $\exp\big(i(\boldsymbol{\omega}_1^\top\mathbf{z}_1 - \boldsymbol{\omega}_2^\top\mathbf{z}_2)\big)$ is given by $\cos(\boldsymbol{\omega}_1^\top\mathbf{z}_1 - \boldsymbol{\omega}_2^\top\mathbf{z}_2)$. So with the trigonometric identities, it is straightforward to verify that $k(\mathbf{z}_1, \mathbf{z}_2) = \mathbb{E}_q\big[\phi(\mathbf{z}_1)^\top\phi(\mathbf{z}_2)\big]$. Hence, the random Fourier features for the non-stationary kernel can be given in the form of (11).
Then we show the uniform convergence result as the number of samples goes to infinity when computing $\mathbb{E}_q\big[\phi(\mathbf{z})^\top\phi(\mathbf{z}')\big]$ by the Monte Carlo integral. Let $\tilde{\mathcal{Z}} = \mathcal{Z}\times\mathcal{Z}$, so $\tilde{\mathcal{Z}} = \big\{(\mathbf{x}, t, \mathbf{x}', t') : \mathbf{x}, \mathbf{x}' \in \mathcal{X};\; t, t' \in \mathcal{T}\big\}$. Since $\mathrm{diam}(\mathcal{X}) = l$ and $\mathcal{T} = [0, t_{\max}]$, we have $\mathrm{diam}(\tilde{\mathcal{Z}}) = l^2t_{\max}^2$. Let the approximation error be

$$\Delta(\mathbf{z}, \mathbf{z}') = \phi(\mathbf{z})^\top\phi(\mathbf{z}') - K_T(\mathbf{z}, \mathbf{z}'). \tag{A.14}$$

The strategy is to use an $\epsilon$-net covering of the input space $\tilde{\mathcal{Z}}$, which requires $N = \big(2l^2t_{\max}^2/r\big)^{d+1}$ balls of radius $r$. Let $C = \{\mathbf{c}_i\}_{i=1}^N$ be the centers of the balls. We first bound $|\Delta(\mathbf{c}_i)|$ and the Lipschitz constant $L_\Delta$ of the error function $\Delta$, and then combine the two to get the desired result.
Since $\Delta$ is continuous and differentiable w.r.t. $\mathbf{z}, \mathbf{z}'$ by the definition of $\phi$, we have $L_\Delta = \|\nabla\Delta(\mathbf{c}^*)\|$, where $\mathbf{c}^* = \arg\max_{\mathbf{c}\in\tilde{\mathcal{Z}}}\|\nabla\Delta(\mathbf{c})\|$. Let $\mathbf{c}^* = (\tilde{\mathbf{z}}, \tilde{\mathbf{z}}')$. By checking the regularity conditions for exchanging the integral and differential operations, we verify that $\mathbb{E}\big[\nabla\phi(\mathbf{z})^\top\phi(\mathbf{z}')\big] = \nabla\mathbb{E}\big[\phi(\mathbf{z})^\top\phi(\mathbf{z}')\big] = \nabla K_T(\mathbf{z}, \mathbf{z}')$. We do not present the details here, since it is easy to check the regularity of $\phi(\mathbf{z})^\top\phi(\mathbf{z}')$: it consists of sine and cosine functions, which are continuous, bounded, and have continuous bounded derivatives. Hence, we have:
$$\begin{aligned}
\mathbb{E}\big[L_\Delta^2\big] &= \mathbb{E}_{\tilde{\mathbf{z}},\tilde{\mathbf{z}}'}\Big[\big\|\nabla\phi(\tilde{\mathbf{z}})^\top\phi(\tilde{\mathbf{z}}') - \nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\big\|^2\Big] \\
&= \mathbb{E}_{\tilde{\mathbf{z}},\tilde{\mathbf{z}}'}\Big[\mathbb{E}\big\|\nabla\phi(\tilde{\mathbf{z}})^\top\phi(\tilde{\mathbf{z}}')\big\|^2 - 2\big\|\nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\big\|\cdot\big\|\nabla\phi(\tilde{\mathbf{z}})^\top\phi(\tilde{\mathbf{z}}')\big\| + \big\|\nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\big\|^2\Big] \\
&\le \mathbb{E}_{\tilde{\mathbf{z}},\tilde{\mathbf{z}}'}\Big[\mathbb{E}\big\|\nabla\phi(\tilde{\mathbf{z}})^\top\phi(\tilde{\mathbf{z}}')\big\|^2 - \big\|\nabla K_T(\tilde{\mathbf{z}},\tilde{\mathbf{z}}')\big\|^2\Big] \quad \text{(by Jensen's inequality)} \\
&\le \mathbb{E}\big\|\nabla\phi(\tilde{\mathbf{z}})^\top\phi(\tilde{\mathbf{z}}')\big\|^2 \\
&= \mathbb{E}\Big\|\nabla\Big(\big(\cos(\tilde{\mathbf{z}}^\top\boldsymbol{\omega}_1) + \cos(\tilde{\mathbf{z}}^\top\boldsymbol{\omega}_2)\big)\big(\cos((\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_1) + \cos((\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_2)\big) \\
&\qquad\qquad + \big(\sin(\tilde{\mathbf{z}}^\top\boldsymbol{\omega}_1) + \sin(\tilde{\mathbf{z}}^\top\boldsymbol{\omega}_2)\big)\big(\sin((\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_1) + \sin((\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_2)\big)\Big)\Big\|^2 \\
&= 2\,\mathbb{E}\Big\|\boldsymbol{\omega}_1\big(\sin(\tilde{\mathbf{z}}^\top\boldsymbol{\omega}_1 - (\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_1) + \sin((\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_2 - \tilde{\mathbf{z}}^\top\boldsymbol{\omega}_1)\big) \\
&\qquad\qquad + \boldsymbol{\omega}_2\big(\sin(\tilde{\mathbf{z}}^\top\boldsymbol{\omega}_1 - (\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_2) + \sin((\tilde{\mathbf{z}}')^\top\boldsymbol{\omega}_2 - \tilde{\mathbf{z}}^\top\boldsymbol{\omega}_2)\big)\Big\|^2 \\
&\le 8\,\mathbb{E}\big\|[\boldsymbol{\omega}_1, \boldsymbol{\omega}_2]\big\|^2 = 8\sigma_p^2.
\end{aligned} \tag{A.15}$$
Hence, by Markov's inequality, we have

$$p\Big(L_\Delta \ge \frac{\epsilon}{2r}\Big) \le 8\sigma_p^2\Big(\frac{2r}{\epsilon}\Big)^2. \tag{A.16}$$
Then we notice that for all $\mathbf{c}\in C$, $\Delta(\mathbf{c})$ is the mean of $m/2$ terms bounded in $[-1, 1]$ with expectation $0$. So applying a union bound and Hoeffding's inequality for bounded random variables, we have:
$$p\Big(\bigcup_{i=1}^N |\Delta(\mathbf{c}_i)| \ge \frac{\epsilon}{2}\Big) \le 2N\exp\Big(-\frac{m\epsilon^2}{16}\Big). \tag{A.17}$$
Combining the above results, we get

$$\begin{aligned} p\Big(\sup_{(\mathbf{z},\mathbf{z}')\in\tilde{\mathcal{Z}}}\big|\Delta(\mathbf{z},\mathbf{z}')\big| \le \epsilon\Big) &\ge 1 - 32\sigma_p^2 r^2 \epsilon^{-2} - 2\Big(\frac{2l^2 t_{\max}^2}{r}\Big)^{d+1}\exp\Big(-\frac{m\epsilon^2}{16}\Big) \\ &\ge 1 - C(d)\Big(\frac{l^2 t_{\max}^2\sigma_p}{\epsilon}\Big)^{\frac{2(d+1)}{d+3}}\exp\Big(-\frac{m\epsilon^2}{8(d+3)}\Big), \end{aligned} \tag{A.18}$$

where in the second inequality we optimize over $r$, which gives $r^* = \big(\frac{(d+1)k_1}{k_2}\big)^{1/(d+3)}$ with $k_1 = 2(2l^2t_{\max}^2)^{d+1}\exp(-m\epsilon^2/16)$ and $k_2 = 32\sigma_p^2\epsilon^{-2}$. The constant term is given by $C(d) = 2^{\frac{7d+9}{d+3}}\Big(\big(\frac{d+1}{2}\big)^{-\frac{d+1}{d+3}} + \big(\frac{d}{2}\big)^{\frac{2}{d+3}}\Big)$.
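For completeness, the following sketch instantiates the non-stationary random feature map from this proof: pairs $(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2)$ are drawn from a joint density $q$ (a toy correlated Gaussian here, purely an illustrative assumption), and the inner product of the features gives a Monte Carlo estimate of the symmetrized kernel induced by (A.13).

```python
import numpy as np

d, m = 3, 5000
rng = np.random.default_rng(0)
# Toy joint density q over (omega_1, omega_2): a correlated Gaussian whose
# covariance couples the two frequency blocks (an illustrative choice).
dim = 2 * (d + 1)
cov = 0.5 * np.eye(dim) + 0.5  # 0.5 * I + 0.5 * (all-ones), positive definite
W = rng.multivariate_normal(np.zeros(dim), cov, size=m)
W1, W2 = W[:, :d + 1], W[:, d + 1:]

def phi(x, t):
    z = np.append(x, t)  # z = [x, t]
    return np.concatenate([np.cos(W1 @ z) + np.cos(W2 @ z),
                           np.sin(W1 @ z) + np.sin(W2 @ z)]) / (2 * np.sqrt(m))

x, xp = rng.standard_normal(d), rng.standard_normal(d)
print(phi(x, 0.3) @ phi(xp, 1.2))  # Monte Carlo estimate of K_T((x, 0.3), (x', 1.2))
```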
A.3.3 THE GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL AFTER COMPOSING WITH THE TEMPORAL KERNEL VIA THE RANDOM FEATURE REPRESENTATION
This section is dedicated to showing the infinite-width Gaussian process behavior and the neural tangent kernel properties, analogous to those discussed in Appendix A.2, when composing neural networks in the feature space with the random feature representation of the temporal kernel.
For brevity, we still consider the standard L-layer FFN of (A.1). Suppose we compose the FFN with the random feature representation $\phi(\mathbf{x}, t)$ at the $k$th layer. The neural network kernels for the first $k-1$ layers are unchanged, so we compute them in the usual way as in (A.2). For the $k$th layer, it is straightforward to verify that:

$$\lim_{d_k\to\infty} \mathbb{E}\Big[\frac{1}{d_k}\big\langle \mathbf{W}^{(k)} f^{(k-1)}(\theta,\mathbf{x})\circ\phi(\mathbf{x},t),\; \mathbf{W}^{(k)} f^{(k-1)}(\theta,\mathbf{x}')\circ\phi(\mathbf{x}',t')\big\rangle \,\Big|\, f^{(k-1)}\Big] = \Sigma^{(k)}(\mathbf{x},\mathbf{x}')\cdot K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big).$$
The intuition is that the randomness in $\mathbf{W}$ (and thus in $f(\theta,\cdot)$) and in $\phi(\cdot,\cdot)$ are independent: the former is caused by the network parameter initialization and the latter is induced by the random features. The covariance functions for the subsequent layers can be derived by induction; e.g., for the $(k+1)$th layer we have:
$$\Sigma^{(k+1)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big) = \mathbb{E}_{f\sim\mathcal{N}(0,\,\Sigma^{(k)}\otimes K_T)}\big[\sigma(f(\mathbf{x},t))\,\sigma(f(\mathbf{x}',t'))\big].$$
In summary, composing the FFN, at any given layer, with the temporal kernel via its random feature representation does not change the infinite-width Gaussian process behavior. The statement holds for all deep learning models that exhibit the Gaussian process behavior, which covers most standard neural architectures including RNN, CNN and the attention mechanism (Yang, 2019).
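The following sketch, with illustrative shapes and Gaussian spectral samples as assumptions, shows the forward computation of the composed representation $\mathrm{vec}\big(f^{(k)}(\theta,\mathbf{x})\circ\phi(\mathbf{x},t)\big)$ that the induction above reasons about; a simple single-frequency feature map is used for brevity.

```python
import torch

batch, d, d_k, m = 32, 8, 64, 16
f_k = torch.randn(batch, d_k)     # stand-in for the hidden features f^{(k)}(theta, x)
omega = torch.randn(m, d + 1)     # stand-in samples from the spectral density

def temporal_phi(x, t):
    # Random features on z = [x, t].
    z = torch.cat([x, t], dim=-1)            # (batch, d + 1)
    proj = z @ omega.T                       # (batch, m)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1) / m ** 0.5

x, t = torch.randn(batch, d), torch.rand(batch, 1)
phi = temporal_phi(x, t)                                  # (batch, 2m)
f_kt = torch.einsum('bi,bj->bij', f_k, phi).flatten(1)    # vec of the outer product
print(f_kt.shape)                                         # (batch, d_k * 2m)
```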
The derivations for the NTK, however, are more involved since the gradients on all the layers are affected. We summarize the result for the L-layer FFN in the following proposition and provide the derivations afterwards.
Proposition A.1. Suppose $f^{(k)}\big(\theta, (\mathbf{x},t)\big) = \mathrm{vec}\big(f^{(k)}(\theta,\mathbf{x})\circ\phi(\mathbf{x},t)\big)$ in the standard L-layer FFN. Let $\Sigma^{(h)}_T = \Sigma^{(h)}$ for $h = 1,\ldots,k-1$, $\Sigma^{(k)}_T = \Sigma^{(k)}\otimes K_T$, and $\Sigma^{(h)}_T = \mathbb{E}_{f\sim\mathcal{N}(0,\,\Sigma^{(h-1)}_T)}\big[\sigma(f)\sigma(f)\big] + 1$ for $h = k+1,\ldots,L$. If the activation functions $\sigma$ have polynomially bounded weak derivatives, then as the network widths $d_1,\ldots,d_L\to\infty$, the neural tangent kernel $\Theta^{(L)}$ converges almost surely to $\Theta^{(L)}_T$, whose partial application on the parameters $\{\mathbf{W}^{(h)}, \mathbf{b}^{(h)}\}$ in the $h$th layer is given recursively by:

$$\Theta^{(h)}_T = \Sigma^{(h)}_T, \qquad \Theta^{(k)}_T = \Theta^{(k-1)}_T\otimes\dot{\Sigma}^{(k)}_T + \Sigma^{(k)}_T, \quad k = h+1,\ldots,L. \tag{A.19}$$
Proof. The strategies for deriving the NTK and showing its convergence have been discussed in Jacot et al. (2018); Yang (2019); Arora et al. (2019a). The key purpose of presenting the derivations here is to show how the convergence results for the neural-temporal Gaussian process (Section 4.2) affect the NTK. To avoid the cumbersome notation induced by the peripheral intercept terms, we omit the intercepts $\mathbf{b}$ in the FFN without loss of generality. We let $\mathbf{g}^{(h)} = \frac{1}{\sqrt{d_h}}\sigma\big(f^{(h)}(\mathbf{x},t)\big)$, so the FFN can be equivalently defined via the recursion $f^{(h)} = \mathbf{W}^{(h)}\mathbf{g}^{(h-1)}(\mathbf{x},t)$. For the final output $f\big(\theta,(\mathbf{x},t)\big) := \mathbf{W}^{(L)}f^{(L)}(\mathbf{x},t)$, the partial derivative with respect to $\mathbf{W}^{(h)}$ is given by:
$$\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial\mathbf{W}^{(h)}} = \mathbf{z}^{(h)}(\mathbf{x},t)\big(\mathbf{g}^{(h-1)}(\mathbf{x},t)\big)^\top, \tag{A.20}$$
with $\mathbf{z}^{(h)}$ defined by:

$$\mathbf{z}^{(h)}(\mathbf{x},t) = \begin{cases} 1, & h = L, \\ \frac{1}{\sqrt{d_h}}\mathbf{D}^{(h)}(\mathbf{x},t)\big(\mathbf{W}^{(h+1)}\big)^\top\mathbf{z}^{(h+1)}(\mathbf{x},t), & h = 1,\ldots,L-1, \end{cases} \tag{A.21}$$

where

$$\mathbf{D}^{(h)}(\mathbf{x},t) = \begin{cases} \mathrm{diag}\Big(\dot{\sigma}\big(f^{(h)}(\mathbf{x},t)\big)\Big), & h = k,\ldots,L-1, \\ \mathrm{diag}\Big(\dot{\sigma}\big(f^{(h)}(\mathbf{x})\big)\Big), & h = 1,\ldots,k-1. \end{cases}$$
Using the above definitions, we have:

$$\Big\langle\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial\mathbf{W}^{(h)}}, \frac{\partial f\big(\theta,(\mathbf{x}',t')\big)}{\partial\mathbf{W}^{(h)}}\Big\rangle = \big\langle\mathbf{z}^{(h)}(\mathbf{x},t)\big(\mathbf{g}^{(h-1)}(\mathbf{x},t)\big)^\top,\; \mathbf{z}^{(h)}(\mathbf{x}',t')\big(\mathbf{g}^{(h-1)}(\mathbf{x}',t')\big)^\top\big\rangle = \big\langle\mathbf{g}^{(h-1)}(\mathbf{x},t), \mathbf{g}^{(h-1)}(\mathbf{x}',t')\big\rangle\cdot\big\langle\mathbf{z}^{(h)}(\mathbf{x},t), \mathbf{z}^{(h)}(\mathbf{x}',t')\big\rangle.$$
We have established in Section 4.2 that

$$\big\langle\mathbf{g}^{(h-1)}(\mathbf{x},t), \mathbf{g}^{(h-1)}(\mathbf{x}',t')\big\rangle \to \Sigma^{(h-1)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big),$$

where

$$\Sigma^{(h)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big) = \begin{cases} \Sigma^{(h)}(\mathbf{x},\mathbf{x}'), & h = 1,\ldots,k-1, \\ \Sigma^{(h)}(\mathbf{x},\mathbf{x}')\cdot K_T\big((\mathbf{x},t),(\mathbf{x}',t')\big), & h = k, \\ \mathbb{E}_{f\sim\mathcal{N}(0,\,\Sigma^{(h-1)}_T)}\big[\sigma(f(\mathbf{x},t))\,\sigma(f(\mathbf{x}',t'))\big], & h = k+1,\ldots,L. \end{cases} \tag{A.22}$$
By the definition of $\mathbf{z}^{(h)}$, we get

$$\begin{aligned} \big\langle\mathbf{z}^{(h)}(\mathbf{x},t), \mathbf{z}^{(h)}(\mathbf{x}',t')\big\rangle &= \frac{1}{d_h}\big\langle\mathbf{D}^{(h)}(\mathbf{x},t)\big(\mathbf{W}^{(h+1)}\big)^\top\mathbf{z}^{(h+1)}(\mathbf{x},t),\; \mathbf{D}^{(h)}(\mathbf{x}',t')\big(\mathbf{W}^{(h+1)}\big)^\top\mathbf{z}^{(h+1)}(\mathbf{x}',t')\big\rangle \\ &\approx \frac{1}{d_h}\big\langle\mathbf{D}^{(h)}(\mathbf{x},t)\big(\mathbf{W}^{(h+1)}\big)^\top\mathbf{z}^{(h+1)}(\mathbf{x},t),\; \mathbf{D}^{(h)}(\mathbf{x}',t')\big(\widetilde{\mathbf{W}}^{(h+1)}\big)^\top\mathbf{z}^{(h+1)}(\mathbf{x}',t')\big\rangle \\ &\to \frac{1}{d_h}\mathrm{tr}\Big(\mathbf{D}^{(h)}(\mathbf{x},t)\,\mathbf{D}^{(h)}(\mathbf{x}',t')\Big)\,\big\langle\mathbf{z}^{(h+1)}(\mathbf{x},t), \mathbf{z}^{(h+1)}(\mathbf{x}',t')\big\rangle \\ &\to \dot{\Sigma}^{(h)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big)\,\big\langle\mathbf{z}^{(h+1)}(\mathbf{x},t), \mathbf{z}^{(h+1)}(\mathbf{x}',t')\big\rangle. \end{aligned} \tag{A.23}$$
The approximation in the second line replaces the $\mathbf{W}^{(h+1)}$ in the right half by its i.i.d. copy under Gaussian initialization. This does not change the limit as $d_h\to\infty$ when the activation functions have polynomially bounded weak derivatives (Yang, 2019), such as the ReLU activation. Carrying out (A.23) recursively, we see that

$$\big\langle\mathbf{z}^{(h)}(\mathbf{x},t), \mathbf{z}^{(h)}(\mathbf{x}',t')\big\rangle \to \prod_{j=h}^{L-1}\dot{\Sigma}^{(j)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big).$$
Finally, we have:

$$\Big\langle\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial\theta}, \frac{\partial f\big(\theta,(\mathbf{x}',t')\big)}{\partial\theta}\Big\rangle = \sum_{h=1}^L\Big\langle\frac{\partial f\big(\theta,(\mathbf{x},t)\big)}{\partial\mathbf{W}^{(h)}}, \frac{\partial f\big(\theta,(\mathbf{x}',t')\big)}{\partial\mathbf{W}^{(h)}}\Big\rangle = \sum_{h=1}^L\Big(\Sigma^{(h)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big)\cdot\prod_{j=h}^L\dot{\Sigma}^{(j)}_T\big((\mathbf{x},t),(\mathbf{x}',t')\big)\Big). \tag{A.24}$$

Notice that we used a more compact recursive formulation to state the results in Proposition A.1; it is easy to verify that after expansion we reach the desired result.
Compared with the original NTK before composing with the temporal kernel (given by (A.4)), the result in Proposition A.1 shares a similar recursive structure. As a consequence, previous results for the NTK can be directly adapted to our setting. We list two examples here.
• Following Jacot et al. (2018), given a training dataset $\{\mathbf{x}_i, t_i, y_i(t_i)\}_{i=1}^n$, let $f_T\big(\theta^{(s)}\big) = \big(f(\theta^{(s)},\mathbf{x}_1,t_1), \ldots, f(\theta^{(s)},\mathbf{x}_n,t_n)\big)$ be the network outputs at the $s$th training step and $\mathbf{y}_T = \big(y_1(t_1),\ldots,y_n(t_n)\big)$. The analysis of the optimization trajectory under an infinitesimal learning rate can be conducted via:

$$\frac{df_T\big(\theta^{(s)}\big)}{ds} = -\Theta_T(s)\times\big(f_T\big(\theta^{(s)}\big) - \mathbf{y}_T\big),$$

where $\Theta_T(s)$ converges almost surely to the NTK $\Theta^{(L)}_T$ of Proposition A.1.
• Following Allen-Zhu et al. (2019) and Arora et al. (2019b), the generalization performance of the composed time-aware neural network can be explicitly characterized according to the properties of $\Theta^{(L)}_T$.
A.3.4 PROOF FOR THEOREM 1
Proof. We first present a technical lemma that is crucial for establishing the duality result under the distributional constraint $d_f(S_{\theta_T}\|S) \le \delta$. Recall that the hidden dimension of the $k$th layer is $d_k$.
Lemma A.2 (Ben-Tal et al. (2013)). Let $f$ be any closed convex function with domain $[0, +\infty)$, whose conjugate is given by $f^*(s) = \sup_{t\ge 0}\{ts - f(t)\}$. Then for any distribution $S$ and any function $g: \mathbb{R}^{d_k+1}\to\mathbb{R}$, we have

$$\sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}\int g(\boldsymbol{\omega})\,dS_{\theta_T}(\boldsymbol{\omega}) = \inf_{\lambda\ge 0,\,\eta}\Big\{\lambda\int f^*\Big(\frac{g(\boldsymbol{\omega}) - \eta}{\lambda}\Big)dS(\boldsymbol{\omega}) + \delta\lambda + \eta\Big\}. \tag{A.25}$$
We work with a scaled version of the f-divergence under $f(t) = \frac{1}{k}(t^k - 1)$ (because its dual function has a cleaner form), where the constraint set is now equivalent to $\{S_{\theta_T}: d_f(S_{\theta_T}\|S)\le\delta/k\}$. It is easy to check that $f^*(s) = \frac{1}{k'}[s]_+^{k'} + \frac{1}{k}$ with $\frac{1}{k'} + \frac{1}{k} = 1$.
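As a quick numerical sanity check of this conjugate pair (a sketch, with an arbitrary grid and test points), one can compare the supremum $\sup_{t\ge 0}\{ts - f(t)\}$ computed on a grid against the stated closed form:

```python
import numpy as np

k = 2.0
kp = k / (k - 1.0)                  # 1/k + 1/k' = 1
t = np.linspace(0.0, 50.0, 2_000_001)

def conjugate(s):
    return np.max(s * t - (t ** k - 1.0) / k)  # sup_{t >= 0} {st - f(t)} on the grid

for s in (-1.0, 0.0, 0.5, 2.0):
    closed = max(s, 0.0) ** kp / kp + 1.0 / k  # f*(s) = [s]_+^{k'} / k' + 1/k
    print(s, conjugate(s), closed)
```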
Similar to the proof of Proposition 1, we let $\mathbf{z} := [\mathbf{x},t]\in\mathbb{R}^{d+1}$ and $\mathcal{Z} = \mathcal{X}\times\mathcal{T}$ to simplify notation. To explicitly annotate the dependency of the random Fourier features on $\boldsymbol{\Omega}$, the random variable corresponding to $\boldsymbol{\omega}$, we define $\tilde{\phi}(\mathbf{z}, \boldsymbol{\Omega})$ such that $\tilde{\phi}(\mathbf{z},\boldsymbol{\Omega}) = \big[\cos(\mathbf{z}^\top\boldsymbol{\Omega}_1) + \cos(\mathbf{z}^\top\boldsymbol{\Omega}_2),\; \sin(\mathbf{z}^\top\boldsymbol{\Omega}_1) + \sin(\mathbf{z}^\top\boldsymbol{\Omega}_2)\big]$, where $\boldsymbol{\Omega} = [\boldsymbol{\Omega}_1, \boldsymbol{\Omega}_2]$. Then the approximation error incurred when replacing the sampled Fourier features $\phi$ by the original random variable $\tilde{\phi}(\mathbf{z},\boldsymbol{\Omega})$ is given by:
$$\begin{aligned} \Delta_n(\boldsymbol{\Omega}) &:= \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\,\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega}) - \mathbb{E}\Big[\Sigma^{(k)}(\mathbf{X}_i,\mathbf{X}_j)\,K_{T,S_{\theta_T}}\big((\mathbf{X}_i,T_i),(\mathbf{X}_j,T_j)\big)\Big] \\ &= \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\,\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega}) - \mathbb{E}\Big[\Sigma^{(k)}(\mathbf{X},\mathbf{X}')\,\tilde{\phi}(\mathbf{Z},\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{Z}',\boldsymbol{\Omega})\Big]. \end{aligned} \tag{A.26}$$
We first show the sub-Gaussianity of $\Delta_n(\boldsymbol{\Omega})$. Let $\{\mathbf{x}'_i\}_{i=1}^n$ be an i.i.d. copy of the observations, identical except for one element $j$ such that $\mathbf{x}_j\ne\mathbf{x}'_j$. Without loss of generality, we assume the last element differs, i.e. $\mathbf{x}_n\ne\mathbf{x}'_n$. Let $\Delta'_n(\boldsymbol{\Omega})$ be computed by replacing $\mathbf{x}$ and $\mathbf{z}$ with the above $\mathbf{x}'$ and its corresponding $\mathbf{z}'$. Note that
$$\begin{aligned} \big|\Delta_n(\boldsymbol{\Omega}) - \Delta'_n(\boldsymbol{\Omega})\big| &= \frac{1}{n(n-1)}\Big|\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\,\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega}) - \Sigma^{(k)}(\mathbf{x}'_i,\mathbf{x}'_j)\,\tilde{\phi}(\mathbf{z}'_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}'_j,\boldsymbol{\Omega})\Big| \\ &\le \frac{1}{n(n-1)}\Big(\sum_{i<n}\big|\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_n)\,\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_n,\boldsymbol{\Omega}) - \Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}'_n)\,\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}'_n,\boldsymbol{\Omega})\big| \\ &\qquad + \sum_{j<n}\big|\Sigma^{(k)}(\mathbf{x}_n,\mathbf{x}_j)\,\tilde{\phi}(\mathbf{z}_n,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega}) - \Sigma^{(k)}(\mathbf{x}'_n,\mathbf{x}_j)\,\tilde{\phi}(\mathbf{z}'_n,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega})\big|\Big) \\ &\le \frac{4\max\{1, M\}}{n}, \end{aligned} \tag{A.27}$$
where the last inequality comes from the fact that the random Fourier features $\tilde{\phi}$ are bounded by 1 and the infinity norm of $\Sigma^{(k)}$ is bounded by $M$. The above bounded difference property suggests that $\Delta_n(\boldsymbol{\Omega})$ is a $\frac{4\max\{1,M\}}{n}$-sub-Gaussian random variable.
To bound $\Delta_n(\boldsymbol{\Omega})$, we use:

$$\begin{aligned} \sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}\Big|\int\Delta_n(\boldsymbol{\Omega})\,dS_{\theta_T}\Big| &\le \sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}\int\big|\Delta_n(\boldsymbol{\Omega})\big|\,dS_{\theta_T} \\ &\le \inf_{\lambda\ge 0}\Big\{\frac{\lambda^{1-k'}}{k'}\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^{k'}\big] + \frac{\lambda(\delta+1)}{k}\Big\} && \text{(using Lemma A.2)} \\ &= (\delta+1)^{1/k}\,\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^{k'}\big]^{1/k'} && \text{(solving for $\lambda^*$ above)} \\ &= \sqrt{\delta+1}\,\mathbb{E}_S\big[|\Delta_n(\boldsymbol{\Omega})|^2\big]^{1/2} && \text{(taking $k = k' = 2$)}. \end{aligned} \tag{A.28}$$
Therefore, to bound $\sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}\big|\int\Delta_n(\boldsymbol{\Omega})\,dS_{\theta_T}\big|$ we simply need to bound $\mathbb{E}\big[|\Delta_n(\boldsymbol{\Omega})|^2\big]$. Using the classical results for sub-Gaussian random variables (Boucheron et al., 2013), for $\lambda\le n/8$ we have

$$\mathbb{E}\Big[\exp\big(\lambda\,\Delta_n(\boldsymbol{\Omega})^2\big)\Big] \le \exp\Big(-\frac{1}{2}\log\big(1 - 8\max\{1,M\}\lambda/n\big)\Big).$$
Then we take the integral over $\boldsymbol{\omega}$:

$$\begin{aligned} p\Big(\int\Delta_n(\boldsymbol{\omega})^2\,dS(\boldsymbol{\omega}) \ge \frac{\epsilon^2}{\delta+1}\Big) &\le \mathbb{E}\Big[\exp\Big(\lambda\int\Delta_n(\boldsymbol{\omega})^2\,dS(\boldsymbol{\omega})\Big)\Big]\exp\Big(-\frac{\lambda\epsilon^2}{\delta+1}\Big) && \text{(Chernoff bound)} \\ &\le \exp\Big(-\frac{1}{2}\log\Big(1 - \frac{8\max\{1,M\}\lambda}{n}\Big) - \frac{\lambda\epsilon^2}{\delta+1}\Big) && \text{(by Jensen's inequality)}. \end{aligned} \tag{A.29}$$
Finally, let the true approximation error be $\hat{\Delta}_n(\boldsymbol{\omega}) = \hat{\Sigma}^{(k)}_T(S_{\theta_T}) - \Sigma^{(k)}_T(S_{\theta_T})$. Notice that

$$\big|\hat{\Delta}_n(\boldsymbol{\omega})\big| \le \big|\Delta_n(\boldsymbol{\Omega})\big| + \frac{1}{n(n-1)}\sum_{i\ne j}\Sigma^{(k)}(\mathbf{x}_i,\mathbf{x}_j)\big|\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega}) - \phi(\mathbf{z}_i)^\top\phi(\mathbf{z}_j)\big|.$$
From (A.28) and (A.29), we can bound $\sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}\Delta_n(\boldsymbol{\Omega})$. For the second term, recall from Proposition 1 that we have shown a stochastic uniform convergence bound for $\big|\tilde{\phi}(\mathbf{z}_i,\boldsymbol{\Omega})^\top\tilde{\phi}(\mathbf{z}_j,\boldsymbol{\Omega}) - \phi(\mathbf{z}_i)^\top\phi(\mathbf{z}_j)\big|$ under any distribution $S_{\theta_T}$. The desired bound for $p\big(\sup_{S_{\theta_T}:\, d_f(S_{\theta_T}\|S)\le\delta}|\hat{\Delta}_n(\boldsymbol{\omega})| \ge \epsilon\big)$ is obtained after combining all the above results.
A.3.5 REPARAMETRIZATION WITH INVERTIBLE NEURAL NETWORK
In this part, we discuss the idea of constructing and sampling from an arbitrarily complex distribution from a known auxiliary distribution via a sequence of invertible transformations. Given an auxiliary random variable $\mathbf{z}$ following some known distribution $q(\mathbf{z})$, suppose another random variable $\mathbf{x}$ is constructed via a one-to-one mapping $g$, i.e. $\mathbf{x} = g(\mathbf{z})$. Then the density of $\mathbf{x}$ follows from the change-of-variable formula: $p(\mathbf{x}) = q(\mathbf{z})\big|\det\big(\partial g(\mathbf{z})/\partial\mathbf{z}\big)\big|^{-1}$. The invertible neural network parameterizes $g$ as a sequence of invertible transformations so that both the inverse mapping and the Jacobian determinant remain tractable, which enables the reparameterized gradient computation described in Algorithm 1.
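As an illustration of such a construction, the sketch below implements a single affine coupling block (a RealNVP-style choice, not a specification from the paper): it is exactly invertible and exposes the log-determinant of its Jacobian, which is all that the reparameterized gradient computation requires.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One invertible block: the first half of z passes through unchanged and
    # parameterizes an affine map of the second half, so both the inverse and
    # log|det J| are available in closed form. The nets s, t are toy choices.
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.half = dim // 2
        self.s = nn.Sequential(nn.Linear(self.half, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim - self.half))
        self.t = nn.Sequential(nn.Linear(self.half, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim - self.half))

    def forward(self, z):                          # z -> x, with log-det Jacobian
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s = self.s(z1)
        x2 = z2 * torch.exp(s) + self.t(z1)
        return torch.cat([z1, x2], dim=-1), s.sum(dim=-1)

    def inverse(self, x):                          # exact inverse x -> z
        x1, x2 = x[:, :self.half], x[:, self.half:]
        z2 = (x2 - self.t(x1)) * torch.exp(-self.s(x1))
        return torch.cat([x1, z2], dim=-1)

g = AffineCoupling(dim=4)
z = torch.randn(8, 4)                              # samples from the auxiliary q(z)
x, logdet = g(z)                                   # reparameterized samples
print(torch.allclose(g.inverse(x), z, atol=1e-5))  # True: the map is invertible
```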
ICLR | Title
A Temporal Kernel Approach for Deep Learning with Continuous-time Information
Abstract
Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with the existing deep learning architectures and implementations. In this paper, we provide a principled way to characterize continuous-time systems using deep learning tools. Notably, the proposed approach applies to all the major deep learning architectures and requires little modifications to the implementation. The critical insight is to represent the continuous-time system by composing neural networks with a temporal kernel, where we gain our intuition from the recent advancements in understanding deep learning with Gaussian process and neural tangent kernel. To represent the temporal kernel, we introduce the random feature approach and convert the kernel learning problem to spectral density estimation under reparameterization. We further prove the convergence and consistency results even when the temporal kernel is non-stationary, and the spectral density is misspecified. The simulations and real-data experiments demonstrate the empirical effectiveness of our temporal kernel approach in a broad range of settings.
1 INTRODUCTION
Deep learning models have achieved remarkable performances in sequence learning tasks leveraging the powerful building blocks from recurrent neural networks (RNN) (Mikolov et al., 2010), longshort term memory (LSTM) (Hochreiter & Schmidhuber, 1997), causal convolution neural network (CausalCNN/WaveNet) (Oord et al., 2016) and attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017). Their applicability to the continuous-time data, on the other hand, is less explored due to the complication of incorporating time when the sequence is irregularly sampled (spaced). The widely-adopted workaround is to study the discretized counterpart instead, e.g. the temporal data is aggregated into bins and then treated as equally-spaced, with the hope to approximate the temporal signal using the sequence information. It is perhaps without surprise, as we show in Claim 1, that even for regular temporal sequence the discretization modifies the spectral structure. The gap can only be amplified for irregular data, so discretizing the temporal information will almost always
∗The work was done when the author was with Walmart Labs.
introduce intractable noise and perturbations, which emphasizes the importance to characterize the continuous-time information directly. Previous efforts to incorporate temporal information for deep learning include concatenating the time or timespan to feature vector (Choi et al., 2016; Lipton et al., 2016; Li et al., 2017b), learning the generative model of time series as missing data problems (Soleimani et al., 2017; Futoma et al., 2017), characterizing the representation of time (Xu et al., 2019; 2020; Du et al., 2016) and using neural point process (Mei & Eisner, 2017; Li et al., 2018). While they provide different tools to expand neural networks to coupe with time, the underlying continuous-time system and process are involved explicitly or implicitly. As a consequence, it remains unknown in what way and to what extend are the continuous-time signals interacting with the original deep learning model. Explicitly characterizing the continuous-time system (via differential equations), on the other hand, is the major pursuit of classical signal processing methods such as smoothing and filtering (Doucet & Johansen, 2009; Särkkä, 2013). The lack of connections is partly due to the compatibility issues between the signal processing methods and the auto-differential gradient computation framework of modern deep learning. Generally speaking, for continuous-time systems, model learning and parameter estimation often rely on the more complicated differential equation solvers (Raissi & Karniadakis, 2018; Raissi et al., 2018a). Although the intersection of neural network and differential equations is gaining popularity in recent years, the combined neural differential methods often require involved modifications to both the modelling part and implementation detail (Chen et al., 2018; Baydin et al., 2017).
Inspired by the recent advancement in understanding neural network with Gaussian process and the neural tangent kernel (Yang, 2019; Jacot et al., 2018), we discover a natural connection between the continuous-time system and the neural Gaussian process after composing with a temporal kernel. The significance of the temporal kernel is that it fills in the gap between signal processing and deep learning: we can explicitly characterize the continuous-time systems while maintaining the usual deep learning architectures and optimization procedures. While the kernel composition is also known for integrating signals from various domains (Shawe-Taylor et al., 2004), we face the additional complication of characterizing and learning the unknown temporal kernel in a data-adaptive fashion. Unlike the existing kernel learning methods where at least the parametric form of the kernel is given (Wilson et al., 2016), we have little context on the temporal kernel, and aggressively assuming the parametric form will risk altering the temporal structures implicitly just like discretization. Instead, we leverage the Bochner’s theorem and its extension (Bochner, 1948; Yaglom, 1987) to first covert the kernel learning problem to the more reasonable spectral domain where we can direct characterize the spectral properties with random (Fourier) features. Representing the temporal kernel by random features is favorable as we show they preserve the existing Gaussian process and NTK properties of neural networks. This is desired from the deep learning’s perspective since our approach will not violate the current understandings of deep learning. Then we apply the reparametrization trick (Kingma & Welling, 2013), which is a standard tool for generative models and Bayesian deep learning, to jointly optimize the spectral density estimator. Furthermore, we provide theoretical guarantees for the random-feature-based kernel learning approach when the temporal kernel is non-stationary, and the spectral density estimator is misspecified. These two scenarios are essential for practical usage but have not been studied in the previous literature. Finally, we conduct simulations and experiments on real-world continuous-time sequence data to show the effectiveness of the temporal kernel approach, which significantly improves the performance of both standard neural architectures and complicated domain-specific models. We summarize our contributions as follow.
• We study a novel connection between the continuous-time system and neural network via the composition with a temporal kernel.
• We propose an efficient kernel learning method based on random feature representation, spectral density estimation and reparameterization, and provide strong theoretical guarantees when the kernel is nonstationary and the spectral density is misspeficied.
• We analyze the empirical performance of our temporal kernel approach for both the standard and domain-specific deep learning models through real-data simulation and experiments.
2 NOTATIONS AND BACKGROUND
We use bold-font letters to denote vectors and matrices. We use xt and (x, t) interchangeably to denote a time-sensitive event occurred at time t, with t ∈ T ≡ [0, tmax]. Neural networks are denoted
by f(θ,x), where x ∈ X ⊂ Rd is the input where diameter(X ) ≤ l, and the network parameters θ are sampled i.i.d from the standard normal distribution at initialization. Without loss of generality, we study the standard L-layer feedforward neural network with its output at the hth hidden layer given by f (h) ∈ Rdh . We use and (t) to denote Gaussian noise and continuous-time Gaussian noise process. By convention, we use ⊗ and ◦ and to represent the tensor and outer product.
2.1 UNDERSTANDING THE STANDARD NEURAL NETWORK
We follow the settings from Jacot et al. (2018); Yang (2019) to briefly illustrate the limiting Gaussian behavior of f(θ,x) at initialization, and its training trajectory under weak optimization. As d1, . . . , dL → ∞, f (h) tend in law to i.i.d Gaussian processes with covariance Σh ∈ Rdh×dh : f (h) ∼ N(0,Σh), which we refer to as the neural network kernel to distinguish from the other kernel notions. Also, given a training dataset {xi, yi}ni=1, let f ( θ(s) ) = ( f(θ(s),x1), . . . , f(θ (s),xn) ) be the network outputs at the sth training step and y = (y1, . . . , yn). Using the squared loss for example, when training with infinitesimal learning rate, the outputs follows: df ( θ(s) ) /ds =
−Θ(s)× (f ( θ(s) ) − y), where Θ(s) is the neural tangent kernel (NTK). The detailed formulation
of Σh and Θ(s) are provided in Appendix A.2. We introduce the two concepts here because:
1. instead of incorporating time to f(θ,x), which is then subject to its specific structures, can we alternatively consider an universal approach which expands the Σh to the temporal domain such as by composing it with a time-aware kernel?
2. When jointly optimize the unknown temporal kernel and the model parameters, how can we preserve the results on the training trajectory with NTK?
In our paper, we show that both goals are achieved by representing a temporal kernel via random features.
2.2 DIFFERENCE BETWEEN CONTINUOUS-TIME AND ITS DISCRETIZATION
We now discuss the gap between continuous-time process and its equally-spaced discretization. We study the simple univariate continuous-time system f(t):
d2f(t) dt2 + a0 df(t) dt + a1f(t) = b0 (t). (1)
A discretization with a fixed interval is then given by: f[i] = f(i× interval) for i = 1, 2, . . .. Notice that f(t) is a second-order auto-regressive process, so both f(t) and f[i] are stationary. Recall that the covariance function for a stationary process is given by k(t) := cov ( f(t0), f(t0 + t) ) , and the
spectral density function (SDF) is defined as s(ω) = ∫∞ −∞ exp(−iωt)k(t)dt.
Claim 1. The spectral density function for f(t) and f[i] are different.
The proof is relegated to Appendix A.2.2. The key takeaway from the example is that the spectral density function, which characterizes the signal on the frequency domain, is altered implicitly even by regular discretization in this simple case. Hence, we should be cautious about the potential impact of the modelling assumption, which eventually motivates us to explicitly model the spectral distribution.
3 METHODOLOGY
We first explain our intuition using the above example. If we take the Fourier transform on (1) and rearrange terms, it becomes: f̃(iω) = ( b0
(iω)2 + a0(iω) + a1
) ̃(iω), where f̃(iω) and ̃(iω)
are the Fourier transform of f(t) and (t). Note that the spectral density of a Gaussian noise process is constant, i.e. |̃(iω)|2 = p0, so the spectral density of f(t) is given by: sθT (ω) = p0 ∣∣b0/((iω)2 + a0(iω) + a1)∣∣2, where we use θT = [a0, a1, b0] to denote the parameters of the linear dynamic system defined in (1). The subscript T is added to distinguish from the parameters of the neural network. The classical Wiener-Khinchin theorem (Wiener et al., 1930) states that the
covariance function for f(t), which is a Gaussian process since the linear differential equation is a linear operation on (t), is given by the inverse Fourier transform of the spectral density:
KT (t, t ′) := kθT (t ′ − t) = 1 2π
∫ sθT (ω) exp(iωt)dω, (2)
We defer the discussions on the inverse direction, that given a kernel kθT (t ′− t) we can also construct a continuous-time system, to Appendix A.3.1. Consequently, there is a correspondence between the parameterization of a stochastic ODE and the kernel of a Gaussian process. The mapping is not necessarily one-to-one; however, it may lead to a more convenient way to parameterize a continuoustime process alternatively using deep learning models, especially knowing the connections between neural network and Gaussian process (which we highlighted in Section 2.1).
To connect the neural network kernel Σ(h) (e.g. for the hth layer of the FFN) to a continuous-time system, the critical step is to understand the interplay between the neural network kernel and the temporal kernel (e.g. the kernel in (2)):
• the neural network kernel characterizes the covariance structures among the hidden representation of data (transformed by the neural network) at any fixed time point;
• the temporal kernel, which corresponds to some continuous-time system, tells how each static neural network kernel propagates forward in time. See Figure 1a for a visual illustration.
Continuing with Example 1, it is straightforward construct the integrated continuous-time system as:
a2(x) d2f(x, t)
dt2 +a1(x) df(x, t) dt
+a0(x)f(x, t) = b0(x) (x, t), (x, t = t0) ∼ N(0,Σ(h), ∀t0 ∈ T , (3)
where we use the neural network kernel Σ(h) to define the Gaussian process (x, t) on the feature dimension, so the ODE parameters are now functions of the data as well. To see that (3) generalizes the hth layer of a FFN to the temporal domain, we first consider a2(x) = a1(x) = 0 and a0(x) = b0(x). Then the continuous-time process f(x, t) exactly follows f (h) at any fixed time point t, and its trajectory on the time axis is simply a Gaussian process. When a1(x), a2(x) 6= 0, f(x, t) still matches f (h) at the initial point, but its propagation on the time axis becomes nontrivial and is now characterized by the constructed continuous-time system. We can easily the setting to incorporate higher-order terms:
an(x) dnf(x, t)
dtn + · · ·+ a0(x)f(x, t) = bm(x) dm (x, t) dtm + · · ·+ b0(x) (x, t). (4)
Keeping the heuristics in mind, an immediate question is what is the structure of the corresponding kernel function after we combine the continuous-time system with the neural network kernel?
Claim 2. The kernel function for f(x, t) in (4) is given by: Σ(h)T (x, t; x ′, t′) = kθT (x, t; x ′, t′) · Σ(h)(x,x′), where θT is the underlying parameterization of { ai(·) }n i=1 and { bi(·) }m i=1 as functions
of x. When { ai }n i=1 and { bi }m i=1 are scalars, Σ(h)T (x, t; x ′, t′) = kθT (t, t ′) ·Σ(h)(x,x′).
We defer the proof and the discussion on the inverse direction from temporal kernel to continuoustime system to Appendix A.3.1. Claim 2 shows that it is possible to expand any layer of a standard neural network to the temporal domain, as part of a continuous-time system using kernel composition. The composition is flexible and can happen at any hidden layer. In particular, given the temporal kernel KT and neural network kernel Σ(h), we obtain the neural-temporal kernel on X × T : Σ
(h) T = diag ( Σ(h) ⊗KT ) , where diag(·) is the partial diagonalization operation on X :
Σ (h) T (x, t; x ′, t′) = Σ(h)(x,x′) ·KT (x, t; x′, t′). (5) The above argument shows that instead of taking care of both the deep learning and continuous-time system, which remains challenging for general architectures, we can convert the problem to finding a suitable temporal kernel. We further point out that when using neural networks, we are parameterizing the hidden representation (feature lift) in the feature space rather than the kernel function in the kernel space. Therefore, to give a consistent characterization, we should also study the feature representation of the temporal kernel and then combine it with the hidden representations of the neural network.
3.1 THE RANDOM FEATURE REPRESENTATION FOR TEMPORAL KERNEL
We start by considering the simpler case where the temporal kernel is stationary and independent of features: KT (t, t′) = k(t′ − t), for some properly scaled positive even function k(·). The classical Bochner’s theorem (Bochner, 1948) states that:
ψ(t′ − t) = ∫ R e−i(t ′−t)ωds(ω), for some probability density function s on R, (6)
where s(·) is the spectral density function we highlighted in Section 2.2. To compute the integral, we may sample (ω1, . . . , ωm) from s(ω) and use the Monte Carlo method: ψ(t′ − t) ≈ 1
m
∑m i=1 e −i(t′−t)ω . Since e−i(t ′−t)ᵀω = cos ( (t′− t)ω ) − i sin ( (t′− t)ω ) , for the real part, we let:
φ(t) = 1√ m
[ cos(tω1), sin(tω1) . . . , cos(tωm), sin(tωm) ] , (7)
and it is easy to check that ψ(t′− t) ≈ 〈φ(t),φ(t′)〉. Since φ(t) is constructed from random samples, we refer to it as the random feature representation of KT . Random feature has been extensive studied in the kernel machine literature, however, we propose a novel application for random features to parametrize an unknown kernel function. A straightforward idea is to parametrize the spectral density function s(ω), whose pivotal role has been highlighted in Section 2.2 and Example 1.
Suppose θT is the distribution parameters for s(ω). Then φθT (t) is also (implicitly) parameterized by θT through the samples { ω(θT )i }m i=1
from s(ω). The idea resembles the reparameterization trick for training variational objectives (Kingma & Welling, 2013), which we formalize in the next section. For now, it remains unknown if we can also obtain the random feature representation for nonstationary kernels where Bochner’s theorem is not applicable. Note that for a general temporal kernel KT (x, t; x
′, t′), in practice, it is not reasonable to assume stationarity specially for the feature domain. In Proposition 1, we provide a rigorous result that generalizes the random feature representation for nonstationary kernels with convergence guarantee. Proposition 1. For any (scaled) continuous non-stationary PDS kernel KT on X × T , there exists a joint probability measure with spectral density function s(ω1,ω2), such that KT ( (x, t), (x′, t′) ) =
Es(ω1,ω2) [ φ(x, t)ᵀφ(x′, t′) ] where φ(x, t) is given by:
1
2 √ m
[ . . . , cos ( [x, t]ᵀω1,i ) + cos ( [x, t]ᵀω2,i ) , sin ( [x, t]ᵀω1,i ) + sin ( [x, t]ᵀω2,i ) . . . ] , (8)
Here, { (ω1,i,ω2,i) }m i=1
are the m samples from s(ω1,ω2). When the sample size m ≥ 8(d+1) ε2 log ( C(d) ( l2t2maxσp/ε ) 2d+2 d+3 /δ ) , with probability at least 1− δ, for any ε > 0,
sup (x,t),(x′,t′)
∣∣∣KT ((x, t), (x′, t′))− φ(x, t)ᵀφ(x′, t′)∣∣∣ ≤ ε, (9)
where σ2p is the second moment of the spectral density function s(ω1,ω2) and C(d) is a constant.
We defer the proof to Appendix A.3.2. It is obvious that the new random feature representation in (8) is a generalization of the stationary setting. There are two advantages for using the random feature representation:
• the composition in the kernel space suggested by (5) is equivalent to the computationally efficient operation f (h)(x) ◦ φ(x, t) in the feature space (Shawe-Taylor et al., 2004);
• we preserve a similar Gaussian process behavior and the neural tangent kernel results that we discussed in Section 2.1, and we defer the discussion and proof to Appendix A.3.3.
In the forward-passing computations, we simply replace the original hidden representation f (h)(x) by the time-aware representation f (h)(x) ◦ φ(x, t). Also, the existing methods and results on analyzing neural networks though Gaussian process and NTK, though not emphasized in this paper, can be directly carried out to the temporal setting as well (see Appendix A.3.3).
3.2 REPARAMETERIZATION WITH THE SPECTRAL DENSITY FUNCTION
We now present the gradient computation for the parameters of the spectral distribution using only their samples. We start from the well-studied case where s(ω) is given by a normal distribution N(µ,Σ) with parameters θT = [µ,Σ]. When computing the gradients of θT , instead of sampling from the intractable distribution s(ω), we reparameterize each sample ωi via: Σ1/2 + µ, where is sampled from a standard multivariate normal distribution. The gradient computations that relied on ω is now replaced using the easy-to-sample and θT = [µ,Σ] now become tractable parameters in the model given . We illustrate reparameterization in our setting in the following example. Example 1. Consider a single-dimension homogeneous linear model: f(θ, x) = f (0)(x) = θx. Without loss of generality, we use only a single sample ω1 from the s(ω) which corresponds to the feature-independent temporal kernel kθ(t, t′). Again, we assume s(ω) ∼ N(µ, σ). Then the time-aware hidden representation for this layer for datapoint (x1, t1) is given by:
f (0) θ,µ,σ(x1, t1) = 1/ √ 2 [ θx1 cos(t1ω1), θx1 sin(t1ω1) ] , ω1 ∼ N(µ, σ).
Using the reparameterization, given a sample 1 from the standard normal distribution, we have:
f (0) θ,µ,σ(x1, t1) = 1/ √ 2 [ θx1 cos ( t1(σ 1/2 1 + µ) ) , θx1 sin ( t1(σ 1/2 1 + µ) )] , (10)
so the gradients with respect to all the parameters (θ, µ, σ) can be computed in the usual way.
Despite the computation advantage, the spectral density is now learnt from the data instead of being given so the convergence result in Proposition 1 does not provide sample-consistency guarantee. In practice, we may also misspecify the spectral distribution and bring extra intractable factors.
To provide practical guarantees, we first introduce several notations: let KT (S) be the temporal kernel represented by random features such that KT (S) = E [ φᵀφ ] , where the expectation is taken with respect to the data distribution and the random feature vector φ has its samples {ωi}mi=1 drawn from the spectral distribution S. Without abuse of notation, we use φ ∼ S to denote the dependency of the random feature vector φ on the spectral distribution S provided in (8). Given a neural network kernel Σ(h), the neural temporal kernel is then denoted by: Σ(h)T (S) = Σ
(h) ⊗KT (S). So the sample version of Σ(h)T (S) for the dataset { (xi, ti) }n i=1 is given by:
Σ̂ (h) T (S) =
1 n(n− 1) ∑ i 6=j Σ(h)(xi,xj)φ(xi, ti) ᵀφ(xj , tj), φ ∼ S. (11)
If the spectral distribution S is fixed and given, then using standard techniques and Theorem 1 it is straightforward to show limn→∞ Σ̂ (h) T (S) → E [ Σ̂ (h) T (S) ] so the proposed learning schema is sample-consistent.
In our case, the spectral distribution is learnt from the data, so we need some restrictions on the spectral distribution in order to obtain any consistency guarantee. The intuition is that if SθT does not diverge from the true S, e.g. d(SθT ‖S) ≤ δ for some divergence measure, the guarantee on S can transfer to SθT with the rate only suffers a discount that does not depend on n.
Theorem 1. Consider the f-divergence such that d(SθT ‖S) = ∫ c ( dSθT /dS ) dS, with the generator
function c(x) = xk − 1 for any k > 0. Given the neural network kernel Σh, let M = ∥∥Σh∥∥∞, then
Pr (
sup d(SθT ||S)≤δ ∣∣Σ̂(h)T (SθT )→ E[Σ̂(h)T (SθT )∣∣ ≥ ε) ≤ √2 exp( −nε264 max{4,M}(δ + 1))+ C(ε), (12)
where C(ε) ∝ (
2l2t2maxσSθT /max{4,M}
) 2d+2 d+3 exp ( − dh 2
32 max{16,M2}(d+3)
) that does not depend on δ.
The proof is provided in Appendix A.3.4. The key takeaway from (12) is that as long as the divergence between the learnt SθT and the true spectral distribution is bounded, we still achieve sample consistency. Therefore, instead of specifying a distribution family which is more likely to suffer from misspecification, we are motivated to employ some universal distribution approximator such as the invertible neural network (INN) (Ardizzone et al., 2018). INN consists of a series of invertible operations that transform samples from a known auxiliary distribution (such as normal distribution) to arbitrarily complex distributions. The Jacobian that characterize the changes of distributions are made invertible by the INN, so the gradient flow is computationally tractable similar to the case in Example 1. We defer the detailed discussions to Appendix A.3.5. Remark 1. It is clear at this point that the temporal kernel approach applies to all the neural networks who have a Gaussian process behavior with a valid neural network kernel, which includes the major architectures such as CNN, RNN and attention mechanism (Yang, 2019).
For implementation, at each forward and backward computation, we first sample from the auxiliary distribution to construct the random feature representation φ using reparameterization, and then compose it with the selected hidden layer f (h) such as in (10). We illustrate the computation architecture in Figure 2, where we adapt the vanilla RNN to the proposed framework. In Algorithm 1, we provide the detailed forward and backward computations, using the L-layer FFN from the previous sections as an example.
4 RELATED WORK
The earliest work that discuss training continuous-time neural network dates back to LeCun et al. (1988); Pearlmutter (1995), but no feasible solution was proposed at that time. The proposed approach relates to several fields that are under active research.
ODE and neural network. Certain neural architecture such as the residual network has been interpreted as approximate ODE solvers (Lu et al., 2018). More direct approaches have been proposed to learn differential equations from data (Raissi & Karniadakis, 2018; Raissi et al., 2018a;
Long et al., 2018), and significant efforts have been spent on developing solvers that combine ODE and the back-propagation framework (Farrell et al., 2013; Carpenter et al., 2015; Chen et al., 2018). The closest literature to our work is from Raissi et al. (2018b) who design numerical Gaussian process resulting from temporal discretization of time-dependent partial differential equations.
Random feature and kernel machine learning. In supervised learning, the kernel trick provides a powerful tool to characterize non-linear data representations (Shawe-Taylor et al., 2004), but the computation complexity is overwhelming for large dataset. The random (Fourier) feature approach proposed by Rahimi & Recht (2008) provides substantial computation benefits. The existing literature on analyzing the random feature approach all assume the kernel function is fixed and stationary (Yang et al., 2012; Sutherland & Schneider, 2015; Sriperumbudur & Szabó, 2015; Avron et al., 2017).
Reparameterization and INN. Computing the gradient for intractable objectives using samples from auxiliary distribution dates back to the policy gradient method in reinforcement learning (Sutton et al., 2000). In recent years, the approach gains popularity for training generative models (Kingma & Welling, 2013), other variational objectives (Blei et al., 2017) and Bayesian neural networks (Snoek et al., 2015). INN are often employed to parameterize the normalizing flow that transforms a simple distribution into a complex one by applying a sequence of invertible transformation functions (Dinh et al., 2014; Ardizzone et al., 2018; Kingma & Dhariwal, 2018; Dinh et al., 2016).
Our approach characterizes the continuous-time ODE via the lens of kernel. It complements the existing neural ODE methods which are often restricted to specific architectures, relying on ODE solvers and lacking theoretical understandings. We also propose a novel deep kernel learning approach by parameterizing the spectral distribution under random feature representation, which is conceptually different from using temporal kernel for time-series classification (Li & Marlin, 2015). Our work is a extension of Xu et al. (2019; 2020), which study the case for self-attention mechanism.
Algorithm 1: Forward pass and parameter update, using the L-layer FFN as an example. Input: The FFN f(θ, ·) = {f (1)(θ, ·), . . . , f (L)(θ, ·)}; the invertible neural network g(ψ, ·);
the selected hidden layer h; the loss `i associated with each input (xi, ti); the auxilliary distribution P .
for each mini-batch do Sample { 1,j , 2,j }m j=1
from the auxilliary distribution P ; Compute the reparameterized samples ω using the INN g(ψ, ·), e.g. ω1,j(ψ) := g(ψ, 1,j); for sample i in the batch do
Construct the random feature representation φψ(xi, ti) using the reparameterized samples (so φ is now explicitly parameterized by ψ) according to eq. (8);
Forward pass: get f (h)(θ,xi), let f (h) ( (θ,ψ),xi, ti ) := f (h)(θ,xi) ◦ φψ(xi, ti),
then pass it to the following feedforward layers to obtain the final output ŷi ; Gradient computation: compute the gradients∇θ`i(ŷi) ∣∣ ,∇ψ`i(ŷi) ∣∣
for the FFN and INN respectively, conditioned on the samples from the auxiliary distribution;
end Update the parameters using the selected optimizer in a standard batch-wise fashion.
end
It is straightforward from Figure 2 and Algorithm 1 that the proposed approach serves as a plug-in module and do not modify the original network structures of the RNN and FFN.
5 EXPERIMENTS AND RESULTS
We focus on revealing the two major advantages of the proposed temporal kernel approach:
• the temporal kernel approach consistently improves the performance of deep learning models, both for the general architectures such as RNN, CausalCNN and attention mechanism as well as the domain-specific architectures, in the presence of continuous-time information;
• the improvement is not at the cost of computation efficiency and stability, and we outperform the alternative approaches who also applies to general deep learning models.
We point out that the neural point process and the ODE neural networks have only been shown to work for certain model architectures so we are unable to compare with them for all the settings.
Time series prediction with standard neural networks (real-data and simulation)
We conduct time series prediction task using the vanilla RNN, CausalCNN and self-attention mechanism with our temporal kernel approach (Figure A.1). We choose the classical Jena weather data for temperature prediction, and the Wikipedia traffic data to predict the number of visits of Wikipedia pages. Both datasets have vectorized features and are regular-sampled. To illustrate the advantage of leveraging the temporal information compared with using only sequential information, we first conduct the ordinary next-step prediction on the regular observations, which we refer to as Case1. To fully illustrate our capability of handling the irregular continuous-time information, we consider the two simulation setting that generate irregular continuous-time sequences for prediction:
Case2. we sample irregularly from the history, i.e. xt1 , . . . ,xtq , q ≤ k, to predict xtk+1 ; Case3. we use the full history to predict a dynamic future point, i.e. xtk+q for a random q.
We provide the complete data description, preprocessing, and implementation in Appendix B. We use the following two widely-adopted time-aware modifications for neural networks (denote by NN) as baselines, as well as the classical vectorized autoregression model (VAR). NN+time: we directly concatenate the timespan, e.g. tj − ti, to the feature vector. NN+trigo: we concatenate the learnable sine and cosine features, e.g. [sin(π1t), . . . , sin(πkt)], to the feature vector, where {πi}ki=1 are free model parameters. We denote our temporal kernel approach by T-NN. From Figure 3, we see that the temporal kernel outperforms the baselines in all cases when the time series is irregularly sampled (Case2 and Case3), suggesting the effectiveness of the temporal kernel approach in capturing and utilizing the continuous-time signals. Even for the regular Case1 reported in Table A.1, the temporal kernel approach gives the best results, which again emphasizes the advantage of directly characterize the temporal information over discretization. We also show in the ablation studies (Appendix B.5) that INN is necessary for achieving superior performance compared with specifying a distribution family. To demonstrate the stability and robustness, we provide sensitivity analysis in Appendix B.6 for model selection and INN structures.
Temporal sequence learning with complex domain models
Now we study the performance of our temporal kernel approach on the sequential recommendation task with more complicated domain-specific two-tower architectures (Appendix B.2). Temporal information is known to be critical for understanding customer intentions, so we choose two public e-commerce datasets from Alibaba and Walmart.com, and examine next-purchase recommendation. To illustrate our flexibility, we select GRU-based, CNN-based and attention-based recommendation models from the recommender system domain (Hidasi et al., 2015; Li et al., 2017a) and equip them with the temporal kernel. The detailed settings, ablation studies and sensitivity analyses are all in Appendix B. The results are shown in Table A.2. We observe that the temporal kernel approach brings various degrees of improvement to the recommendation models by characterizing the continuous-time information. The positive results on the recommendation task also suggest the potential of our approach for making an impact in broader domains.
6 DISCUSSION
In this paper, we discuss the insufficiency of existing work on characterizing continuous-time data with deep learning models and describe a principled temporal kernel approach that expands neural networks to characterize continuous-time data. The proposed learning approach has strong theoretical guarantees, and can be easily adapted to a broad range of applications such as deep spatial-temporal modelling, outlier and burst detection, and generative modelling for time series data.
Scope and limitation. Although the temporal kernel approach is motivated by the limiting-width Gaussian behavior of neural networks, in practice it suffices to use regular widths, as we did in our experiments (see Appendix B.2 for the configurations). Therefore, there are still gaps between our theoretical understanding and the observed empirical performance, which require more dedicated analysis. One possible direction is to apply the techniques in Daniely et al. (2016) to characterize the dual kernel view of finite-width neural networks; the technical details, however, will be more involved. It is also arguably true that we build the connection between the temporal kernel view and continuous-time systems in an indirect fashion, compared with the ODE neural networks. However, our approach is fully compatible with the deep learning subroutines, while the end-to-end ODE neural networks require substantial modifications to the modelling and implementation. Nevertheless, ODE neural networks are (in theory) capable of modelling more complex systems of which the continuous-time setting is a special case. Our work, on the other hand, is dedicated to the temporal setting.
A APPENDIX
We provide the omitted proofs, detailed discussions, extensions and complete numerical results.
A.1 NUMERICAL RESULTS FOR SECTION 5
A.2 SUPPLEMENTARY MATERIAL FOR SECTION 2
We discuss the detailed background for the Gaussian process behavior of neural networks and the training trajectory under the neural tangent kernel, as well as the proof of Claim 1.
A.2.1 GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL FOR DEEP LEARNING MODELS
The Gaussian process (GP) view of neural networks at random initialization was originally discussed in (Neal, 2012). Recently, CNN and other standard neural architectures have all been recognized as functions drawn from GP in the limit of infinite network width (Novak et al., 2018; Yang, 2019). When trained by gradient descent under infinitesimal step schedule, the gradient flow of the standard neural architectures can be described by the notion of Neural Tangent Kernel (NTK) whose asymptotic behavior under infinite network width is known (Jacot et al., 2018). The discovery of NTK has led to several papers studying the training and generalization properties of neural networks (Allen-Zhu et al., 2019; Arora et al., 2019a;b).
For an L-layer FNN $f(\theta, \mathbf{x}) = f^{(L)}$ with hidden dimensions $\{d_h\}_{h=1}^L$, defined recursively via:
$$f^{(L)} = W^{(L)} f^{(L-1)}(\mathbf{x}) + b^{(L)}, \quad f^{(h)}(\mathbf{x}) = \frac{1}{\sqrt{d_h}} W^{(h)} \sigma\big(f^{(h-1)}(\mathbf{x})\big) + b^{(h)}, \quad f^{(0)}(\mathbf{x}) = \mathbf{x}, \tag{A.1}$$
for $h = 1, 2, \ldots, L-1$, where $\sigma(\cdot)$ is the activation function and the layer weights $W^{(L)} \in \mathbb{R}^{d_{L-1}}$, $W^{(h)} \in \mathbb{R}^{d_{h-1} \times d_h}$ and intercepts are initialized by sampling independently from $\mathcal{N}(0, 1)$ (without loss of generality). As $d_1, \ldots, d_L \to \infty$, $f^{(h)}$ tends in law to i.i.d. Gaussian processes with covariance $\Sigma^{(h)}$ defined recursively, as shown by Neal (2012):
$$\Sigma^{(1)}(\mathbf{x}, \mathbf{x}') = \frac{1}{d_1} \mathbf{x}^\top \mathbf{x}' + 1, \qquad \Sigma^{(h)}(\mathbf{x}, \mathbf{x}') = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(h-1)})}\big[\sigma\big(f(\mathbf{x})\big)\, \sigma\big(f(\mathbf{x}')\big)\big] + 1. \tag{A.2}$$
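To illustrate the recursion (A.2), the sketch below estimates $\Sigma^{(h)}(\mathbf{x}, \mathbf{x}')$ by Monte Carlo over the Gaussian expectation; the ReLU activation, the sample size, and taking the input dimension for the base-case normalization are assumptions made for the example.

```python
import numpy as np

def nngp_kernel(x, xp, depth, num_samples=200_000, seed=0):
    """Monte Carlo estimate of the covariance recursion (A.2) with a ReLU activation."""
    rng = np.random.default_rng(seed)
    relu = lambda u: np.maximum(u, 0.0)
    d = len(x)  # base-case normalization; assumed here for illustration
    kxx, kpp, kxp = x @ x / d + 1.0, xp @ xp / d + 1.0, x @ xp / d + 1.0
    for _ in range(depth - 1):
        # Draw (f(x), f(x')) ~ N(0, Sigma^{(h-1)}) and average sigma(f(x)) * sigma(f(x')).
        cov = np.array([[kxx, kxp], [kxp, kpp]])
        f = rng.multivariate_normal(np.zeros(2), cov, size=num_samples)
        kxx = np.mean(relu(f[:, 0]) ** 2) + 1.0
        kpp = np.mean(relu(f[:, 1]) ** 2) + 1.0
        kxp = np.mean(relu(f[:, 0]) * relu(f[:, 1])) + 1.0
    return kxp
```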
We also refer to $\Sigma^{(h)}$ as the neural network kernel, to distinguish it from the other kernel notions. Given a training dataset $\{\mathbf{x}_i, y_i\}_{i=1}^n$, let $f\big(\theta(s)\big) = \big( f(\theta(s), \mathbf{x}_1), \ldots, f(\theta(s), \mathbf{x}_n) \big)$ be the network outputs at the s-th training step and $\mathbf{y} = (y_1, \ldots, y_n)$.
When training the network by minimizing the squared loss $\ell(\theta)$ with an infinitesimal learning rate, i.e. $\frac{d\theta(s)}{ds} = -\nabla \ell(\theta(s))$, the network outputs at training step s follow the evolution (Jacot et al., 2018):
$$\frac{d f\big(\theta(s)\big)}{ds} = -\Theta(s)\, \big( f\big(\theta(s)\big) - \mathbf{y} \big), \qquad \big[ \Theta(s) \big]_{ij} = \Big\langle \frac{\partial f(\theta(s), \mathbf{x}_i)}{\partial \theta}, \frac{\partial f(\theta(s), \mathbf{x}_j)}{\partial \theta} \Big\rangle. \tag{A.3}$$
The above $\Theta(s)$ is referred to as the NTK, and recent results show that when the network widths go to infinity (or are sufficiently large), $\Theta(s)$ converges to a fixed $\Theta_0$ almost surely (or with high probability).
For a standard L-layer FFN, the NTK $\Theta_0 = \Theta_0^{(L)}$ for parameters $\{W^{(h)}, b^{(h)}\}$ on the h-th layer can also be computed recursively:
$$\Theta_0^{(h)}(\mathbf{x}_i, \mathbf{x}_j) = \Sigma^{(h)}(\mathbf{x}_i, \mathbf{x}_j), \qquad \dot{\Sigma}^{(k)}(\mathbf{x}_i, \mathbf{x}_j) = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(k-1)})}\big[ \dot\sigma(f(\mathbf{x}_i))\, \dot\sigma(f(\mathbf{x}_j)) \big],$$
$$\text{and} \quad \Theta_0^{(k)}(\mathbf{x}_i, \mathbf{x}_j) = \Theta_0^{(k-1)}(\mathbf{x}_i, \mathbf{x}_j)\, \dot{\Sigma}^{(k)}(\mathbf{x}_i, \mathbf{x}_j) + \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}_j), \quad k = h+1, \ldots, L. \tag{A.4}$$
A number of optimization and generalization properties of neural networks can be studied using the NTK; we refer the interested reader to (Lee et al., 2019; Allen-Zhu et al., 2019; Arora et al., 2019a;b). We also point out that the above GP and NTK constructions can be carried out for all standard neural architectures, including CNN, RNN and the attention mechanism (Yang, 2019).
A.2.2 PROOF FOR CLAIM 1
In this part, we denote the continuous-time system by X(t) in order to introduce the full set of notations needed for our proof, where the length of the discretization interval is explicitly considered. Note that we deliberately constructed the example in Section 2 so that the derivations are not too cumbersome; the techniques used here, however, extend to more complicated settings.
Proof. Consider X(t) to be the second-order continuous-time autoregressive process with covariance function k(t) and spectral density function (SDF) s(ω) such that $s(\omega) = \int_{-\infty}^{\infty} \exp(-i\omega t)\, k(t)\, dt$. The covariance function of the discretization $X_a[n] = X(na)$ with any fixed interval $a > 0$ is then given by $k_a[n] = k(na)$. According to standard results in time series, the SDF of X(t) is given in the form of:
$$s(\omega) = \frac{a_1}{\omega^2 + b_1^2} + \frac{a_2}{\omega^2 + b_2^2}, \qquad a_1 + a_2 = 0, \quad a_1 b_2^2 + a_2 b_1^2 \neq 0. \tag{A.5}$$
We assume without loss of generality that b1, b2 are positive numbers. Note that the kernel function for Xa[i] can also be given by
$$k_a[n] = \int_{-\infty}^{\infty} \exp(ian\omega)\, s(\omega)\, d\omega = \frac{1}{a} \sum_{k=-\infty}^{\infty} \int_{(2k-1)\pi}^{(2k+1)\pi} \exp(in\omega)\, s(\omega/a)\, d\omega = \frac{1}{a} \int_{-\infty}^{\infty} \exp(in\omega) \sum_{k=-\infty}^{\infty} s\Big( \frac{\omega + 2k\pi}{a} \Big)\, d\omega, \tag{A.6}$$
which suggests that the SDF for the discretization Xa[n] can be given by:
$$s_a(\omega) = \frac{1}{a} \sum_{k=-\infty}^{\infty} s\Big( \frac{\omega + 2k\pi}{a} \Big) = \frac{a_1}{2} \Big( \frac{e^{2ab_1} - 1}{b_1 |e^{ab_1} - e^{i\omega}|^2} - \frac{e^{2ab_2} - 1}{b_2 |e^{ab_2} - e^{i\omega}|^2} \Big) = \frac{a_1 (d_1 - 2 d_2 \cos\omega)}{2 b_1 b_2 |(e^{ab_1} - e^{i\omega})(e^{ab_2} - e^{i\omega})|^2}, \tag{A.7}$$
where $d_2 = b_2 e^{ab_2}(e^{2ab_1} - 1) - b_1 e^{ab_1}(e^{2ab_2} - 1)$. By the definition of a discrete-time autoregressive process, $X_a[n]$ is a second-order AR process only if $d_2 = 0$, which happens if and only if $b_2 / b_1 = (e^{ab_2} - e^{-ab_2}) / (e^{ab_1} - e^{-ab_1})$. However, since the function $g(x) = \exp(ax) - \exp(-ax)$ is strictly convex on $[0, \infty)$ (the time interval satisfies $a > 0$) with $g(0) = 0$, the ratio $g(x)/x$ is strictly monotone, so the above equality holds only if $b_1 = b_2$. This contradicts (A.5), since $a_1 + a_2 = 0$ and $a_1 b_2^2 + a_2 b_1^2 \neq 0$ together imply $a_1 (b_2^2 - b_1^2) \neq 0$, i.e., $b_1 \neq b_2$. Hence, $X_a[n]$ cannot be a second-order discrete-time autoregressive process.
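As a quick numerical illustration of the final step, the following sketch evaluates $d_2$ on a grid of $(b_1, b_2)$ and confirms that it vanishes only on the diagonal $b_1 = b_2$; the grid values are arbitrary.

```python
import numpy as np

def d2(a, b1, b2):
    """Cross term from (A.7); X_a[n] could only be second-order AR if this is zero."""
    return (b2 * np.exp(a * b2) * (np.exp(2 * a * b1) - 1)
            - b1 * np.exp(a * b1) * (np.exp(2 * a * b2) - 1))

a = 0.5
for b1 in (0.5, 1.0, 2.0):
    for b2 in (0.5, 1.0, 2.0):
        print(b1, b2, round(d2(a, b1, b2), 6))  # zero only when b1 == b2
```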
A.3 SUPPLEMENTARY MATERIAL FOR SECTION 3
We first present the related discussion and proof of Claim 2 on the connection between continuous-time systems and the temporal kernel. Then we prove the convergence result in Theorem 1 regarding the random feature representation for non-stationary kernels. In the sequel, we show the new results for the Gaussian process behavior and neural tangent kernel under random feature representations, and discuss the potential usage of our results. Finally, we prove the sample-consistency result when the spectral distribution is misspecified.
A.3.1 PROOF AND DISCUSSIONS FOR CLAIM 2
Proof. Recall that we study the dynamic system given by:
$$a_n(\mathbf{x}) \frac{d^n f(\mathbf{x}, t)}{dt^n} + \cdots + a_0(\mathbf{x}) f(\mathbf{x}, t) = b_m(\mathbf{x}) \frac{d^m \epsilon(\mathbf{x}, t)}{dt^m} + \cdots + b_0(\mathbf{x})\, \epsilon(\mathbf{x}, t), \tag{A.8}$$
where $\epsilon(\mathbf{x}, t = t_0) \sim \mathcal{N}(0, \Sigma^{(h)})$, $\forall t_0 \in \mathcal{T}$. The solution process of the above continuous-time system is also a Gaussian process, since $\epsilon(\mathbf{x}, t)$ is a Gaussian process and the solution of a linear differential equation is a linear operation on the input. For the sake of notation, we assume $b_0(\mathbf{x}) = 1$ and $b_1(\mathbf{x}) = 0, \ldots, b_m(\mathbf{x}) = 0$, which does not change the arguments in the proof. We apply the Fourier transform to both sides and solve for the Fourier transform $\tilde{f}(i\omega_{\mathbf{x}}, i\omega)$:
$$\tilde{f}(i\omega_{\mathbf{x}}, i\omega) = \Big( \frac{1}{a_n(\mathbf{x}) \cdot (i\omega)^n + \cdots + a_1(\mathbf{x}) \cdot i\omega + a_0(\mathbf{x})} \Big) W(i\omega; \omega_{\mathbf{x}}), \tag{A.9}$$
where $W(i\omega; \omega_{\mathbf{x}})$ is the Fourier transform of $\epsilon(\mathbf{x}, t)$. If we do not make the above assumption on $\{b_j(\mathbf{x})\}_{j=1}^m$, they simply show up in the numerator in the same fashion as the $\{a_j(\mathbf{x})\}_{j=1}^n$. Let $G_{\theta_T}(i\omega; \mathbf{x}) = a_n(\mathbf{x}) \cdot (i\omega)^n + \cdots + a_1(\mathbf{x}) \cdot i\omega + a_0(\mathbf{x})$, and let $p(\omega_{\mathbf{x}}) = |W(i\omega; \omega_{\mathbf{x}})|^2$ be the spectral density of the Gaussian process corresponding to $\epsilon$ (its spectral density does not depend on $\omega$ because $\epsilon$ is a Gaussian white noise process in the time dimension). The dependency of $G(\cdot\,; \cdot)$ on $\theta_T$ arises because we defined $\theta_T$ to be the underlying parameterization of $\{a_j(\cdot)\}_{j=1}^n$ in the statement of Claim 2. Then the spectral density of the process $f(\mathbf{x}, t)$ is given by
$$p(\omega, \omega_{\mathbf{x}}) = C \cdot \frac{p(\omega_{\mathbf{x}})}{|G_{\theta_T}(i\omega; \mathbf{x})|^2} \propto p(\omega_{\mathbf{x}})\, p_{\theta_T}(\omega; \mathbf{x}),$$
where $C$ is a constant corresponding to the spectral density of the random Gaussian noise in the time dimension. Notice that the spectral density function obtained this way is regular, since it has the form $p_{\theta_T}(\omega; \mathbf{x}) = \text{constant} / (\text{polynomial of } \omega^2)$.
Therefore, according to the classical Wiener–Khinchin theorem (Brockwell et al., 1991), the covariance function of the solution process is given by the inverse Fourier transform of the spectral density:
$$\psi(\mathbf{x}, t) = \frac{1}{2\pi} \int p(\omega, \omega_{\mathbf{x}}) \exp\big( i [\omega, \omega_{\mathbf{x}}]^\top [t, \mathbf{x}] \big)\, d(\omega, \omega_{\mathbf{x}}) \propto \int p_{\theta_T}(\omega; \mathbf{x}) \exp(i\omega t)\, d\omega \cdot \int p(\omega_{\mathbf{x}}) \exp(i \omega_{\mathbf{x}}^\top \mathbf{x})\, d\omega_{\mathbf{x}} \propto K_{\theta_T}\big( (\mathbf{x}, t), (\mathbf{x}, t) \big) \cdot \Sigma^{(h)}(\mathbf{x}, \mathbf{x}). \tag{A.10}$$
Therefore, we reach the conclusion in Claim 2 by taking $\Sigma^{(h)}_T(\mathbf{x}, t; \mathbf{x}', t') = \psi(\mathbf{x} - \mathbf{x}', t - t')$.
The converse of Claim 2 may not always hold, since not every neural-temporal kernel corresponds exactly to a continuous-time system of the form (A.8). However, we may construct a continuous-time system that approximates the kernel arbitrarily well in the following way, using polynomial approximation tools such as the Taylor expansion.
For a neural-temporal kernel $\Sigma^{(h)}_T$, we first compute its Fourier transform to obtain the spectral density $p(\omega_{\mathbf{x}}, \omega)$. Note that $p(\omega_{\mathbf{x}}, \omega)$ should be a rational function of the form (polynomial in $\omega^2$)/(polynomial in $\omega^2$), or otherwise it does not admit a stable spectral factorization that leads to a linear dynamic system. To achieve this, we can always apply a Taylor expansion or Padé approximants to recover $p(\omega_{\mathbf{x}}, \omega)$ arbitrarily well.
Then we conduct a spectral factorization of $p(\omega_{\mathbf{x}}, \omega)$ to find $G(i\omega_{\mathbf{x}}, i\omega)$ and $p(\omega_{\mathbf{x}})$ such that $p(\omega_{\mathbf{x}}, \omega) = G(i\omega_{\mathbf{x}}, i\omega)\, p(\omega_{\mathbf{x}})\, G(-i\omega_{\mathbf{x}}, -i\omega)$. Since $p(\omega_{\mathbf{x}}, \omega)$ is now a rational function of $\omega^2$, we can find $G(i\omega_{\mathbf{x}}, i\omega)$ as:
$$\frac{b_k(i\omega_{\mathbf{x}}) \cdot (i\omega)^k + \cdots + b_1(i\omega_{\mathbf{x}}) \cdot (i\omega) + b_0(i\omega_{\mathbf{x}})}{a_q(i\omega_{\mathbf{x}}) \cdot (i\omega)^q + \cdots + a_1(i\omega_{\mathbf{x}}) \cdot (i\omega) + a_0(i\omega_{\mathbf{x}})}.$$
Let $\alpha_j(\mathbf{x})$ and $\beta_j(\mathbf{x})$ be the pseudo-differential operators of $a_j(i\omega_{\mathbf{x}})$ and $b_j(i\omega_{\mathbf{x}})$, defined in terms of their inverse Fourier transforms (Shubin, 1987); then the corresponding continuous-time system is given by:
$$\alpha_q(\mathbf{x}) \frac{d^q f(\mathbf{x}, t)}{dt^q} + \cdots + \alpha_0(\mathbf{x}) f(\mathbf{x}, t) = \beta_k(\mathbf{x}) \frac{d^k \epsilon(t)}{dt^k} + \cdots + \beta_0(\mathbf{x})\, \epsilon(t). \tag{A.11}$$
For a concrete end-to-end example, we consider a simplified setting where the temporal kernel function is given by:
$$K_{\theta_T}(t, t') := k_{\theta_1, \theta_2, \theta_3}(t - t') = \theta_2^2\, \frac{2^{1 - \theta_1}}{\Gamma(\theta_1)} \Big( \sqrt{2\theta_1}\, \frac{|t - t'|}{\theta_3} \Big)^{\theta_1} B_{\theta_1}\Big( \sqrt{2\theta_1}\, \frac{|t - t'|}{\theta_3} \Big),$$
where $B_{\theta_1}(\cdot)$ is the Bessel function, so $K_{\theta_T}(t, t')$ belongs to the well-known Matérn family. It is straightforward to show that the spectral density function is given by:
$$s(\omega) \propto \Big( \frac{2\theta_1}{\theta_3^2} + \omega^2 \Big)^{-(\theta_1 + 1/2)}.$$
As a consequence, we see that $s(\omega) \propto \big( \frac{\sqrt{2\theta_1}}{\theta_3} + i\omega \big)^{-(\theta_1 + 1/2)} \big( \frac{\sqrt{2\theta_1}}{\theta_3} - i\omega \big)^{-(\theta_1 + 1/2)}$, so we directly have $G_{\theta_T}(\omega) = \big( \frac{\sqrt{2\theta_1}}{\theta_3} + i\omega \big)^{-(\theta_1 + 1/2)}$ instead of having to seek a polynomial approximation. Now we can easily expand $G_{\theta_T}(\omega)$ using the binomial formula to find the linear parameters of the continuous-time system. For instance, when $\theta_1 = 3/2$, we have:
$$\frac{d^2 f(t)}{dt^2} + \frac{2\sqrt{2\theta_1}}{\theta_3} \frac{df(t)}{dt} + \frac{2\theta_1}{\theta_3^2} f(t) = \epsilon(t).$$
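To make the example concrete, the sketch below evaluates the Matérn kernel for $\theta_1 = 3/2$ (where it has a standard closed form) together with its spectral density, which can be used to sanity-check the factorization above; for $\theta_1 = 3/2$, $\sqrt{2\theta_1} = \sqrt{3}$, so the ODE coefficients become $2\sqrt{3}/\theta_3$ and $3/\theta_3^2$. Parameter values are illustrative.

```python
import numpy as np

def matern32(tau, theta2=1.0, theta3=1.0):
    """Matern kernel with smoothness theta1 = 3/2 (standard closed form)."""
    a = np.sqrt(3.0) * np.abs(tau) / theta3
    return theta2 ** 2 * (1.0 + a) * np.exp(-a)

def spectral_density_32(omega, theta3=1.0):
    """s(omega) proportional to (2*theta1/theta3^2 + omega^2)^{-(theta1 + 1/2)}, theta1 = 3/2."""
    return (3.0 / theta3 ** 2 + omega ** 2) ** (-2.0)
```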
A.3.2 PROOF FOR PROPOSITION 1
Proof. We first need to show that the random Fourier features for the non-stationary kernel $K_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big)$ can be given by (11), i.e.
$$\phi(\mathbf{x}, t) = \frac{1}{2\sqrt{m}} \Big[ \ldots,\ \cos\big( [\mathbf{x}, t]^\top \omega_{1,i} \big) + \cos\big( [\mathbf{x}, t]^\top \omega_{2,i} \big),\ \sin\big( [\mathbf{x}, t]^\top \omega_{1,i} \big) + \sin\big( [\mathbf{x}, t]^\top \omega_{2,i} \big),\ \ldots \Big].$$
To simplify notation, we let $\mathbf{z} := [\mathbf{x}, t] \in \mathbb{R}^{d+1}$ and $\mathcal{Z} = \mathcal{X} \times \mathcal{T}$. For non-stationary kernels, the corresponding Fourier transform can be characterized by the following lemma. Assume without loss of generality that $K_T$ is differentiable.
Lemma A.1 (Yaglom (1987)). A non-stationary kernel k(z1, z2) is positive definite in Rd if and only if after scaling, it has the form:
$$k(\mathbf{z}_1, \mathbf{z}_2) = \int \exp\big( i (\omega_1^\top \mathbf{z}_1 - \omega_2^\top \mathbf{z}_2) \big)\, \mu(d\omega_1, d\omega_2), \tag{A.12}$$
where µ(dω1, dω2) is some positive-semidefinite probability measure with bounded variation.
The above lemma can be thought of as an extension of the classical Bochner's theorem that underlies the random Fourier features for stationary kernels. Notice that when the covariance function for the measure μ has only non-zero diagonal elements and $\omega_1 = \omega_2$, we recover the spectral representation stated in Bochner's theorem. Therefore, we can also approximate (A.12) with a Monte Carlo integral. However, we need to ensure the positive-semidefiniteness of the spectral density of $\mu(d\omega_1, d\omega_2)$, which we denote by $p(\omega_1, \omega_2)$. It has been suggested in Remes et al. (2017) to consider another density function $q(\omega_1, \omega_2)$, take p on the product space of q, and then symmetrize:
$$p(\omega_1, \omega_2) = \frac{1}{4} \Big( q(\omega_1, \omega_2) + q(\omega_2, \omega_1) + q(\omega_1, \omega_1) + q(\omega_2, \omega_2) \Big). \tag{A.13}$$
Then (A.12) suggests that
$$k(\mathbf{z}_1, \mathbf{z}_2) = \frac{1}{4} \mathbb{E}_q \Big[ \exp\big( i (\omega_1^\top \mathbf{z}_1 - \omega_2^\top \mathbf{z}_2) \big) + \exp\big( i (\omega_2^\top \mathbf{z}_1 - \omega_1^\top \mathbf{z}_2) \big) + \exp\big( i (\omega_1^\top \mathbf{z}_1 - \omega_1^\top \mathbf{z}_2) \big) + \exp\big( i (\omega_2^\top \mathbf{z}_1 - \omega_2^\top \mathbf{z}_2) \big) \Big].$$
Recall that the real part of $\exp\big( i (\omega_1^\top \mathbf{z}_1 - \omega_2^\top \mathbf{z}_2) \big)$ is given by $\cos(\omega_1^\top \mathbf{z}_1 - \omega_2^\top \mathbf{z}_2)$. So with the trigonometric identities, it is straightforward to verify that $k(\mathbf{z}_1, \mathbf{z}_2) = \mathbb{E}_q\big[ \phi(\mathbf{z}_1)^\top \phi(\mathbf{z}_2) \big]$. Hence, the random Fourier features for a non-stationary kernel can be given in the form of (11).
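The sketch below implements the feature map of (11) for m/2 frequency pairs $(\omega_{1,i}, \omega_{2,i})$ drawn from q, together with a Monte Carlo check of $k(\mathbf{z}_1, \mathbf{z}_2) \approx \phi(\mathbf{z}_1)^\top \phi(\mathbf{z}_2)$; the Gaussian choice of q here is an assumption made purely for illustration.

```python
import numpy as np

def nonstationary_rff(z, omega1, omega2):
    """Random Fourier features for a non-stationary kernel, as in (11).
    z: (d+1,) vector [x, t]; omega1, omega2: (m/2, d+1) frequency samples from q."""
    m_half = omega1.shape[0]
    p1, p2 = omega1 @ z, omega2 @ z
    feats = np.concatenate([np.cos(p1) + np.cos(p2), np.sin(p1) + np.sin(p2)])
    return feats / (2.0 * np.sqrt(m_half))

# Monte Carlo check: phi(z1)^T phi(z2) estimates the symmetrized expectation above.
rng = np.random.default_rng(0)
d, m_half = 3, 5000
omega1 = rng.normal(size=(m_half, d + 1))  # assumed Gaussian q, for illustration
omega2 = rng.normal(size=(m_half, d + 1))
z1, z2 = rng.normal(size=d + 1), rng.normal(size=d + 1)
print(nonstationary_rff(z1, omega1, omega2) @ nonstationary_rff(z2, omega1, omega2))
```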
Then we show the uniform convergence result as the number of samples goes to infinity when computing $\mathbb{E}_q\big[ \phi(\mathbf{z})^\top \phi(\mathbf{z}') \big]$ by the Monte Carlo integral. Let $\tilde{\mathcal{Z}} = \mathcal{Z} \times \mathcal{Z}$, so $\tilde{\mathcal{Z}} = \{ (\mathbf{x}, t, \mathbf{x}', t') \,:\, \mathbf{x}, \mathbf{x}' \in \mathcal{X};\ t, t' \in \mathcal{T} \}$. Since $\operatorname{diam}(\mathcal{X}) = l$ and $\mathcal{T} = [0, t_{\max}]$, we have $\operatorname{diam}(\tilde{\mathcal{Z}}) = l^2 t_{\max}^2$. Let the approximation error be
$$\Delta(\mathbf{z}, \mathbf{z}') = \phi(\mathbf{z})^\top \phi(\mathbf{z}') - K_T(\mathbf{z}, \mathbf{z}'). \tag{A.14}$$
The strategy is to use an $\epsilon$-net covering of the input space $\tilde{\mathcal{Z}}$, which requires $N = \big( 2 l^2 t_{\max}^2 / r \big)^{d+1}$ balls of radius r. Let $\mathcal{C} = \{ \mathbf{c}_i \}_{i=1}^N$ be the centers of the balls. We first bound $|\Delta(\mathbf{c}_i)|$ and the Lipschitz constant $L_\Delta$ of the error function $\Delta$, and then combine them to get the desired result.
Since $\Delta$ is continuous and differentiable w.r.t. $\mathbf{z}, \mathbf{z}'$ by the definition of $\phi$, we have $L_\Delta = \| \nabla \Delta(\mathbf{c}^*) \|$, where $\mathbf{c}^* = \arg\max_{\mathbf{c} \in \mathcal{C}} \| \nabla \Delta(\mathbf{c}) \|$. Let $\mathbf{c}^* = (\tilde{\mathbf{z}}, \tilde{\mathbf{z}}')$. By checking the regularity conditions for exchanging the integral and differential operations, we verify that $\mathbb{E}\big[ \nabla \phi(\mathbf{z})^\top \phi(\mathbf{z}') \big] = \nabla \mathbb{E}\big[ \phi(\mathbf{z})^\top \phi(\mathbf{z}') \big] = \nabla K_T(\mathbf{z}, \mathbf{z}')$. We do not present the details here, since it is easy to check the regularity of $\phi(\mathbf{z})^\top \phi(\mathbf{z}')$: it consists of sine and cosine functions, which are continuous, bounded and have continuous bounded derivatives. Hence, we have:
$$\begin{aligned} \mathbb{E}\big[ L_\Delta^2 \big] &= \mathbb{E}_{\tilde{\mathbf{z}}, \tilde{\mathbf{z}}'}\Big[ \big\| \nabla \phi(\tilde{\mathbf{z}})^\top \phi(\tilde{\mathbf{z}}') - \nabla K_T(\tilde{\mathbf{z}}, \tilde{\mathbf{z}}') \big\|^2 \Big] \\ &= \mathbb{E}_{\tilde{\mathbf{z}}, \tilde{\mathbf{z}}'}\Big[ \mathbb{E}\| \nabla \phi(\tilde{\mathbf{z}})^\top \phi(\tilde{\mathbf{z}}') \|^2 - 2 \| \nabla K_T(\tilde{\mathbf{z}}, \tilde{\mathbf{z}}') \| \cdot \| \nabla \phi(\tilde{\mathbf{z}})^\top \phi(\tilde{\mathbf{z}}') \| + \| \nabla K_T(\tilde{\mathbf{z}}, \tilde{\mathbf{z}}') \|^2 \Big] \\ &\le \mathbb{E}_{\tilde{\mathbf{z}}, \tilde{\mathbf{z}}'}\Big[ \mathbb{E}\| \nabla \phi(\tilde{\mathbf{z}})^\top \phi(\tilde{\mathbf{z}}') \|^2 - \| \nabla K_T(\tilde{\mathbf{z}}, \tilde{\mathbf{z}}') \|^2 \Big] \quad \text{(by Jensen's inequality)} \\ &\le \mathbb{E}\| \nabla \phi(\tilde{\mathbf{z}})^\top \phi(\tilde{\mathbf{z}}') \|^2 \\ &= \mathbb{E}\Big\| \nabla \Big( \big( \cos(\tilde{\mathbf{z}}^\top \omega_1) + \cos(\tilde{\mathbf{z}}^\top \omega_2) \big) \big( \cos((\tilde{\mathbf{z}}')^\top \omega_1) + \cos((\tilde{\mathbf{z}}')^\top \omega_2) \big) + \big( \sin(\tilde{\mathbf{z}}^\top \omega_1) + \sin(\tilde{\mathbf{z}}^\top \omega_2) \big) \big( \sin((\tilde{\mathbf{z}}')^\top \omega_1) + \sin((\tilde{\mathbf{z}}')^\top \omega_2) \big) \Big) \Big\|^2 \\ &= 2 \mathbb{E}\Big\| \omega_1 \big( \sin(\tilde{\mathbf{z}}^\top \omega_1 - (\tilde{\mathbf{z}}')^\top \omega_1) + \sin((\tilde{\mathbf{z}}')^\top \omega_2 - \tilde{\mathbf{z}}^\top \omega_1) \big) + \omega_2 \big( \sin(\tilde{\mathbf{z}}^\top \omega_1 - (\tilde{\mathbf{z}}')^\top \omega_2) + \sin((\tilde{\mathbf{z}}')^\top \omega_2 - \tilde{\mathbf{z}}^\top \omega_2) \big) \Big\|^2 \\ &\le 8 \mathbb{E}\big\| [\omega_1, \omega_2] \big\|^2 = 8 \sigma_p^2. \end{aligned} \tag{A.15}$$
Hence, by Markov's inequality, we have
$$p\Big( L_\Delta \ge \frac{\epsilon}{2r} \Big) \le 8 \sigma_p^2 \Big( \frac{2r}{\epsilon} \Big)^2. \tag{A.16}$$
Then we notice that for all $\mathbf{c} \in \mathcal{C}$, $\Delta(\mathbf{c})$ is the mean of $m/2$ terms bounded in $[-1, 1]$ with expectation 0. So applying a union bound and Hoeffding's inequality for bounded random variables, we have:
$$p\Big( \cup_{i=1}^N |\Delta(\mathbf{c}_i)| \ge \frac{\epsilon}{2} \Big) \le 2N \exp\Big( -\frac{m \epsilon^2}{16} \Big). \tag{A.17}$$
Combining the above results, we get
$$\begin{aligned} p\Big( \sup_{(\mathbf{z}, \mathbf{z}') \in \mathcal{C}} \big| \Delta(\mathbf{z}, \mathbf{z}') \big| \le \epsilon \Big) &\ge 1 - \frac{32 \sigma_p^2 r^2}{\epsilon^2} - 2 \Big( \frac{2 l^2 t_{\max}^2}{r} \Big)^{d+1} \exp\Big( -\frac{m \epsilon^2}{16} \Big) \\ &\ge 1 - C(d) \Big( \frac{l^2 t_{\max}^2 \sigma_p}{\epsilon} \Big)^{2(d+1)/(d+3)} \exp\Big( -\frac{m \epsilon^2}{8(d+3)} \Big), \end{aligned} \tag{A.18}$$
where in the second inequality we optimize over r such that $r^* = \big( \frac{(d+1) k_1}{k_2} \big)^{1/(d+3)}$ with $k_1 = 2 (2 l^2 t_{\max}^2)^{d+1} \exp(-m\epsilon^2 / 16)$ and $k_2 = 32 \sigma_p^2 \epsilon^{-2}$. The constant term is given by $C(d) = 2^{\frac{7d+9}{d+3}} \Big( \big( \frac{d+1}{2} \big)^{\frac{-d-1}{d+3}} + \big( \frac{d+1}{2} \big)^{\frac{2}{d+3}} \Big)$.
A.3.3 THE GAUSSIAN PROCESS BEHAVIOR AND NEURAL TANGENT KERNEL AFTER COMPOSING WITH THE TEMPORAL KERNEL VIA THE RANDOM FEATURE REPRESENTATION
This section is dedicated to showing the infinite-width Gaussian process behavior and neural tangent kernel properties, similar to those discussed in Appendix A.2, when composing neural networks in the feature space with the random feature representation of the temporal kernel.
For brevity, we still consider the standard L-layer FFN of (A.1). Suppose we compose the FFN with the random feature representation $\phi(\mathbf{x}, t)$ at the k-th layer. It is easy to see that the neural network kernels for the first $k-1$ layers are unchanged, so we compute them in the usual way as in (A.2). For the k-th layer, it is straightforward to verify that:
$$\lim_{d_k \to \infty} \mathbb{E}\Big[ \frac{1}{d_k} \big\langle W^{(k)} f^{(k-1)}(\theta, \mathbf{x}) \circ \phi(\mathbf{x}, t),\ W^{(k)} f^{(k-1)}(\theta, \mathbf{x}') \circ \phi(\mathbf{x}', t') \big\rangle \,\Big|\, f^{(k-1)} \Big] \to \Sigma^{(k)}(\mathbf{x}, \mathbf{x}') \cdot K_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big).$$
The intuition is that the randomness in W (and thus in $f(\theta, \cdot)$) and in $\phi(\cdot, \cdot)$ are independent: the former is caused by the network parameter initialization and the latter is induced by the random features. The covariance functions for the subsequent layers can be derived by induction; e.g., for the (k+1)-th layer we have:
$$\Sigma^{(k+1)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big) = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(k)} \otimes K_T)}\big[ \sigma(f(\mathbf{x}, t))\, \sigma(f(\mathbf{x}', t')) \big].$$
In summary, composing the FNN, at any given layer, with the temporal kernel using its random feature representation does not change the infinite-width Gaussian process behavior. The statement is true for all deep learning models that exhibit the Gaussian process behavior, which includes most standard neural architectures such as RNN, CNN and the attention mechanism (Yang, 2019).
The derivations for the NTK, however, are more involved, since the gradients on all layers are affected. We summarize the result for the L-layer FFN in the following proposition and provide the derivations afterwards.
Proposition A.1. Suppose $f^{(k)}\big( \theta, (\mathbf{x}, t) \big) = \operatorname{vec}\big( f^{(k)}(\theta, \mathbf{x}) \circ \phi(\mathbf{x}, t) \big)$ in the standard L-layer FFN. Let $\Sigma^{(h)}_T = \Sigma^{(h)}$ for $h = 1, \ldots, k-1$, $\Sigma^{(k)}_T = \Sigma^{(k)} \otimes K_T$, and $\Sigma^{(h)}_T = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(h-1)}_T)}[\sigma(f)\, \sigma(f)] + 1$ for $h = k+1, \ldots, L$. If the activation functions σ have polynomially bounded weak derivatives, then as the network widths $d_1, \ldots, d_L \to \infty$, the neural tangent kernel $\Theta^{(L)}$ converges almost surely to $\Theta^{(L)}_T$, whose partial application on the parameters $\{W^{(h)}, b^{(h)}\}$ of the h-th layer is given recursively by:
$$\Theta^{(h)}_T = \Sigma^{(h)}_T, \qquad \Theta^{(k)}_T = \Theta^{(k-1)}_T \otimes \dot{\Sigma}^{(k)}_T + \Sigma^{(k)}_T, \quad k = h+1, \ldots, L. \tag{A.19}$$
Proof. The strategies for deriving the NTK and showing its convergence have been discussed in Jacot et al. (2018); Yang (2019); Arora et al. (2019a). The key purpose of presenting the derivations here is to show how the convergence results for the neural-temporal Gaussian process (Section 4.2) affect the NTK. To avoid the cumbersome notation induced by the peripheral intercept terms, we omit the intercepts $b$ in the FFN without loss of generality. We let $g^{(h)} = \frac{1}{\sqrt{d_h}} \sigma\big( f^{(h)}(\mathbf{x}, t) \big)$, so the FFN can be equivalently defined via the recursion $f^{(h)} = W^{(h)} g^{(h-1)}(\mathbf{x}, t)$. For the final output $f\big( \theta, (\mathbf{x}, t) \big) := W^{(L)} f^{(L)}(\mathbf{x}, t)$, the partial derivative with respect to $W^{(h)}$ is given by:
$$\frac{\partial f\big( \theta, (\mathbf{x}, t) \big)}{\partial W^{(h)}} = \mathbf{z}^{(h)}(\mathbf{x}, t)\, \big( g^{(h-1)}(\mathbf{x}, t) \big)^\top, \tag{A.20}$$
with $\mathbf{z}^{(h)}$ defined by:
$$\mathbf{z}^{(h)}(\mathbf{x}, t) = \begin{cases} 1, & h = L, \\ \frac{1}{\sqrt{d_h}} D^{(h)}(\mathbf{x}, t) \big( W^{(h+1)} \big)^\top \mathbf{z}^{(h+1)}(\mathbf{x}, t), & h = 1, \ldots, L-1, \end{cases} \tag{A.21}$$
where
$$D^{(h)}(\mathbf{x}, t) = \begin{cases} \operatorname{diag}\big( \dot\sigma\big( f^{(h)}(\mathbf{x}, t) \big) \big), & h = k, \ldots, L-1, \\ \operatorname{diag}\big( \dot\sigma\big( f^{(h)}(\mathbf{x}) \big) \big), & h = 1, \ldots, k-1. \end{cases}$$
Using the above definitions, we have:
$$\Big\langle \frac{\partial f\big( \theta, (\mathbf{x}, t) \big)}{\partial W^{(h)}}, \frac{\partial f\big( \theta, (\mathbf{x}', t') \big)}{\partial W^{(h)}} \Big\rangle = \big\langle \mathbf{z}^{(h)}(\mathbf{x}, t) \big( g^{(h-1)}(\mathbf{x}, t) \big)^\top,\ \mathbf{z}^{(h)}(\mathbf{x}', t') \big( g^{(h-1)}(\mathbf{x}', t') \big)^\top \big\rangle = \big\langle g^{(h-1)}(\mathbf{x}, t), g^{(h-1)}(\mathbf{x}', t') \big\rangle \cdot \big\langle \mathbf{z}^{(h)}(\mathbf{x}, t), \mathbf{z}^{(h)}(\mathbf{x}', t') \big\rangle.$$
We have established in Section 4.2 that
$$\big\langle g^{(h-1)}(\mathbf{x}, t), g^{(h-1)}(\mathbf{x}', t') \big\rangle \to \Sigma^{(h-1)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big),$$
where
$$\Sigma^{(h)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big) = \begin{cases} \Sigma^{(h)}(\mathbf{x}, \mathbf{x}'), & h = 1, \ldots, k-1, \\ \Sigma^{(h)}(\mathbf{x}, \mathbf{x}') \cdot K_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big), & h = k, \\ \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(h-1)}_T)}\big[ \sigma(f(\mathbf{x}, t))\, \sigma(f(\mathbf{x}', t')) \big], & h = k+1, \ldots, L. \end{cases} \tag{A.22}$$
By the definition of $\mathbf{z}^{(h)}$, we get
$$\begin{aligned} \big\langle \mathbf{z}^{(h)}(\mathbf{x}, t), \mathbf{z}^{(h)}(\mathbf{x}', t') \big\rangle &= \frac{1}{d_h} \big\langle D^{(h)}(\mathbf{x}, t) \big( W^{(h+1)} \big)^\top \mathbf{z}^{(h+1)}(\mathbf{x}, t),\ D^{(h)}(\mathbf{x}', t') \big( W^{(h+1)} \big)^\top \mathbf{z}^{(h+1)}(\mathbf{x}', t') \big\rangle \\ &\approx \frac{1}{d_h} \big\langle D^{(h)}(\mathbf{x}, t) \big( W^{(h+1)} \big)^\top \mathbf{z}^{(h+1)}(\mathbf{x}, t),\ D^{(h)}(\mathbf{x}', t') \big( \widetilde{W}^{(h+1)} \big)^\top \mathbf{z}^{(h+1)}(\mathbf{x}', t') \big\rangle \\ &\to \frac{1}{d_h} \operatorname{tr}\big( D^{(h)}(\mathbf{x}, t)\, D^{(h)}(\mathbf{x}', t') \big) \big\langle \mathbf{z}^{(h+1)}(\mathbf{x}, t), \mathbf{z}^{(h+1)}(\mathbf{x}', t') \big\rangle \\ &\to \dot{\Sigma}^{(h)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big) \big\langle \mathbf{z}^{(h+1)}(\mathbf{x}, t), \mathbf{z}^{(h+1)}(\mathbf{x}', t') \big\rangle. \end{aligned} \tag{A.23}$$
The approximation in the second line is made because the $W^{(h+1)}$ in the right half is replaced by its i.i.d. copy under the Gaussian initialization. This does not change the limit as $d_h \to \infty$ when the activation functions have polynomially bounded weak derivatives (Yang, 2019), such as the ReLU activation. Carrying out (A.23) recursively, we see that
$$\big\langle \mathbf{z}^{(h)}(\mathbf{x}, t), \mathbf{z}^{(h)}(\mathbf{x}', t') \big\rangle \to \prod_{j=h}^{L-1} \dot{\Sigma}^{(j)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big).$$
Finally, we have:
$$\Big\langle \frac{\partial f\big( \theta, (\mathbf{x}, t) \big)}{\partial \theta}, \frac{\partial f\big( \theta, (\mathbf{x}', t') \big)}{\partial \theta} \Big\rangle = \sum_{h=1}^{L} \Big\langle \frac{\partial f\big( \theta, (\mathbf{x}, t) \big)}{\partial W^{(h)}}, \frac{\partial f\big( \theta, (\mathbf{x}', t') \big)}{\partial W^{(h)}} \Big\rangle = \sum_{h=1}^{L} \Big( \Sigma^{(h)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big) \cdot \prod_{j=h}^{L} \dot{\Sigma}^{(j)}_T\big( (\mathbf{x}, t), (\mathbf{x}', t') \big) \Big). \tag{A.24}$$
Notice that we use a more compact recursive formulation to state the results in Proposition A.1; it is easy to verify that after expansion we reach the desired results.
Compared with the original NTK before composing with the temporal kernel (given by (A.4)), the results in Proposition A.1 share a similar recursion structure. As a consequence, the previous results for the NTK can be directly adapted to our setting. We list two examples here.
• Following Jacot et al. (2018), given a training dataset $\{\mathbf{x}_i, t_i, y_i(t_i)\}_{i=1}^n$, let $f_T\big( \theta(s) \big) = \big( f(\theta(s), \mathbf{x}_1, t_1), \ldots, f(\theta(s), \mathbf{x}_n, t_n) \big)$ be the network outputs at the s-th training step and $\mathbf{y}_T = \big( y_1(t_1), \ldots, y_n(t_n) \big)$. The optimization trajectory under an infinitesimal learning rate can be analyzed via:
$$\frac{d f_T\big( \theta(s) \big)}{ds} = -\Theta_T(s) \times \big( f_T\big( \theta(s) \big) - \mathbf{y}_T \big),$$
where $\Theta_T(s)$ converges almost surely to the NTK $\Theta^{(L)}_T$ of Proposition A.1.
• Following Allen-Zhu et al. (2019) and Arora et al. (2019b), the generalization performance of the composed time-aware neural network can be explicitly characterized according to the properties of Θ(L)T .
A.3.4 PROOF FOR THEOREM 1
Proof. We first present a technical lemma that is crucial for establishing the duality result under the distributional constraint df (SθT ‖S) ≤ δ. Recall that the hidden dimension for the kth layer is dk.
Lemma A.2 (Ben-Tal et al. (2013)). Let f be any closed convex function with domain $[0, +\infty)$, whose conjugate is given by $f^*(s) = \sup_{t \ge 0}\{ts - f(t)\}$. Then for any distribution $S$ and any function $g: \mathbb{R}^{d_k + 1} \to \mathbb{R}$, we have
$$\sup_{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta} \int g(\omega)\, dS_{\theta_T}(\omega) = \inf_{\lambda \ge 0,\, \eta} \Big\{ \lambda \int f^*\Big( \frac{g(\omega) - \eta}{\lambda} \Big)\, dS(\omega) + \delta \lambda + \eta \Big\}. \tag{A.25}$$
We work with a scaled version of the f-divergence under $f(t) = \frac{1}{k}(t^k - 1)$ (because its dual function has a cleaner form), where the constraint set is now equivalent to $\{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta / k\}$. It is easy to check that $f^*(s) = \frac{1}{k'} [s]_+^{k'} + \frac{1}{k}$ with $\frac{1}{k'} + \frac{1}{k} = 1$.
Similar to the proof of Proposition 1, we let $\mathbf{z} := [\mathbf{x}, t] \in \mathbb{R}^{d+1}$ and $\mathcal{Z} = \mathcal{X} \times \mathcal{T}$ to simplify the notation. To explicitly annotate the dependency of the random Fourier features on $\Omega$, the random variable corresponding to $\omega$, we define $\tilde{\phi}(\mathbf{z}, \Omega)$ such that $\tilde{\phi}(\mathbf{z}, \Omega) = \big[ \cos(\mathbf{z}^\top \Omega_1) + \cos(\mathbf{z}^\top \Omega_2),\ \sin(\mathbf{z}^\top \Omega_1) + \sin(\mathbf{z}^\top \Omega_2) \big]$, where $\Omega = [\Omega_1, \Omega_2]$. Then the approximation error, when replacing the sampled Fourier features $\phi$ by the original random variable $\tilde{\phi}(\mathbf{z}, \Omega)$, is given by:
$$\begin{aligned} \Delta_n(\Omega) &:= \frac{1}{n(n-1)} \sum_{i \neq j} \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}_j)\, \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) - \mathbb{E}\Big[ \Sigma^{(k)}(X_i, X_j)\, K_{T, S_{\theta_T}}\big( (X_i, T_i), (X_j, T_j) \big) \Big] \\ &= \frac{1}{n(n-1)} \sum_{i \neq j} \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}_j)\, \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) - \mathbb{E}\Big[ \Sigma^{(k)}(X, X')\, \tilde{\phi}(Z, \Omega)^\top \tilde{\phi}(Z', \Omega) \Big]. \end{aligned} \tag{A.26}$$
We first show the sub-Gaussianity of $\Delta_n(\Omega)$. Let $\{\mathbf{x}'_i\}_{i=1}^n$ be an i.i.d. copy of the observations, identical except for one element $j$ such that $\mathbf{x}_j \neq \mathbf{x}'_j$. Without loss of generality, we assume the last element differs, i.e. $\mathbf{x}_n \neq \mathbf{x}'_n$. Let $\Delta'_n(\Omega)$ be computed by replacing $\mathbf{x}$ and $\mathbf{z}$ with the above $\mathbf{x}'$ and its corresponding $\mathbf{z}'$. Note that
$$\begin{aligned} |\Delta_n(\Omega) - \Delta'_n(\Omega)| &= \frac{1}{n(n-1)} \Big| \sum_{i \neq j} \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}_j)\, \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) - \Sigma^{(k)}(\mathbf{x}'_i, \mathbf{x}'_j)\, \tilde{\phi}(\mathbf{z}'_i, \Omega)^\top \tilde{\phi}(\mathbf{z}'_j, \Omega) \Big| \\ &\le \frac{1}{n(n-1)} \Big( \sum_{i < n} \big| \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}_n)\, \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}_n, \Omega) - \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}'_n)\, \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}'_n, \Omega) \big| \\ &\qquad + \sum_{j < n} \big| \Sigma^{(k)}(\mathbf{x}_n, \mathbf{x}_j)\, \tilde{\phi}(\mathbf{z}_n, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) - \Sigma^{(k)}(\mathbf{x}'_n, \mathbf{x}_j)\, \tilde{\phi}(\mathbf{z}'_n, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) \big| \Big) \\ &\le \frac{4 \max\{1, M\}}{n}, \end{aligned} \tag{A.27}$$
where the last inequality comes from the fact that the random Fourier features $\tilde{\phi}$ are bounded by 1 and the infinity norm of $\Sigma^{(k)}$ is bounded by $M$. The above bounded-difference property implies that $\Delta_n(\Omega)$ is a $\frac{4 \max\{1, M\}}{n}$-sub-Gaussian random variable.
To bound $\Delta_n(\Omega)$, we use:
$$\begin{aligned} \sup_{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta} \Big| \int \Delta_n(\Omega)\, dS_{\theta_T} \Big| &\le \sup_{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta} \int |\Delta_n(\Omega)|\, dS_{\theta_T} \\ &\le \inf_{\lambda \ge 0} \Big\{ \frac{\lambda^{1-k'}}{k'} \mathbb{E}_S\big[ |\Delta_n(\Omega)|^{k'} \big] + \frac{\lambda(\delta + 1)}{k} \Big\} \quad \text{(using Lemma A.2)} \\ &= (\delta + 1)^{1/k}\, \mathbb{E}_S\big[ |\Delta_n(\Omega)|^{k'} \big]^{1/k'} \quad \text{(solving for } \lambda^* \text{ above)} \\ &= \sqrt{\delta + 1}\, \mathbb{E}_S\big[ |\Delta_n(\Omega)|^2 \big]^{1/2} \quad \text{(taking } k = k' = 2\text{)}. \end{aligned} \tag{A.28}$$
Therefore, to bound $\sup_{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta} \big| \int \Delta_n(\Omega)\, dS_{\theta_T} \big|$ we simply need to bound $\mathbb{E}_S\big[ |\Delta_n(\Omega)|^2 \big]$. Using the classical results for sub-Gaussian random variables (Boucheron et al., 2013), for $\lambda \le n/8$ we have
$$\mathbb{E}\Big[ \exp\big( \lambda\, \Delta_n(\Omega)^2 \big) \Big] \le \exp\Big( -\frac{1}{2} \log\big( 1 - 8 \max\{1, M\} \lambda / n \big) \Big).$$
Then we take the integral over $\omega$:
$$\begin{aligned} p\Big( \int \Delta_n(\omega)^2\, dS(\omega) \ge \frac{\epsilon^2}{\delta + 1} \Big) &\le \mathbb{E}\Big[ \exp\Big( \lambda \int \Delta_n(\omega)^2\, dS(\omega) \Big) \Big] \exp\Big( -\frac{\lambda \epsilon^2}{\delta + 1} \Big) \quad \text{(Chernoff bound)} \\ &\le \exp\Big( -\frac{1}{2} \log\Big( 1 - \frac{8 \max\{1, M\} \lambda}{n} \Big) - \frac{\lambda \epsilon^2}{\delta + 1} \Big) \quad \text{(applying Jensen's inequality)}. \end{aligned} \tag{A.29}$$
Finally, let the true approximation error be $\hat{\Delta}_n(\omega) = \hat{\Sigma}^{(k)}(S_{\theta_T}) - \Sigma^{(k)}(S_{\theta_T})$. Notice that
$$\big| \hat{\Delta}_n(\omega) \big| \le \big| \Delta_n(\Omega) \big| + \frac{1}{n(n-1)} \sum_{i \neq j} \Sigma^{(k)}(\mathbf{x}_i, \mathbf{x}_j)\, \big| \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) - \phi(\mathbf{z}_i)^\top \phi(\mathbf{z}_j) \big|.$$
From (A.28) and (A.29), we are able to bound $\sup_{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta} \Delta_n(\Omega)$. For the second term, recall from Proposition 1 that we have shown the stochastic uniform convergence bound for $\big| \tilde{\phi}(\mathbf{z}_i, \Omega)^\top \tilde{\phi}(\mathbf{z}_j, \Omega) - \phi(\mathbf{z}_i)^\top \phi(\mathbf{z}_j) \big|$ under any distribution $S_{\theta_T}$. The desired bound for $p\big( \sup_{S_{\theta_T}: d_f(S_{\theta_T} \| S) \le \delta} | \hat{\Delta}_n(\omega) | \ge \epsilon \big)$ is obtained by combining all the above results.
A.3.5 REPARAMETRIZATION WITH INVERTIBLE NEURAL NETWORK
In this part, we discuss the idea of constructing, and sampling from, an arbitrarily complex distribution built from a known auxiliary distribution via a sequence of invertible transformations. Given an auxiliary random variable z following some known distribution q(z), suppose another random variable x is constructed via a one | 1. What is the focus of the paper regarding deep learning and time series modeling?
2. What are the strengths of the proposed approach, particularly in handling irregularly sampled or missing value time series data?
3. Are there any concerns or limitations regarding the comparison with other baseline models?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any minor suggestions for improving the paper's presentation? | Review | Review
This paper proposed a general deep learning method with a temporal kernel for continuous-time series modeling.
The proposed method is technically sound and solid. The decomposition of the neural and temporal kernel brings together the kernel methods and deep learning, which delivers a general and fundamental solution to time series, especially the irregularly sampled ones or those with missing values. In brief, this work may demonstrate a promising way of handling such problems and inspires and encourages other research in this direction.
The writing is thorough and clear. Though I did not check all proofs and the supplementary, the descriptions and arguments in the paper are properly delivered.
The proposed model consistently outperforms RNN, TCN, and attention baselines on a variety of datasets. The settings of the case 2/3 are reasonable. Besides, it is interesting to see that the speed is still comparable to the baselines.
However, in my opinion, more baseline comparisons need to be added. I did not quite buy the claim that the proposed method need not be compared with recurrent neural ODE-type models and point process models because of its greater generality and flexibility.
Minor typos: On page 3: infitnitesimal -> infinitesimal; covaraince -> covariance. The font size in figures may be increased for better readability.
ICLR | Title
Continual Learning In Low-coherence Subspace: A Strategy To Mitigate Learning Capacity Degradation
Abstract
Methods using gradient orthogonal projection, an efficient strategy in continual learning, have achieved promising success in mitigating catastrophic forgetting. However, these methods often suffer from a learning capacity degradation problem as the number of tasks increases. To address this problem, we propose to learn new tasks in low-coherence subspaces rather than orthogonal subspaces. Specifically, we construct a unified cost function involving regular DNN parameters and gradient projections on the Oblique manifold. We finally develop a gradient descent algorithm on a smooth manifold to jointly minimize the cost function and minimize both the inter-task and the intra-task coherence. Numerical experimental results show that the proposed method has prominent advantages in maintaining learning capacity as tasks increase, especially on a large number of tasks, compared with baselines.
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved promising performance on many tasks. However, they lack the ability for continual learning, i.e., they suffer from catastrophic forgetting French (1999) when learning sequential tasks, where catastrophic forgetting is a phenomenon of new knowledge interfering with old knowledge. Research on continual learning, also known as incremental learning Aljundi et al. (2018a); Chaudhry et al. (2018a); Chen & Liu (2018); Aljundi et al. (2017), and sequential learning Aljundi et al. (2018b); McCloskey & Cohen (1989), aims to find effective algorithms that enable DNNs to simultaneously achieve plasticity and stability, i.e., to achieve both high learning capacity and high memory capacity.
Various methods have been proposed to avoid or mitigate the catastrophic forgetting De Lange et al. (2019), either by replaying training samples Rolnick et al. (2019); Ayub & Wagner (2020); Saha et al. (2021), or reducing mutual interference of model parameters, features or model architectures between different tasks Zenke et al. (2017); Mallya & Lazebnik (2018); Wang et al. (2021). Among these methods, Gradient Orthogonal Projection (GOP) Chaudhry et al. (2020); Zeng et al. (2019); Farajtabar et al. (2020); Li et al. (2021) is an efficient continual learning strategy that advocates projecting gradients with the orthogonal projector to prevent the knowledge interference between tasks. GOP-based methods have achieved encouraging results in mitigating catastrophic forgetting. However, from Fig. 1, we observe that these methods suffer from the learning capacity degradation problem: their learning capacity is gradually degraded as the number of tasks increases and eventually becomes unlearnable. Specifically, when learning multiple tasks, e.g., more than 30 tasks in Fig. 1, their performance on new tasks dramatically decreases. These results suggest that the GOP-based methods focus on the stability and somewhat ignore the plasticity. Ignoring the plasticity may limit the task learning capacity of models, i.e., the number of tasks that a model can learn without forgetting.
To address this issue, we propose to learn new tasks in low-coherence subspaces rather than orthogonal subspaces. Specifically, low-coherence projectors are utilized at each layer to project features and gradients into low-coherence subspaces. To achieve this, we construct a unified cost function for finding the projectors and develop a gradient descent algorithm on the Oblique manifold to jointly minimize inter-task coherence and intra-task coherence. Minimizing the inter-task coherence reduces the mutual interference between tasks, and minimizing the intra-task coherence enhances the model's expressive power. Restricting the projectors to the Oblique manifold avoids scale ambiguity (Aharon et al., 2006; Wei et al., 2017), i.e., it prevents the entries of the projector from becoming extremely large or extremely small.
The main contributions of this work are summarized as follows. First, to address the learning capacity degradation problem of GOP, we propose a novel method, namely Low-coherence Subspace Projection (LcSP), which replaces orthogonal projectors with low-coherence gradient projectors, allowing the DNN to maintain both plasticity and stability. Additionally, our work observes that GOP models with Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers can suffer catastrophic forgetting. This paper proposes two strategies in LcSP to solve this problem, i.e., replacing BN with Group Normalization (GN) (Wu & He, 2018) or learning a specific BN for each task.
2 RELATED WORK
In this section, we briefly review some existing works of continual learning and the GOP based methods.
Replay-based Strategy The basic idea of this type of approach is to use limited memory to store small amounts of data (e.g., raw samples) from previous tasks, called episodic memory, and to replay them when training a new task. Some of the existing works focused on selecting a subset of raw samples from the previous tasks Rolnick et al. (2019); Isele & Cosgun (2018); Chaudhry et al. (2019); Zhang et al. (2020). In contrast, others concentrated on training a generative model to synthesize new data that can substitute for the old data Shin et al. (2017); Van de Ven & Tolias (2018); Lavda et al. (2018); Ramapuram et al. (2020).
Regularization-based Strategy This strategy prevents catastrophic forgetting by introducing a regularization term in the loss function to penalize the changes in the network parameters. Existing works can be divided into data-focused, and prior-focused methods De Lange et al. (2021). The Data-focused methods take the previous model as the teacher and the current model as the student, transferring the knowledge from the teacher model to the student model through knowledge distillation. Typical methods include LwF Li & Hoiem (2017), LFL Jung et al. (2016), EBLL Rannen et al. (2017), DMC Zhang et al. (2020) and GD-WILD Lee et al. (2019). The prior-focused methods estimate a distribution over the model parameters, assigning an importance score to each parameter and penalizing the changes in significant parameters during learning. Relevant works include SI Zenke et al. (2017), EWC Kirkpatrick et al. (2017), RWalk Chaudhry et al. (2018a), AGS-CL Jung et al. (2020) and IMM Lee et al. (2017).
Parameter Isolation-based Strategy This strategy considers dynamically modifying the network architecture by pruning, parameter masking, or expansion to greatly or even completely reduce catastrophic forgetting. Existing works can be roughly divided into two categories. One is dedicated to isolating separate sub-networks for each task from a large network through pruning techniques and parameter masks, including PackNet (Mallya & Lazebnik, 2018), PathNet (Fernando et al., 2017), HAT (Serra et al., 2018) and Piggyback (Mallya et al., 2018). Another class of methods dynamically expands the network architecture, increasing the number of neurons or sub-network branches, to break the limits of expressive capacity (Rusu et al., 2016; Aljundi et al., 2017; Xu & Zhu, 2018; Rosenfeld & Tsotsos, 2018). However, as the number of tasks grows, this approach also complicates the network architecture and increases computation and memory consumption.
Gradient Orthogonal Projection-based Strategy Methods based on GOP strategies, which reduce catastrophic forgetting by projecting gradient or features with orthogonal projectors, have been shown to be effective in continual learning with encouraging results Farajtabar et al. (2020); Zeng et al. (2019); Saha et al. (2021); Wang et al. (2021); Chaudhry et al. (2020). According to the different ways of finding the projector, we can further divide the existing works into Context Orthogonal Projection (COP) and Subspace Orthogonal Projection (SOP). Methods based on COP, such as OWM Zeng et al. (2019), Adam-NSCL Wang et al. (2021), and GPM Saha et al. (2021), always rely on the context of previous tasks to build projectors. In contrast to COP, SOP-based methods such as ORTHOG-SUBSPACE Chaudhry et al. (2020) use hand-crafted, task-specific orthogonal projectors and yield competitive results.
This paper proposes a novel approach to continual learning called LcSP. Compared with other methods based on GOP, LcSP trains the network on the low-coherence subspaces to balance plasticity and stability and overcomes the learning capacity degradation problem, which significantly decreases the performance of GOP methods with the increasing number of tasks.
3 CONTINUAL LEARNING SETUP
This work adopts the Task-Incremental Learning (TIL) setting, where multiple tasks are learned sequentially. Assume there are T tasks, denoted by $\mathcal{T}_t$ for $t = 1, \ldots, T$, each with training data $\mathcal{D}_t = \{(\mathbf{x}_i, y_i, \tau_t)\}_{i=1}^{N_t}$. Here, the data $(\mathbf{x}_i, y_i) \in \mathcal{X} \times \mathcal{Y}_t$ are assumed to be drawn i.i.d. from some distribution, and $\tau_t \in \mathcal{T}$ denotes the task identifier. In the TIL setting, the data $\mathcal{D}_t$ can be accessed if and only if task $\mathcal{T}_t$ arrives. When episodic memory is adopted, a limited number of data samples drawn from old tasks can be stored in the replay buffer $\mathcal{M}$, so that $\mathcal{D}_t \cup \mathcal{M}$ can be used for training when task $\mathcal{T}_t$ arrives. Assume a network $f$ parameterized with $\Phi = \{\theta, \phi\}$ consists of two parts, where $\theta$ denotes the parameters of the backbone network and $\phi$ denotes the parameters of the classifier. Let $f(\mathbf{x}; \theta): \mathcal{X} \times \mathcal{T} \to \mathcal{H}$ denote the backbone network parameterized with $\theta = \{W_l\}_{l=1}^L$, which encodes the data samples $\mathbf{x}$ into feature vectors. Let $f(\mathbf{x}; \phi): \mathcal{H} \to \mathcal{Y}$ denote the classifier parameterized with $\phi = \mathbf{w}$, which returns the classification result for the feature vector obtained by $f(\mathbf{x}; \theta)$. The goal of TIL is to learn the T tasks sequentially with the network $f$ and finally achieve the optimal loss on all tasks.
Evaluation Metrics Once the training on all tasks is finished, we evaluate the performance of an algorithm by calculating the average accuracy $\mathcal{A}$ and forgetting $\mathcal{F}$ (Chaudhry et al., 2020) of the network on the T tasks $\{\mathcal{T}_1, \ldots, \mathcal{T}_T\}$. Suppose all tasks arrive sequentially, and let $\mathrm{Acc}_{i,j}$ denote the test accuracy of the network on task $\mathcal{T}_i$ after learning task $\mathcal{T}_j$, where $i \le j$. The average accuracy is defined as
$$\mathcal{A} = \frac{1}{T} \sum_{i=1}^{T} \mathrm{Acc}_{i,T}, \tag{1}$$
and the forgetting is defined as
$$\mathcal{F} = \frac{1}{T-1} \sum_{i=1}^{T-1} \max_{j \in \{i, \ldots, T-1\}} \big( \mathrm{Acc}_{i,j} - \mathrm{Acc}_{i,T} \big). \tag{2}$$
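A minimal sketch computing both metrics from a matrix of test accuracies, where acc[i, j] stores $\mathrm{Acc}_{i,j}$ (0-indexed); this follows Eqs. (1) and (2) directly.

```python
import numpy as np

def average_accuracy(acc):
    """Eq. (1): mean accuracy over all tasks after learning the last task."""
    T = acc.shape[0]
    return float(acc[:, T - 1].mean())

def forgetting(acc):
    """Eq. (2): average gap between the best earlier accuracy and the final one."""
    T = acc.shape[0]
    gaps = [acc[i, i:T - 1].max() - acc[i, T - 1] for i in range(T - 1)]
    return float(np.mean(gaps))
```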
4 CONTINUAL LEARNING IN LOW-COHERENCE SUBSPACES
In this section, we first introduce how to find task-specific, low-coherence projectors for LcSP on the Oblique manifold. We then describe how to use it in a specific DNN architecture to project features and gradients. Finally, we analyze the factors that enable LcSP to maintain plasticity and stability.
4.1 CONSTRUCTING LOW-COHERENCE PROJECTORS ON OBLIQUE MANIFOLD
Here, we first introduce the concept of a coherence metric. The coherence metric is usually used in compressed sensing and sparse signal recovery to describe the correlation between the columns of a measurement matrix (Candes et al., 2011; Candes & Romberg, 2007). Formally, the coherence of a matrix M is defined as
$$\mu(M, N) = \begin{cases} \max_{j < k} \dfrac{|\langle M_j, M_k \rangle|}{\|M_j\|_2 \|M_k\|_2}, & M = N, \\[1.5ex] \max_{i, j} \dfrac{|\langle M_i, N_j \rangle|}{\|M_i\|_2 \|N_j\|_2}, & M \neq N. \end{cases} \tag{3}$$
where $M_j$ and $M_k$ denote the column vectors of the matrix $M$. Without causing confusion, we use $\mu(M)$ to denote $\mu(M, M)$. To measure the coherence between different projectors, we introduce the Babel function (Li & Lin, 2018), which measures the maximum total coherence between a fixed atom and a collection of other atoms in a dictionary, and can be described as follows:
$$B(M) = \max_{\Lambda, |\Lambda| = M} \max_{i \notin \Lambda} \sum_{j \in \Lambda} \frac{|\langle M_i, M_j \rangle|}{\|M_i\| \, \|M_j\|}. \tag{4}$$
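Both quantities can be computed from normalized Gram matrices; a minimal numpy sketch follows, with the Babel function taking the subset size |Λ| as an explicit argument (for a fixed atom, the worst-case Λ is simply its largest coherences).

```python
import numpy as np

def coherence(M, N=None):
    """Eq. (3): maximum normalized inner product between columns."""
    N = M if N is None else N
    Mn = M / np.linalg.norm(M, axis=0)  # column-normalize
    Nn = N / np.linalg.norm(N, axis=0)
    G = np.abs(Mn.T @ Nn)
    if N is M:
        np.fill_diagonal(G, 0.0)  # exclude self-coherence when M = N
    return G.max()

def babel(M, subset_size):
    """Eq. (4): max over atoms i of the summed coherence with the
    `subset_size` most coherent other atoms (equivalent to max over subsets)."""
    Mn = M / np.linalg.norm(M, axis=0)
    G = np.abs(Mn.T @ Mn)
    np.fill_diagonal(G, -np.inf)  # exclude the atom itself
    top = -np.sort(-G, axis=1)[:, :subset_size]  # largest coherences per atom
    return top.sum(axis=1).max()
```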
With the concept of a coherence metric in mind, we then introduce the main optimization objective for finding the projectors. Specifically, suppose that the DNN has learned tasks $\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_{t-1}$ in subspaces $\mathcal{S}_1, \mathcal{S}_2, \ldots, \mathcal{S}_{t-1}$, respectively, and let $P_1, P_2, \ldots, P_{t-1}$ denote the projectors of all previous tasks. When learning task $\mathcal{T}_t$, we project features and gradients into a $d_t$-dimensional low-coherence subspace $\mathcal{S}_t$ with the projector $P_t$, so that LcSP can prevent catastrophic forgetting. The projector $P_t$ can be found by optimizing
$$\arg\min B(P_t), \quad \text{s.t. } P_t \in \mathbb{R}^{m \times m}, \ \operatorname{rank}(P_t) = d_t. \tag{5}$$
Two considerations need to be taken into account when solving Eq. (5): respecting the rank constraint, and avoiding the entries of $P_t$ being extremely large or extremely small. With these considerations in mind, we can rephrase the rank- and scale-constrained problem as a problem on a Riemannian manifold, more specifically on the Oblique manifold $\mathcal{OM}(m, d_t)$, by setting $P_t = O_t O_t^\top$ and normalizing the columns of $O_t$, i.e., $\operatorname{diag}(O_t^\top O_t) = I_{d_t}$, where $\operatorname{diag}(\cdot)$ extracts the diagonal part and $I_n$ is the $n \times n$ identity matrix. With these settings, the new cost function $J(\cdot)$ and the optimization problem can be described as follows:
$$J(O_t) = \begin{cases} \lambda \cdot B(O_t O_t^\top) + \gamma \cdot \mu(O_t O_t^\top), & t > 1, \\ \mu(O_t O_t^\top), & t = 1, \end{cases} \qquad O_t = \arg\min J(O_t), \quad \text{s.t. } O_t \in \mathcal{OM}(m, d_t). \tag{6}$$
In the cost function $J(O_t)$, we split the optimization objective into the inter-task term $B(O_t O_t^\top)$ and the intra-task term $\mu(O_t O_t^\top)$, and use the parameters λ and γ to trade off between them. Here, the intra-task coherence is minimized to keep $O_t$ full rank. Meeting the full-rank constraint helps to balance plasticity and stability as the number of tasks increases. Relevant ablation studies and numerical analyses are given in the appendix.
Optimization on the Oblique manifold, i.e., where the solution lies on the Oblique manifold, is a well-established area of research (Absil et al., 2009; Absil & Gallivan, 2006; Selvan et al., 2012). Here, we briefly summarize the main steps of the optimization process. Formally, the Oblique manifold $\mathcal{OM}(n, p)$ is defined as
$$\mathcal{OM}(n, p) \triangleq \{X \in \mathbb{R}^{n \times p} : \operatorname{diag}(X^\top X) = I_p\}, \tag{7}$$
representing the set of all $n \times p$ matrices with normalized columns. $\mathcal{OM}$ can also be considered as an embedded Riemannian manifold of $\mathbb{R}^{n \times p}$, endowed with the canonical inner product
$$\langle X_1, X_2 \rangle = \operatorname{trace}\big( X_1^\top X_2 \big), \tag{8}$$
where $\operatorname{trace}(\cdot)$ represents the sum of the diagonal elements of the given matrix. For a given point $X$ on $\mathcal{OM}$, the tangent space at $X$, denoted by $T_X \mathcal{OM}$, is defined as
$$T_X \mathcal{OM}(n, p) = \{U \in \mathbb{R}^{n \times p} : \operatorname{diag}(X^\top U) = 0\}. \tag{9}$$
Further, the tangent space projector $P_X$ at $X$, which projects $H \in \mathbb{R}^{n \times p}$ into $T_X \mathcal{OM}$, is given by
$$P_X(H) = H - X \operatorname{ddiag}\big( X^\top H \big), \tag{10}$$
where $\operatorname{ddiag}$ sets all off-diagonal entries of a matrix to zero. When optimizing on $\mathcal{OM}$, the k-th iterate $X_k$ must move along a descent curve on $\mathcal{OM}$ for the cost function, such that the next iterate $X_{k+1}$ remains on the manifold. This is achieved by the retraction
$$R_{X_k}(U) = \operatorname{normalize}(X_k + U), \tag{11}$$
where $\operatorname{normalize}$ scales each column of the input matrix to have unit length. Finally, with this knowledge, we can extend the gradient descent algorithm to solve any unconstrained optimization problem on $\mathcal{OM}$, which can be summarized as
$$U = P_{X_k}(\nabla_{X_k} J), \qquad X_{k+1} = R_{X_k}(-\alpha U), \tag{12}$$
where $\nabla_{X_k} J$ denotes the Euclidean gradient at the k-th iterate and α is the step size. Our algorithm for finding $O_t$ on $\mathcal{OM}$ for task $\mathcal{T}_t$ is summarized in Algorithm 1.
Algorithm 1 Construct $O_t$ on $\mathcal{OM}$ for Task $\mathcal{T}_t$
Input: $O_1, \ldots, O_{t-1}$. Output: $O_t$.
1: $R_X(U) := \operatorname{normalize}(X + U)$ ▷ normalize scales each column of the input matrix to have norm 1
2: $X_0 \leftarrow$ random initialization on $\mathcal{OM}$
3: $k \leftarrow 0$
4: Set tolerance error $0 \le E \ll 1$
5: while True do
6:   $G \leftarrow \nabla J(X_k)$ ▷ compute the Euclidean gradient
7:   $U \leftarrow G - X_k \operatorname{ddiag}(X_k^\top G)$ ▷ compute the Riemannian gradient
8:   if $\|U\| \le E$ then break
9:   Choose $\alpha \in (0, 0.5)$, $\beta \in (0, 1)$; $t \leftarrow 1$ ▷ initial step size
10:  while $J(R_{X_k}(-t \cdot U)) > J(X_k) - \alpha \cdot t \cdot \|U\|_2^2$ do $t \leftarrow \beta \cdot t$ ▷ backtracking line search
11:  $X_{k+1} \leftarrow R_{X_k}(-t \cdot U)$ ▷ update the next iterate, fixed on the manifold
12:  $k \leftarrow k + 1$
13: end while
14: return $O_t = X_k$
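A compact sketch of the descent loop of Algorithm 1; the cost J and its Euclidean gradient are passed in as black boxes, and the backtracking constants are fixed values within the α, β ranges stated in the algorithm.

```python
import numpy as np

def retract(X):
    """Scale each column of X to unit length (retraction onto OM, Eq. 11)."""
    return X / np.linalg.norm(X, axis=0, keepdims=True)

def oblique_descent(J, grad_J, n, p, tol=1e-6, alpha=0.3, beta=0.5, seed=0):
    """Gradient descent on the Oblique manifold OM(n, p), as in Algorithm 1."""
    rng = np.random.default_rng(seed)
    X = retract(rng.standard_normal((n, p)))  # random initialization on OM
    while True:
        G = grad_J(X)                          # Euclidean gradient
        U = G - X * np.diag(X.T @ G)           # project onto tangent space (Eq. 10)
        if np.linalg.norm(U) <= tol:
            return X
        t = 1.0                                # backtracking line search
        while J(retract(X - t * U)) > J(X) - alpha * t * np.linalg.norm(U) ** 2:
            t *= beta
        X = retract(X - t * U)                 # retraction keeps the iterate on OM
```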
4.2 THE APPLICATION OF LOW-COHERENCE PROJECTORS IN DNNS
With LcSP at hand, we now introduce some technical details of applying LcSP in DNNs. When learning task $\mathcal{T}_t$, LcSP first constructs a task-specific projector $P^l_t$ for each layer before training, and freezes the projectors during training. These projectors are used to project the features and gradients, ensuring that the DNN learns in the low-coherence subspace. Specifically, suppose that a network $f$ with $L$ linear layers is used as the DNN architecture, and let $W^l_t$, $\mathbf{x}^l_t$, $\mathbf{z}^l_t$, $\sigma^l$, and $P^l_t$ denote the model parameters, the input features, the output features, the activation function, and the introduced low-coherence projector of layer $l \in \{1, \ldots, L\}$, respectively. LcSP introduces $P^l_t$ immediately after $W^l_t$, such that the pre-activation features are projected into the subspace, i.e.,
$$\mathbf{z}^l_t = (\mathbf{x}^l_t W^l_t)\, P^l_t, \qquad \mathbf{x}^{l+1}_t = \sigma^l(\mathbf{z}^l_t). \tag{13}$$
According to the chain rule, the gradients at $W^l_t$ are also multiplied by $P^l_t$ in backpropagation, as follows:
$$\frac{\partial \mathcal{L}}{\partial (W^l_t)_{(i,:)}} = \frac{\partial \mathcal{L}}{\partial \mathbf{z}^l_t} \frac{\partial \mathbf{z}^l_t}{\partial (W^l_t)_{(i,:)}} = \frac{\partial \mathcal{L}}{\partial \mathbf{z}^L_t} \prod_{k=l}^{L-1} \frac{\partial \mathbf{z}^{k+1}_t}{\partial \mathbf{z}^k_t} \cdot (\mathbf{x}^l_t)_i \cdot P^l_t, \tag{14}$$
where $(W^l_t)_{(i,:)}$ represents the $i$-th row of $W^l_t$ and $(\mathbf{x}^l_t)_i$ is the $i$-th element of $\mathbf{x}^l_t$. In Convolutional Neural Networks (CNNs), the input and output typically represent image features and have more than two dimensions, e.g., input channel, output channel, height, and width. In this case, we reshape $\mathbf{z}^l \in \mathbb{R}^{c_{out} \times (c_{in} \cdot h \cdot w)}$ to $\mathbf{z}^l \in \mathbb{R}^{(c_{in} \cdot h \cdot w) \times c_{out}}$ and align the dimension of the projector with the output channel, so that $P^l_t \in \mathbb{R}^{c_{out} \times c_{out}}$. After the projection, we restore the shape of $\mathbf{z}^l_t$ so that it can be used as input to the next layer.
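A minimal PyTorch-style sketch of Eq. (13) for a linear layer; the factor $O_t$ is assumed to be precomputed by Algorithm 1, and registering $P = O O^\top$ as a frozen buffer means autograd multiplies the weight gradient by P exactly as in Eq. (14).

```python
import torch
import torch.nn as nn

class ProjectedLinear(nn.Module):
    """Linear layer followed by a fixed low-coherence projector P = O O^T (Eq. 13)."""
    def __init__(self, in_dim, out_dim, O):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # P is frozen during training; backprop through z = (x W) P projects the
        # gradient at W by P automatically (Eq. 14).
        self.register_buffer("P", O @ O.t())

    def forward(self, x):
        return self.linear(x) @ self.P
```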
Overcoming Catastrophic Forgetting in BN-based Models BN is a widely used module in DNNs that makes training faster and more stable through normalization of the layers' features by re-centering and re-scaling (Ioffe & Szegedy, 2015). However, re-centering and re-scaling the layers' features changes the data distribution (e.g., the mean and the variance) of the features of previous tasks, which often leads to catastrophic forgetting in LcSP: when learning a new task $\mathcal{T}_t$, the parameters learned for previous tasks may no longer work due to the change in feature distribution caused by BN. To solve this problem, we propose two strategies in LcSP: strategy (1), learning a specific BN for each task, or strategy (2), using GN instead of BN. We verify the effectiveness of these two strategies in experiments and compare their performance in §5.
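A sketch of strategy (1), keeping one BN module per task so that the running statistics of earlier tasks are never overwritten; the availability of the task identifier at training and test time is consistent with the TIL setting.

```python
import torch.nn as nn

class TaskSpecificBN(nn.Module):
    """Strategy (1): a separate BatchNorm per task, selected by the task id."""
    def __init__(self, num_features, num_tasks):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(num_tasks))

    def forward(self, x, task_id):
        return self.bns[task_id](x)
```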
4.3 METHOD ANALYSIS
In this section, we provide an analysis of the plasticity and stability of LcSP.
Stability Analysis Let $\theta = \{W^l_t\}_{l=1}^L$ denote the parameter set of $f$; let $\Delta\theta = \{\Delta W^1_t, \ldots, \Delta W^L_t\}$ denote the set of parameter changes after learning task $\mathcal{T}_t$; let $\mathcal{P}_t = \{P^l_t\}_{l=1}^L$ denote the projector set obtained by Algorithm 1; and let $\mathbf{x}^l_{q,t}$ and $\mathbf{z}^l_{q,t}$ denote the input and output of layer $l$ when feeding the data of task $\mathcal{T}_q$ ($q \le t$) into the network $f$ after it has been optimized on task $\mathcal{T}_t$.
Lemma 1. Assume that $f$ is fed the data of task $\mathcal{T}_q$ ($q < t$). Then $f$ effectively overcomes catastrophic forgetting if
$$\mathbf{z}^l_{q,q} \approx \mathbf{z}^l_{q,t}, \quad \forall q \le t \tag{15}$$
holds for $l \in \{1, 2, \ldots, L\}$. Lemma 1 suggests that $f$ can overcome catastrophic forgetting if its output on previous tasks is invariant. In the following, we prove that LcSP achieves approximate invariance of the output on previous tasks.
Proof. Suppose $q = t - 1$. When $l = 1$, $\mathbf{x}^l_{q,t} = \mathbf{x}^l_{q,q}$. Then
$$\mathbf{z}^l_{q,t} = \mathbf{x}^l_{q,t}(W^l_q + \Delta W^l_t)\, P^l_q = \mathbf{x}^l_{q,t} W^l_q P^l_q + \mathbf{x}^l_{q,t} \Delta W^l_t P^l_q = \mathbf{z}^l_{q,q} + \mathbf{x}^l_{q,t} \Delta W^l_t P^l_q. \tag{16}$$
Let $\mathbf{g}^l_t$ denote the gradient when training the network on task $\mathcal{T}_t$. In backpropagation, $\Delta W^l_t = \mathbf{g}^l_t P^l_t$. Then $\mathbf{x}^l_{q,t} \Delta W^l_t P^l_q = \mathbf{x}^l_{q,t} \mathbf{g}^l_t P^l_t P^l_q$. If the inter-task coherence $\mu(P^l_t, P^l_q) \approx 0$, then $P^l_t P^l_q \approx 0$; projectors satisfying this condition can be found by Algorithm 1. We can prove that $\mathbf{z}^l_{q,q} \approx \mathbf{z}^l_{q,t}$ holds for all layers by repeating the above process. This proof also generalizes to any previous task $\mathcal{T}_q$.
Plasticity Analysis Let $\tilde{\mathbf{g}}^l_t = \mathbf{g}^l_t P^l_t$ denote the projected gradient at $W^l_t$. The network $f$ can achieve the optimal loss on task $\mathcal{T}_t$ if $\langle \mathbf{g}^l_t, \tilde{\mathbf{g}}^l_t \rangle > 0$ holds for each $l \in \{1, \ldots, L\}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. Here, we prove that this is the case.
Proof. With $\tilde{\mathbf{g}}^l_t = \mathbf{g}^l_t P^l_t$, we have
$$\langle \mathbf{g}^l_t, \tilde{\mathbf{g}}^l_t \rangle = \mathbf{g}^l_t (\tilde{\mathbf{g}}^l_t)^\top = \mathbf{g}^l_t O^l_t (O^l_t)^\top (\mathbf{g}^l_t)^\top = \langle \mathbf{g}^l_t O^l_t, \mathbf{g}^l_t O^l_t \rangle = \| \mathbf{g}^l_t O^l_t \|^2 > 0. \tag{17}$$
Note that $\| \mathbf{g}^l_t O^l_t \|^2$ is always positive unless $\mathbf{g}^l_t O^l_t = 0$. This result readily generalizes to every layer.
5 EXPERIMENTS
In this section, we evaluate our approach on several popular continual learning benchmarks and compare LcSP with previous state-of-the-art methods. The accuracy and forgetting results demonstrate the effectiveness of LcSP, especially when the number of tasks is large.
5.1 BENCHMARKS
Benchmarks for Learning 20 Tasks We conducted this experiment on four image classification datasets: Permuted MNIST, Rotated MNIST, Split CIFAR100 and Split miniImageNet. Permuted MNIST is constructed by randomly rearranging MNIST (LeCun, 1998) image pixels, using different seeds for different tasks. Rotated MNIST is constructed by rotating the MNIST images by a certain angle; the rotation angle is a random value in $[0, \pi]$, selected arbitrarily for different tasks. In this experimental setup, both of the above MNIST datasets contain 20 tasks, each containing 10,000 samples from 10 classes. Split CIFAR is constructed by splitting CIFAR100 into multiple tasks, where each task contains the data pertaining to five random classes (without replacement) out of the total 100 classes. Split miniImageNet (Vinyals et al., 2016) is a subset of ImageNet; each task contains the data from five random classes (without replacement) out of 100 classes. Both CIFAR100 and miniImageNet contain 20 tasks, each containing 250 samples from each of the five classes.
Benchmarks for Learning 150 Tasks and 64 Tasks This experiment was conducted on two image classification datasets: Permuted MNIST and Permuted CIFAR10. Both Permuted MNIST and Permuted CIFAR10 are obtained by randomly rearranging image pixels. In this experimental setup, Permuted MNIST contains 150 tasks, and Permuted CIFAR10 contains 64 tasks, each containing 10,000 samples from 10 classes.
5.2 BASELINES
We compare the proposed method with SOTA based on GOP. As aforementioned, we generalize GOP into COP and SOP. For methods based on the COP strategy, we compare proposed method with OWM Zeng et al. (2019), Adam-NSCL Wang et al. (2021) and GPM Saha et al. (2021). For methods based on the SOP strategy, we compare the proposed method with ORTHOG-SUBSPACE Chaudhry et al. (2020). Moreover, we also compare our method with HAT Serra et al. (2018), EWC Kirkpatrick et al. (2017), ER-Ring Chaudhry et al. (2019), AGEM Chaudhry et al. (2018b) and Kernel Continual Learning (KCL) Derakhshani et al. (2021).
5.3 IMPLEMENTATION DETAILS
Learning 20 Tasks For experiments on Permuted MNIST and Rotated MNIST, all methods use a fully connected network with two hidden layers, each with 256 neurons, using ReLU activations. For experiments on CIFAR and miniImageNet, all methods use the standard ResNet18 architecture, except OWM, HAT, and GPM, which use AlexNet (Krizhevsky et al., 2012). As described in §4.2, the proposed LcSP makes two changes to the BN layers in standard ResNet18: (1) learning a specific BN for each task and (2) replacing BN with GN. We also apply these strategies to ORTHOG-SUBSPACE for additional comparison. For experiments on MNIST, all tasks share the same classifier. For experiments on CIFAR and miniImageNet, each task requires a task-specific classifier. For all experiments, LcSP does not use episodic memory to store data samples for replay. For all methods, we uniformly use Stochastic Gradient Descent (SGD). The learning rate is set to 0.01 for experiments on MNIST and 0.003 for experiments on CIFAR and ImageNet. Both λ and γ in Eq. (6) are set to 1. All experiments were run five times with five different random seeds, with a batch size of 10.
Learning 150 Tasks and 64 Tasks For experiments on Permuted CIFAR10, LcSP uses the ResNet18 architecture and applies strategy (1) to the BN layers in ResNet18. To compare the performance of different GOP strategies, we did not use episodic memory for ORTHOG-SUBSPACE. Except for these changes, the other experimental settings are the same as described above.
5.4 EXPERIMENTAL RESULTS
Comparisons of Learning 20 Tasks Table 1 compares the average accuracy and forgetting results of the proposed LcSP and its variants (LcSP-BN and LcSP-GN) with baselines on the four continual learning benchmarks. Therein, LcSP-BN and LcSP-GN adopt strategy (1) and (2) described in § 4.2, respectively. First, as shown in Table 1, the proposed methods outperform all baselines on MNIST and miniImageNet. On Permuted MNIST and Rotated MNIST, the average accuracy of LcSP surpasses the baselines by 23.8% ∼ 5.6% and 43% ∼ 4.8%, respectively. On miniImageNet, the average accuracy of LcSP-BN surpasses the baselines by 44.9% ∼ 4.63%. On CIFAR100, the proposed LcSP-BN achieved a competitive performance with the second highest average accuracy, 3.25% lower than Adam-NSCL, and a forgetting rate of 0. The average accuracy of LcSP-GN also outperforms most baselines, being lower than Adam-NSCL, HAT and GPM on CIFAR100 but higher than compared methods on miniImageNet. These results suggest that minimizing the inter-task and intra-task coherence with low-coherence projectors is an effective strategy for solving catastrophic forgetting. Secondly, results on CIFAR100 and miniImageNet also show that BN in ORTHOG-SUBSPACE Chaudhry et al. (2020) and LcSP may change previous tasks’ data distribution and lead to catastrophic forgetting. Both strategies (1) and (2) described in § 4.2 can effectively solve this problem.
Comparisons of Learning 150 Tasks and 64 Tasks To demonstrate the strong advantage of the proposed methods in learning long sequences of tasks, the following experiments compare results on 64 tasks and 150 tasks. Note that, in Fig. 1, LcSP (orthogonal) and LcSP-BN (orthogonal) use orthogonal projectors as comparisons, while LcSP (low-coherence) and LcSP-BN (low-coherence) use low-coherence projectors. Fig. 1(a) and 1(b) report the average accuracy and forgetting of the last 10 tasks when learning 150 tasks on Permuted MNIST. Fig. 1(c) and 1(d) report the average accuracy and forgetting of the last 5 tasks when learning 64 tasks on Permuted CIFAR10. The average accuracy of all methods, except the proposed LcSP-BN (low-coherence), dramatically degrades or is consistently low as the number of tasks increases. Furthermore, it can be seen from Fig. 1(d) that all methods except ORTHOG-SUBSPACE show almost no forgetting. These results indicate that methods using orthogonal projectors gradually lose their learning capacity as the number of tasks increases. The proposed method uses low-coherence projectors to relax the orthogonality restriction, effectively solving this problem. Fig. 2 gives the ablation study and shows the performance of our method with different λ and γ. When λ equals γ, the average accuracy on Permuted MNIST is highest; results are worst when either λ or γ equals zero. These results indicate that both inter-task and intra-task coherence should be minimized to resolve the plasticity-stability dilemma.
6 CONCLUSION
This paper proposed a novel gradient projection approach for continual learning to address gradient orthogonal projection’s learning capacity degradation problem. Instead of learning in orthogonal subspace, we propose projecting features and gradients via low-coherence projectors to minimize inter-task and intra-task coherence. Additionally, two strategies have been proposed to mitigate the catastrophic forgetting caused by the BN layer, i.e., replacing BN with GN or learning specific BN for each task. Extensive experiments show that our approach works well in alleviating forgetting and has a significant advantage in maintaining learning capacity, especially in learning long sequence tasks.
A APPENDIX
In this section, we give the implementation details about the experiments on Permuted MNIST and Permuted CIFAR10, to help readers reproduce these experiments. Additionally, more ablation studies and experimental results are provided here to further support the conclusions and contributions.
A.1 IMPLEMENTATION DETAILS
The main hyperparameter settings are listed in Tables 2 and 3. For the baselines, we adopt the default settings provided in their code to obtain their proper performance. For a fair comparison, we use a uniform batch size for all methods.
A.2 ABLATION STUDIES AND ADDITIONAL RESULTS
Additional results on Permuted MNIST and Permuted CIFAR10 Readers may wonder whether our conclusion holds when the average performance is evaluated over more tasks (e.g., the average accuracy and forgetting on the last 20 tasks). As shown in Fig. 3, LcSP still outperforms all baselines by a significant margin. However, the learning capacity degradation of the baselines becomes harder to see; e.g., the average accuracy of OWM on Permuted CIFAR10 is consistently low rather than significantly decreasing. To further investigate the learning capacity degradation problem, we report the accuracy of the baselines on the last task of Permuted MNIST and Permuted CIFAR10. As shown in Fig. 7, all methods except LcSP suffer from this problem to different degrees, with accuracy drops of 24.63%–66.16% on Permuted MNIST and 3.48%–24.8% on Permuted CIFAR10 relative to the initial tasks. These results suggest that this problem is the critical factor degrading the performance of GOP-based methods when the number of tasks is large.
Ablation studies and experiments for rank and scale constraints Further ablation studies and experiments are conducted to investigate the effects of the rank and scale constraints on the expressive power (plasticity) and stability of DNNs.
The result in Fig. 5(a) suggests that projecting features or gradients into subspaces of low dimension (e.g., lower than 5 in Fig. 5(a)) decreases the expressive power of the DNN. GOP methods and LcSP rely on projectors to map the features or gradients (or both) into a d-dimensional subspace, which can also be viewed as a form of dimension reduction. Dimension reduction is motivated by a consensus in the high-dimensional data analysis community: data can be summarized in a low-dimensional space embedded in a high-dimensional space, such as a nonlinear manifold Levina & Bickel (2004). The dimension of this low-dimensional space is known as the intrinsic dimension D Carreira-Perpinán (1997). If d is too small, e.g., d ≪ D, important data features will be "collapsed" onto the same dimension. From the perspective of training a DNN, if the gradients are projected into a subspace of dimension lower than D, the DNN cannot activate sufficient parameters to learn the representation of the task.
Due to the unknown number of tasks to be learned, the limited dimension of the features, and the strict orthogonality constraint between projectors, an orthogonal projector cannot be constrained to a fixed-rank manifold. As shown in Fig. 5(b), the rank of the orthogonal projector decreases as the number of tasks increases. Methods using orthogonal projectors therefore tend to ignore the intrinsic dimension of the data (features or gradients) and ultimately suffer from the learning capacity degradation problem; a simple dimension count makes this concrete (see the sketch below). In contrast to GOP methods, LcSP relaxes the orthogonality constraint and meets the rank constraint by optimizing the intra-task coherence on the Oblique manifold, and thus does not suffer from this problem.
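The dimension-counting argument is easy to verify numerically. The toy sketch below is our illustration (not from the paper): mutually orthogonal projectors must occupy disjoint subspaces of the m-dimensional feature space, so their ranks sum to at most m.

```python
# Toy illustration: with feature dimension m and a desired per-task rank d,
# mutually orthogonal projectors satisfy sum_t rank(P_t) <= m, so at most
# m // d tasks can be learned before the available rank collapses to zero.
m, d, num_tasks = 100, 10, 15
used, ranks = 0, []
for t in range(num_tasks):
    r = min(d, m - used)   # rank still available to the t-th orthogonal projector
    ranks.append(r)
    used += r
print(ranks)  # [10]*10 followed by [0]*5: tasks 11-15 get no capacity at all
```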
Finally, Fig. 6 gives an ablation study of the scale constraint on the projector's columns. In Fig. 6, average accuracy is highest when the columns of the projector have unit length; results degrade when the column lengths are too small or too large.
On an RTX 3060 (12 GB), we measured the efficiency of LcSP and the compared methods on Permuted CIFAR10 (64 tasks; each image is 3 × 32 × 32), using AlexNet for OWM and GPM and ResNet18 for the other methods. We report four metrics: floating-point operations (FLOPs), training time per epoch (Time (s)), the number of epochs needed to converge (Epochs), and the average inference time (Mean inference time (ms)). Detailed results are listed in Table 4.
A.3 METHOD ANALYSIS
The mechanism of low-coherence learning and the difference from orthogonal learning Recall Lemma 1 from Section 4.3. Lemma 1. Assume that $f$ is fed the data of task $\mathcal{T}_q$ ($q < t$); then $f$ effectively overcomes catastrophic forgetting if
$$z^l_{q,q} \approx z^l_{q,t}, \quad \forall q \leq t.$$
Here, $z^l_{q,t}$ denotes the output feature of the $l$-th layer when data from task $\mathcal{T}_q$ are fed into the DNN after it has been trained on task $\mathcal{T}_t$. Consider the case of only two tasks, i.e., $q = t - 1$. Applying LcSP's projection strategy, we have
$$z^l_{q,t} = z^l_{q,q} + x^l_{q,t}\,\Delta W^l_t P^l_q = z^l_{q,q} + x^l_{q,t}\, g^l_t P^l_t P^l_q.$$
Here, $g^l_t$, $x^l_{q,t}$ and $P^l_t$ denote the gradient, the input data, and the (symmetric) projection matrix of task $\mathcal{T}_t$ at the $l$-th layer, respectively. The term $x^l_{q,t} g^l_t P^l_t P^l_q$ can be viewed as the forgetting, since it changes the DNN's output on previous data. As we cannot change $x^l_{q,t}$ and $g^l_t$, the key to reducing $x^l_{q,t} g^l_t P^l_t P^l_q$ is to minimize $P^l_t P^l_q$: if $P^l_t$ is orthogonal to $P^l_q$, the forgetting is exactly 0. Now consider reducing forgetting when $P^l_t$ cannot be orthogonal to $P^l_q$ (e.g., when the number of tasks is large and the dimension of the space is limited). If we optimized $P^l_t P^l_q$ directly, we might obtain $P^l_t = 0$, or a $P^l_t$ whose columns are scaled too small. To find a useful $P^l_t$, LcSP constrains each column vector to have length 1, i.e., it searches for $P^l_t$ on the Oblique manifold (OM). To reduce forgetting, we then optimize the inter-task coherence $\mu(P^l_t, P^l_q)$ on OM so that the maximum entry of $P^l_t P^l_q$ is minimized.
In the worst case, we may obtain a $P^l_t$ of rank 1 (i.e., a projector built from a single column direction). To avoid this, we optimize the intra-task coherence $\mu(P^l_t)$ along with the inter-task coherence, forcing the projector to satisfy the rank constraint and thereby maintain the learning capacity of the DNN (as shown in Fig. 5(a), the rank of the projector affects the plasticity of the DNN). A rough numerical illustration follows.
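In the sketch below (our code; random unit-norm columns only stand in for the optimized low-coherence projectors), two projectors with low mutual coherence yield a small forgetting term $P_t P_q$ while each retains full rank $d$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 256, 32
O_q = rng.standard_normal((m, d)); O_q /= np.linalg.norm(O_q, axis=0)
O_t = rng.standard_normal((m, d)); O_t /= np.linalg.norm(O_t, axis=0)
P_q, P_t = O_q @ O_q.T, O_t @ O_t.T     # projectors P = O O^T
print(np.abs(P_t @ P_q).max())          # small entries -> small forgetting term
print(np.linalg.matrix_rank(P_t))       # full rank d -> learning capacity preserved
```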
In general, low-coherence projection, which can be seen as a relaxed orthogonality constraint, aims to better balance plasticity and stability; it is motivated by our observation that orthogonal projections reduce the plasticity of the model and leave the DNN unable to adapt to new environments. | 1. What is the focus and contribution of the paper regarding continual learning?
2. What are the strengths of the proposed approach, particularly in terms of learning in low-coherence subspace?
3. What are the weaknesses of the paper, especially regarding its limitations and comparisons with other works?
4. Do you have any concerns or questions about the method's ability to alleviate forgetting while not isolating subspaces?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Regularizing the subspace of the network representation has been used in continual learning to alleviate forgetting. To mitigate the learning capacity degradation caused by orthogonal projection/regularization, this paper proposes to learn in low-coherence subspaces. It learns task-specific projectors that represent each task in a subspace with low coherence to the others. To learn in low-coherence subspaces, the projectors are learned on the Oblique manifold. The proposed method performs better than the compared methods (especially the orthogonal subspace learning method) in the task-incremental setting (with known task identifiers).
Strengths And Weaknesses
Strength
This paper proposes to learn the projections in a low-coherence subspace, instead of the orthogonal subspace, which makes better use of the model capacity while reducing parameter interference and causing less capacity degradation. This is well motivated.
The experiments support the claim about the benefits of learning in the low-coherence subspace, compared to learning with the orthogonal subspace.
The paper is generally written clearly, with some points that can be improved, as discussed in the following.
Weakness
The work can be seen as an extension of the orthogonal-subspace-based continual learning paper (Chaudhry et al. 2020). The main difference is that the projectors are learned on the Oblique manifold instead of the Riemannian manifold, which enables learning in low-coherence subspaces with some overlap between tasks. It brings limited novelty for continual learning.
Intuitively, the low-coherence subspace learning can be seen as a "relaxation" of orthogonal subspace learning. The mechanism of low-coherence learning and its difference from orthogonal learning are not clearly analyzed and discussed. -- Specifically, some questions could be asked and discussed. For example, compared with orthogonal subspace learning, how does the method alleviate forgetting while not totally isolating the subspaces?
The proposed method is limited to the task-incremental setting, where task identifiers are required in both training and testing phases, as shown on page 3 and in the experiments. This further limits the significance of the work, considering the modest technical novelty, although this is the same setting as (Chaudhry et al. 2020). The authors may further
Some detailed questions: -- What is the gap between the implementation and the analysis on page 6? -- How efficient is the proposed method? How does its running time compare to other methods?
Clarity, Quality, Novelty And Reproducibility
The whole paper is generally written clearly, and the work has good reproducibility. The details of the task-specific BN could be improved for better reproducibility. The analyses, discussion, and introduction of the settings could be improved. The novelty and contribution are not significant enough.
ICLR | Title
Continual Learning In Low-coherence Subspace: A Strategy To Mitigate Learning Capacity Degradation
Abstract
Methods using gradient orthogonal projection, an efficient strategy in continual learning, have achieved promising success in mitigating catastrophic forgetting. However, these methods often suffer from a learning capacity degradation problem as the number of tasks increases. To address this problem, we propose to learn new tasks in low-coherence subspaces rather than orthogonal subspaces. Specifically, we construct a unified cost function involving regular DNN parameters and gradient projections on the Oblique manifold, and we develop a gradient descent algorithm on this smooth manifold that minimizes the cost function while jointly minimizing both the inter-task and the intra-task coherence. Numerical experiments show that, compared with baselines, the proposed method has prominent advantages in maintaining learning capacity as tasks are added, especially for a large number of tasks.
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved promising performance on many tasks. However, they lack the ability to learn continually, i.e., they suffer from catastrophic forgetting French (1999) when learning sequential tasks, the phenomenon of new knowledge interfering with old knowledge. Research on continual learning, also known as incremental learning Aljundi et al. (2018a); Chaudhry et al. (2018a); Chen & Liu (2018); Aljundi et al. (2017) and sequential learning Aljundi et al. (2018b); McCloskey & Cohen (1989), aims to find effective algorithms that enable DNNs to simultaneously achieve plasticity and stability, i.e., both high learning capacity and high memory capacity.
Various methods have been proposed to avoid or mitigate catastrophic forgetting De Lange et al. (2019), either by replaying training samples Rolnick et al. (2019); Ayub & Wagner (2020); Saha et al. (2021), or by reducing mutual interference of model parameters, features, or model architectures between different tasks Zenke et al. (2017); Mallya & Lazebnik (2018); Wang et al. (2021). Among these methods, Gradient Orthogonal Projection (GOP) Chaudhry et al. (2020); Zeng et al. (2019); Farajtabar et al. (2020); Li et al. (2021) is an efficient continual learning strategy that advocates projecting gradients with an orthogonal projector to prevent knowledge interference between tasks. GOP-based methods have achieved encouraging results in mitigating catastrophic forgetting. However, from Fig. 1, we observe that these methods suffer from a learning capacity degradation problem: their learning capacity is gradually degraded as the number of tasks increases, until new tasks eventually become unlearnable. Specifically, when learning many tasks, e.g., more than 30 tasks in Fig. 1, their performance on new tasks dramatically decreases. These results suggest that GOP-based methods focus on stability and somewhat ignore plasticity. Ignoring plasticity may limit the task learning capacity of models, i.e., the number of tasks that a model can learn without forgetting.
To address this issue, we propose to learn new tasks in low-coherence subspaces rather than orthogonal subspaces. Specifically, low-coherence projectors are used at each layer to project features and gradients into low-coherence subspaces. To achieve this, we construct a unified cost function for finding the projectors and develop a gradient descent algorithm on the Oblique manifold that jointly minimizes inter-task coherence and intra-task coherence. Minimizing the inter-task coherence reduces mutual interference between tasks, and minimizing the intra-task coherence enhances the model's expressive power. Restricting the projectors to the Oblique manifold avoids scale ambiguity Aharon et al. (2006); Wei et al. (2017), i.e., it prevents the parameters of the projector from becoming extremely large or extremely small.
The main contributions of this work are summarized as follows. First, to address the learning capacity degradation problem of GOP, we propose a novel method, Low-coherence Subspace Projection (LcSP), which replaces orthogonal projectors with low-coherence gradient projectors, allowing the DNN to maintain both plasticity and stability. Second, we observe that GOP models with Batch Normalization (BN) Ioffe & Szegedy (2015) layers can suffer catastrophic forgetting caused by BN, and we propose two strategies in LcSP to solve this problem: replacing BN with Group Normalization (GN) Wu & He (2018), or learning a task-specific BN for each task.
2 RELATED WORK
In this section, we briefly review existing work on continual learning and GOP-based methods.
Replay-based Strategy The basic idea of this type of approach is to use limited memory to store small amounts of data (e.g., raw samples) from previous tasks, called episodic memory, and to replay them when training a new task. Some of the existing works focused on selecting a subset of raw samples from the previous tasks Rolnick et al. (2019); Isele & Cosgun (2018); Chaudhry et al. (2019); Zhang et al. (2020). In contrast, others concentrated on training a generative model to synthesize new data that can substitute for the old data Shin et al. (2017); Van de Ven & Tolias (2018); Lavda et al. (2018); Ramapuram et al. (2020).
Regularization-based Strategy This strategy prevents catastrophic forgetting by introducing a regularization term in the loss function that penalizes changes in the network parameters. Existing works can be divided into data-focused and prior-focused methods De Lange et al. (2021). Data-focused methods take the previous model as the teacher and the current model as the student, transferring knowledge from the teacher to the student through knowledge distillation. Typical methods include LwF Li & Hoiem (2017), LFL Jung et al. (2016), EBLL Rannen et al. (2017), DMC Zhang et al. (2020) and GD-WILD Lee et al. (2019). Prior-focused methods estimate a distribution over the model parameters, assign an importance score to each parameter, and penalize changes to significant parameters during learning. Relevant works include SI Zenke et al. (2017), EWC Kirkpatrick et al. (2017), RWalk Chaudhry et al. (2018a), AGS-CL Jung et al. (2020) and IMM Lee et al. (2017).
Parameter Isolation-based Strategy This strategy dynamically modifies the network architecture by pruning, parameter masking, or expansion to greatly or even completely reduce catastrophic forgetting. Existing works fall roughly into two categories. One isolates a separate sub-network for each task from a large network through pruning and parameter masks, including PackNet Mallya & Lazebnik (2018), PathNet Fernando et al. (2017), HAT Serra et al. (2018) and Piggyback Mallya et al. (2018). The other dynamically expands the network architecture, increasing the number of neurons or sub-network branches, to break the limits of expressive capacity (Rusu et al., 2016; Aljundi et al., 2017; Xu & Zhu, 2018; Rosenfeld & Tsotsos, 2018). However, as the number of tasks grows, this approach complicates the network architecture and increases computation and memory consumption.
Gradient Orthogonal Projection-based Strategy Methods based on the GOP strategy, which reduce catastrophic forgetting by projecting gradients or features with orthogonal projectors, have shown encouraging results in continual learning Farajtabar et al. (2020); Zeng et al. (2019); Saha et al. (2021); Wang et al. (2021); Chaudhry et al. (2020). According to how the projector is found, we further divide existing works into Context Orthogonal Projection (COP) and Subspace Orthogonal Projection (SOP). COP-based methods, such as OWM Zeng et al. (2019), Adam-NSCL Wang et al. (2021), and GPM Saha et al. (2021), rely on the context of previous tasks to build projectors. In contrast, SOP-based methods such as ORTHOG-SUBSPACE Chaudhry et al. (2020) use hand-crafted, task-specific orthogonal projectors and yield competitive results.
This paper proposes a novel approach to continual learning called LcSP. Compared with other GOP-based methods, LcSP trains the network in low-coherence subspaces to balance plasticity and stability, and it overcomes the learning capacity degradation problem that significantly decreases the performance of GOP methods as the number of tasks increases.
3 CONTINUAL LEARNING SETUP
This work adopts the Task-Incremental Learning (TIL) setting, where multiple tasks are learned sequentially. Assume there are $T$ tasks, denoted by $\mathcal{T}_t$ for $t = 1, \dots, T$, each with training data $\mathcal{D}_t = \{(x_i, y_i, \tau_t)\}_{i=1}^{N_t}$. Here, the data $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}_t$ are assumed to be drawn i.i.d. from some distribution, and $\tau_t \in \mathcal{T}$ denotes the task identifier. In the TIL setting, the data $\mathcal{D}_t$ can be accessed if and only if task $\mathcal{T}_t$ arrives. When episodic memory is adopted, a limited number of samples drawn from old tasks can be stored in a replay buffer $\mathcal{M}$, so that $\mathcal{D}_t \cup \mathcal{M}$ can be used for training when task $\mathcal{T}_t$ arrives. Assume a network $f$ parameterized by $\Phi = \{\theta, \varphi\}$ consists of two parts, where $\theta$ denotes the parameters of the backbone network and $\varphi$ those of the classifier. Let $f(x;\theta): \mathcal{X} \times \mathcal{T} \to \mathcal{H}$ denote the backbone network parameterized by $\theta = \{W^l\}_{l=1}^{L}$, which encodes a data sample $x$ into a feature vector, and let $f(x;\varphi): \mathcal{H} \to \mathcal{Y}$ denote the classifier parameterized by $\varphi = w$, which returns the classification result for the feature vector produced by $f(x;\theta)$. The goal of TIL is to learn the $T$ tasks sequentially with the network $f$ and achieve the optimal loss on all tasks.
Evaluation Metrics Once training on all tasks is finished, we evaluate the algorithm by computing the average accuracy $A$ and the forgetting $F$ Chaudhry et al. (2020) of the network on the $T$ tasks $\{\mathcal{T}_1, \dots, \mathcal{T}_T\}$. Since all tasks arrive sequentially, let $\mathrm{Acc}_{i,j}$ denote the test accuracy of the network on task $\mathcal{T}_i$ after learning task $\mathcal{T}_j$, where $i \leq j$. The average accuracy is defined as
$$A = \frac{1}{T}\sum_{i=1}^{T} \mathrm{Acc}_{i,T}, \qquad (1)$$
and the forgetting is defined as
$$F = \frac{1}{T-1}\sum_{i=1}^{T-1} \max_{j\in\{i,\dots,T-1\}} \left(\mathrm{Acc}_{i,j} - \mathrm{Acc}_{i,T}\right). \qquad (2)$$
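Both metrics are easy to compute from the matrix of test accuracies. A minimal sketch (our code; it assumes `acc[i, j]` stores the 0-indexed counterpart of $\mathrm{Acc}_{i+1,j+1}$ in a $T \times T$ array):

```python
import numpy as np

def average_accuracy(acc):
    # Eq. (1): mean final accuracy Acc_{i,T} over all T tasks.
    T = acc.shape[0]
    return acc[:, T - 1].mean()

def forgetting(acc):
    # Eq. (2): for each task i < T, the largest drop from any checkpoint
    # j in {i, ..., T-1} (0-indexed: columns i..T-2) to the final accuracy.
    T = acc.shape[0]
    drops = [acc[i, i:T - 1].max() - acc[i, T - 1] for i in range(T - 1)]
    return sum(drops) / (T - 1)
```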
4 CONTINUAL LEARNING IN LOW-COHERENCE SUBSPACES
In this section, we first introduce how to find task-specific, low-coherence projectors for LcSP on the Oblique manifold. We then describe how to use them in a specific DNN architecture to project features and gradients. Finally, we analyze the factors that enable LcSP to maintain plasticity and stability.
4.1 CONSTRUCTING LOW-COHERENCE PROJECTORS ON OBLIQUE MANIFOLD
Here, we first introduce the coherence metric, which is commonly used in compressed sensing and sparse signal recovery to describe the correlation between the columns of a measurement matrix Candes et al. (2011); Candes & Romberg (2007). Formally, the coherence of a matrix $M$ is defined as
$$\mu(M, N) = \begin{cases} \max\limits_{j<k} \dfrac{|\langle M_j, M_k \rangle|}{\|M_j\|_2 \|M_k\|_2}, & M = N \\[2mm] \max\limits_{i,j} \dfrac{|\langle M_i, N_j \rangle|}{\|M_i\|_2 \|N_j\|_2}, & M \neq N \end{cases} \qquad (3)$$
where $M_j$ and $M_k$ denote the column vectors of the matrix $M$. When no confusion arises, we write $\mu(M)$ for $\mu(M, M)$. To measure the coherence between different projectors, we introduce the Babel function Li & Lin (2018), which measures the maximum total coherence between a fixed atom and a collection of other atoms in a dictionary:
$$B(M) = \max_{\Lambda, |\Lambda| = M} \; \max_{i \notin \Lambda} \; \sum_{j \in \Lambda} \frac{|\langle M_i, M_j \rangle|}{\|M_i\| \, \|M_j\|} \qquad (4)$$
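In code, both quantities reduce to operations on the Gram matrix of unit-normalized columns. The sketch below is our own reading of Eqs. (3)–(4); in particular, we expose the set size $|\Lambda|$ of the Babel function as an explicit argument `k`:

```python
import numpy as np

def coherence(M, N=None):
    # Eq. (3): largest absolute correlation between unit-normalized columns.
    Mh = M / np.linalg.norm(M, axis=0, keepdims=True)
    if N is None:                        # intra-matrix case mu(M)
        G = np.abs(Mh.T @ Mh)
        np.fill_diagonal(G, 0.0)         # exclude the j = k terms
        return G.max()
    Nh = N / np.linalg.norm(N, axis=0, keepdims=True)
    return np.abs(Mh.T @ Nh).max()       # inter-matrix case mu(M, N)

def babel(M, k):
    # Eq. (4): worst-case total correlation between one column and k others.
    Mh = M / np.linalg.norm(M, axis=0, keepdims=True)
    G = np.abs(Mh.T @ Mh)
    np.fill_diagonal(G, 0.0)
    # For each column i, summing its k largest correlations with other columns
    # realizes the maximum over index sets Lambda not containing i.
    return np.sort(G, axis=1)[:, -k:].sum(axis=1).max()
```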
With the coherence metric in mind, we now introduce the main optimization objective for finding projectors. Suppose the DNN has learned tasks $\mathcal{T}_1, \mathcal{T}_2, \dots, \mathcal{T}_{t-1}$ in subspaces $\mathcal{S}_1, \mathcal{S}_2, \dots, \mathcal{S}_{t-1}$, respectively, and let $P_1, P_2, \dots, P_{t-1}$ denote the projectors of all previous tasks. When learning task $\mathcal{T}_t$, we project features and gradients into a $d_t$-dimensional low-coherence subspace $\mathcal{S}_t$ with projector $P_t$, so that LcSP can prevent catastrophic forgetting. The projector $P_t$ can be found by optimizing
$$\arg\min B(P_t), \quad \text{s.t. } P_t \in \mathbb{R}^{m \times m}, \ \mathrm{rank}(P_t) = d_t. \qquad (5)$$
Two considerations must be taken into account when solving Eq. (5): satisfying the rank constraint, and preventing the entries of $P_t$ from becoming extremely large or extremely small. With these considerations in mind, we rephrase the rank- and scale-constrained problem as a problem on a Riemannian manifold, specifically the Oblique manifold $\mathcal{OM}(m, d_t)$, by setting $P_t = O_t O_t^\top$ and normalizing the columns of $O_t$, i.e., $\mathrm{diag}(O_t^\top O_t) = I_{d_t}$, where $\mathrm{diag}(\cdot)$ extracts the diagonal matrix and $I_{d_t}$ is the $d_t \times d_t$ identity matrix. With these settings, the new cost function $J(\cdot)$ and optimization problem are as follows:
$$J(O_t) = \begin{cases} \lambda \cdot B(O_t O_t^\top) + \gamma \cdot \mu(O_t O_t^\top), & t > 1 \\ \mu(O_t O_t^\top), & t = 1 \end{cases}$$
$$O_t = \arg\min J(O_t), \quad \text{s.t. } O_t \in \mathcal{OM}(m, d_t). \qquad (6)$$
In the cost function $J(O_t)$, we split the objective into an inter-task term $B(O_t O_t^\top)$ and an intra-task term $\mu(O_t O_t^\top)$, with parameters $\lambda$ and $\gamma$ providing a trade-off between them. The intra-task coherence is optimized to keep $O_t$ full rank; meeting this full-rank constraint helps balance plasticity and stability as the number of tasks grows. Relevant ablation studies and numerical analyses are given in the appendix.
Optimization on the Oblique manifold, i.e., optimization whose solution lies on the Oblique manifold, is a well-established area of research Absil et al. (2009); Absil & Gallivan (2006); Selvan et al. (2012). Here, we briefly summarize the main steps of the optimization process. Formally, the Oblique manifold $\mathcal{OM}(n, p)$ is defined as
$$\mathcal{OM}(n, p) \triangleq \{X \in \mathbb{R}^{n \times p} : \mathrm{diag}(X^\top X) = I_p\}, \qquad (7)$$
i.e., the set of all $n \times p$ matrices with unit-norm columns. $\mathcal{OM}$ can also be considered an embedded Riemannian submanifold of $\mathbb{R}^{n \times p}$, endowed with the canonical inner product
$$\langle X_1, X_2 \rangle = \mathrm{trace}\left(X_1^\top X_2\right), \qquad (8)$$
where $\mathrm{trace}(\cdot)$ denotes the sum of the diagonal elements of the given matrix. For a point $X$ on $\mathcal{OM}$, the tangent space at $X$, denoted $T_X\mathcal{OM}$, is defined as
$$T_X\mathcal{OM}(n, p) = \{U \in \mathbb{R}^{n \times p} : \mathrm{diag}(X^\top U) = 0\}. \qquad (9)$$
Further, the tangent-space projector $P_X$ at $X$, which projects $H \in \mathbb{R}^{n \times p}$ onto $T_X\mathcal{OM}$, is given by
$$P_X(H) = H - X \,\mathrm{ddiag}\left(X^\top H\right), \qquad (10)$$
where $\mathrm{ddiag}$ sets all off-diagonal entries of a matrix to zero. When optimizing on $\mathcal{OM}$, the $k$-th iterate $X_k$ must move along a descent curve on $\mathcal{OM}$ for the cost function, so that the next iterate $X_{k+1}$ remains on the manifold. This is achieved by the retraction
$$R_{X_k}(U) = \mathrm{normalize}(X_k + U), \qquad (11)$$
where $\mathrm{normalize}$ scales each column of the input matrix to unit length. With this machinery, we can extend gradient descent to solve unconstrained optimization problems on $\mathcal{OM}$, summarized as
$$U = P_{X_k}(\nabla_{X_k} J), \qquad X_{k+1} = R_{X_k}(-\alpha U), \qquad (12)$$
where $\nabla_{X_k} J$ denotes the Euclidean gradient at the $k$-th iterate and $\alpha$ is the step size. Our algorithm for finding $O_t$ on $\mathcal{OM}$ for task $\mathcal{T}_t$ is summarized in Algorithm 1.
Algorithm 1 Construct $O_t$ on $\mathcal{OM}$ for task $\mathcal{T}_t$
Input: $O_1, \dots, O_{t-1}$.  Output: $O_t$.
1: $R_X(U) := \mathrm{normalize}(X + U)$ ▷ normalize scales each column of the input matrix to norm 1
2: $X_0 \leftarrow$ random initialization on $\mathcal{OM}$
3: $k \leftarrow 0$; set tolerance $0 \leq E \ll 1$
4: while True do
5:   $G \leftarrow \nabla J(X_k)$ ▷ Euclidean gradient
6:   $U \leftarrow G - X_k\,\mathrm{ddiag}(X_k^\top G)$ ▷ Riemannian gradient (tangent projection)
7:   if $\|U\| \leq E$ then break
8:   choose $\alpha \in (0, 0.5)$, $\beta \in (0, 1)$; $t \leftarrow 1$ ▷ initial step size
9:   while $J(R_{X_k}(-t \cdot U)) > J(X_k) - \alpha \cdot t \cdot \|U\|_2^2$ do ▷ backtracking line search
10:    $t \leftarrow \beta \cdot t$
11:  end while
12:  $X_{k+1} \leftarrow R_{X_k}(-t \cdot U)$ ▷ update the iterate and retract it onto the manifold
13:  $k \leftarrow k + 1$
14: end while
15: return $O_t = X_k$
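A compact NumPy rendering of Algorithm 1 might look as follows (our sketch; the cost `J` and its Euclidean gradient `grad_J` are assumed to be supplied by the caller, e.g., Eq. (6) with the gradient obtained by automatic differentiation):

```python
import numpy as np

def normalize_cols(X):
    # Retraction helper: rescale each column to unit length (Eq. (11)).
    return X / np.linalg.norm(X, axis=0, keepdims=True)

def oblique_gd(J, grad_J, m, d, alpha=0.3, beta=0.5, tol=1e-6, max_iter=500, seed=0):
    """Riemannian gradient descent on OM(m, d) with backtracking line search."""
    rng = np.random.default_rng(seed)
    X = normalize_cols(rng.standard_normal((m, d)))   # random point on OM
    for _ in range(max_iter):
        G = grad_J(X)                                 # Euclidean gradient
        U = G - X * np.diagonal(X.T @ G)              # tangent projection, Eq. (10)
        if np.linalg.norm(U) <= tol:
            break
        t = 1.0                                       # backtracking (Armijo) search
        while t > 1e-12 and J(normalize_cols(X - t * U)) > J(X) - alpha * t * np.linalg.norm(U) ** 2:
            t *= beta
        X = normalize_cols(X - t * U)                 # retraction R_X(-tU), Eq. (12)
    return X
```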
4.2 THE APPLICATION OF LOW-COHERENCE PROJECTORS IN DNNS
With the low-coherence projectors at hand, we now describe some technical details of applying LcSP in DNNs. When learning task $\mathcal{T}_t$, LcSP first constructs a task-specific projector $P^l_t$ for each layer before training and freezes it during training. These projectors project the features and gradients, ensuring that the DNN learns in the low-coherence subspace. Specifically, suppose a network $f$ with $L$ linear layers is used as the DNN architecture, and let $W^l_t$, $x^l_t$, $z^l_t$, $\sigma^l$, and $P^l_t$ denote the model parameters, input features, output features, activation function, and low-coherence projector at layer $l \in \{1, \dots, L\}$, respectively. LcSP applies $P^l_t$ immediately after $W^l_t$, so that the pre-activation features are projected into the subspace:
$$z^l_t = (x^l_t W^l_t)\, P^l_t, \qquad x^{l+1}_t = \sigma^l(z^l_t). \qquad (13)$$
By the chain rule, the gradient at $W^l_t$ is likewise multiplied by $P^l_t$ during backpropagation:
$$\frac{\partial \mathcal{L}}{\partial (W^l_t)_{(i,:)}} = \frac{\partial \mathcal{L}}{\partial z^l_t} \frac{\partial z^l_t}{\partial (W^l_t)_{(i,:)}} = \frac{\partial \mathcal{L}}{\partial z^L_t} \prod_{k=l}^{L-1} \frac{\partial z^{k+1}_t}{\partial z^k_t} \cdot (x^l_t)_i \cdot P^l_t, \qquad (14)$$
where $(W^l_t)_{(i,:)}$ denotes the $i$-th row of $W^l_t$ and $(x^l_t)_i$ the $i$-th element of $x^l_t$. In Convolutional Neural Networks (CNNs), the inputs and outputs are image features with more than two dimensions, e.g., input channels, output channels, height, and width. In this case, we reshape $z^l \in \mathbb{R}^{c_{\mathrm{out}} \times (c_{\mathrm{in}} \cdot h \cdot w)}$ to $z^l \in \mathbb{R}^{(c_{\mathrm{in}} \cdot h \cdot w) \times c_{\mathrm{out}}}$ and align the projector's dimension with the output channel, so that $P^l_t \in \mathbb{R}^{c_{\mathrm{out}} \times c_{\mathrm{out}}}$. After the projection, we restore the shape of $z^l_t$ so it can serve as input to the next layer. A sketch of this projection step is given below.
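A minimal PyTorch sketch of Eq. (13) (our code; names such as `ProjectedLinear` and `project_conv_features` are ours). Because the projector is registered as a frozen buffer, autograd right-multiplies the weight gradient by $P^l_t$ exactly as in Eq. (14):

```python
import torch
import torch.nn as nn

class ProjectedLinear(nn.Module):
    """Linear layer followed by a frozen, task-specific projector (Eq. (13))."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.register_buffer("P", torch.eye(out_features))  # identity until a task projector is set

    def set_projector(self, P):
        # P = O_t O_t^T, built by Algorithm 1 before training task t, then frozen.
        self.P = P.detach()

    def forward(self, x):
        return self.linear(x) @ self.P

def project_conv_features(z, P):
    # CNN case: align the projector with the output channel, as described above.
    N, C, H, W = z.shape                      # z: pre-activation feature map
    z = z.permute(0, 2, 3, 1).reshape(-1, C) @ P
    return z.reshape(N, H, W, C).permute(0, 3, 1, 2)
```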
Overcoming Catastrophic Forgetting in BN-based Models BN is a widely used module in DNNs that makes training faster and more stable by normalizing layer features through re-centering and re-scaling Ioffe & Szegedy (2015). However, re-centering and re-scaling changes the data distribution (e.g., the mean and variance) of the features of previous tasks, which often leads to catastrophic forgetting in LcSP: when learning a new task $\mathcal{T}_t$, the weights learned on previous tasks may no longer work for those tasks' data because BN has shifted the feature distribution. To solve this problem, we propose two strategies for LcSP: strategy (1), learning a task-specific BN for each task, or strategy (2), replacing BN with GN. We verify the effectiveness of these two strategies in experiments and compare their performance in § 5, and sketch them below.
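Both strategies are simple to realize in PyTorch; the sketch below is ours (helper names are hypothetical, and for GN the group count must divide the channel count):

```python
import torch.nn as nn

def make_task_bns(num_features, num_tasks):
    # Strategy (1): keep one BatchNorm2d per task and index it by the task id
    # at forward time, so each task's statistics and affine parameters are isolated.
    return nn.ModuleList([nn.BatchNorm2d(num_features) for _ in range(num_tasks)])

def bn_to_gn(module, num_groups=32):
    # Strategy (2): recursively replace every BatchNorm2d with a GroupNorm,
    # whose normalization does not depend on cross-task batch statistics.
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)
    return module
```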
4.3 METHOD ANALYSIS
In this section, we analyze the plasticity and stability of LcSP.
Stability Analysis Let $\theta = \{W^l_t\}_{l=1}^L$ denote the parameter set of $f$; let $\Delta\theta = \{\Delta W^1_t, \dots, \Delta W^L_t\}$ denote the changes of the parameters after learning task $\mathcal{T}_t$; let $\mathcal{P}_t = \{P^l_t\}_{l=1}^L$ denote the set of projectors obtained by Algorithm 1; and let $x^l_{q,t}$ and $z^l_{q,t}$ denote the input and output of layer $l$ when the data of task $\mathcal{T}_q$ ($q \leq t$) are fed into the network $f$ after it has been optimized on task $\mathcal{T}_t$. Lemma 1. Assume that $f$ is fed the data of task $\mathcal{T}_q$ ($q < t$); then $f$ effectively overcomes catastrophic forgetting if
$$z^l_{q,q} \approx z^l_{q,t}, \quad \forall q \leq t \qquad (15)$$
holds for every $l \in \{1, 2, \dots, L\}$. Lemma 1 states that $f$ overcomes catastrophic forgetting if its output on previous tasks is invariant. In the following, we prove that LcSP achieves approximate invariance of the output on previous tasks.
Proof. Suppose $q = t - 1$. When $l = 1$, $x^l_{q,t} = x^l_{q,q}$. Then
$$z^l_{q,t} = x^l_{q,t}(W^l_q + \Delta W^l_t)\, P^l_q = x^l_{q,t} W^l_q P^l_q + x^l_{q,t} \Delta W^l_t P^l_q = z^l_{q,q} + x^l_{q,t} \Delta W^l_t P^l_q. \qquad (16)$$
Let $g^l_t$ denote the gradient when training the network on task $\mathcal{T}_t$. In backpropagation, $\Delta W^l_t = g^l_t P^l_t$, so $x^l_{q,t} \Delta W^l_t P^l_q = x^l_{q,t} g^l_t P^l_t P^l_q$. If the inter-task coherence satisfies $\mu(P^l_t, P^l_q) \approx 0$, then $P^l_t P^l_q \approx 0$; projectors satisfying this condition can be found by Algorithm 1. Repeating the argument layer by layer shows that $z^l_{q,q} \approx z^l_{q,t}$ holds for all layers, and the proof generalizes to any previous task $\mathcal{T}_q$.
Plasticity Analysis Let $\tilde{g}^l_t = g^l_t P^l_t$ denote the projected gradient at $W^l_t$. The network $f$ can achieve the optimal loss on task $\mathcal{T}_t$ if $\langle g^l_t, \tilde{g}^l_t \rangle > 0$ holds for each $l \in \{1, \dots, L\}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. We prove that this is the case. Proof. Since $P^l_t = O^l_t (O^l_t)^\top$, we have
$$\langle g^l_t, \tilde{g}^l_t \rangle = g^l_t (\tilde{g}^l_t)^\top = g^l_t O^l_t (O^l_t)^\top (g^l_t)^\top = \langle g^l_t O^l_t, g^l_t O^l_t \rangle = \|g^l_t O^l_t\|^2 > 0. \qquad (17)$$
Note that $\|g^l_t O^l_t\|^2$ is always positive unless $g^l_t O^l_t = 0$. This result extends directly to every layer.
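Equation (17) can be sanity-checked numerically in a few lines (our sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 64, 16
O = rng.standard_normal((m, d))
O /= np.linalg.norm(O, axis=0, keepdims=True)   # point on OM(m, d)
P = O @ O.T                                     # projector P = O O^T
g = rng.standard_normal(m)                      # stand-in for a layer gradient
print(np.dot(g, g @ P))                         # <g, g P> ...
print(np.linalg.norm(g @ O) ** 2)               # ... equals ||g O||^2 > 0
```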
5 EXPERIMENTS
In this section, we evaluate our approach on several popular continual learning benchmarks and compare LcSP with previous state-of-the-art methods. The accuracy and forgetting results demonstrate the effectiveness of LcSP, especially when the number of tasks is large.
5.1 BENCHMARKS
Benchmarks for Learning 20 Tasks We conducted this experiment on four image classification datasets: Permuted MNIST, Rotated MNIST, Split CIFAR100 and Split miniImageNet. Permuted MNIST is constructed by randomly rearranging the pixels of MNIST LeCun (1998) images, using a different permutation for each task. Rotated MNIST is constructed by rotating the MNIST images by a task-specific angle drawn uniformly at random from [0, π]. In this setup, both MNIST variants contain 20 tasks, each with 10,000 samples from 10 classes. Split CIFAR100 is constructed by splitting CIFAR100 into multiple tasks, where each task contains the data of five random classes (without replacement) out of the total 100 classes. Split miniImageNet Vinyals et al. (2016) is a subset of ImageNet; each task contains the data of five random classes (without replacement) out of 100 classes. Both CIFAR100 and miniImageNet contain 20 tasks, each with 250 samples from each of its five classes. A sketch of the task construction for the MNIST variants is given below.
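The two MNIST variants can be generated with a few lines; the sketch below is ours (function names are hypothetical; Permuted MNIST works on flattened 784-dimensional images, Rotated MNIST on 28×28 images via `scipy.ndimage.rotate`):

```python
import numpy as np
from scipy.ndimage import rotate

def make_permuted_task(images, seed):
    # Permuted MNIST: one fixed random pixel permutation per task.
    perm = np.random.default_rng(seed).permutation(images.shape[1])  # images: (N, 784)
    return images[:, perm]

def make_rotated_task(images, seed):
    # Rotated MNIST: one angle per task, drawn uniformly from [0, pi] radians.
    angle = np.degrees(np.random.default_rng(seed).uniform(0.0, np.pi))
    return np.stack([rotate(img, angle, reshape=False) for img in images])  # images: (N, 28, 28)
```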
Benchmarks for Learning 150 Tasks and 64 Tasks This experiment was conducted on two image classification datasets: Permuted MNIST and Permuted CIFAR10. Both Permuted MNIST and Permuted CIFAR10 are obtained by randomly rearranging image pixels. In this experimental setup, Permuted MNIST contains 150 tasks, and Permuted CIFAR10 contains 64 tasks, each containing 10,000 samples from 10 classes.
5.2 BASELINES
We compare the proposed method with state-of-the-art GOP-based methods. As aforementioned, we divide GOP into COP and SOP. For the COP strategy, we compare the proposed method with OWM Zeng et al. (2019), Adam-NSCL Wang et al. (2021) and GPM Saha et al. (2021); for the SOP strategy, with ORTHOG-SUBSPACE Chaudhry et al. (2020). Moreover, we also compare our method with HAT Serra et al. (2018), EWC Kirkpatrick et al. (2017), ER-Ring Chaudhry et al. (2019), AGEM Chaudhry et al. (2018b) and Kernel Continual Learning (KCL) Derakhshani et al. (2021).
5.3 IMPLEMENTATION DETAILS
Learning 20 Tasks For experiments on Permuted MNIST and Rotated MNIST, all methods use a fully connected network with two hidden layers of 256 neurons each and ReLU activations. For experiments on CIFAR and miniImageNet, all methods use the standard ResNet18 architecture except OWM, HAT, and GPM, which use AlexNet Krizhevsky et al. (2012). As described in § 4.2, the proposed LcSP makes two changes to the BN layers in standard ResNet18: (1) learning a task-specific BN for each task, and (2) replacing BN with GN. We also apply these strategies to ORTHOG-SUBSPACE
for additional comparison. For experiments on MNIST, all tasks share the same classifier; for experiments on CIFAR and miniImageNet, each task has a task-specific classifier. In all experiments, LcSP does not use episodic memory to store samples for replay. For all methods, we use Stochastic Gradient Descent (SGD), with the learning rate set to 0.01 on MNIST and 0.003 on CIFAR and ImageNet. Both λ and γ in Eq. (6) are set to 1. All experiments were run five times with five different random seeds, with a batch size of 10.
Learning 150 Tasks and 64 Tasks For experiments on Permuted CIFAR10, LcSP uses the ResNet18 architecture with strategy (1) applied to its BN layers. To compare the performance of the different GOP strategies, we did not use episodic memory for ORTHOG-SUBSPACE. Apart from these changes, the experimental settings are the same as described above.
1. What is the main contribution of the paper regarding continual learning?
2. What are the strengths and weaknesses of the proposed approach compared to other methods, particularly GOP?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the experimental results and analyses?
5. Is there a better alternative to the coherence metric used in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to learn new tasks in low-coherence subspaces rather than orthogonal subspaces to mitigate catastrophic forgetting in continual learning. The authors believe that GOP (though it helps battle catastrophic forgetting) causes learning capacity degradation and that its learning capacity "is gradually degraded as the number of tasks increases and eventually becomes unlearnable".
To learn new tasks in low-coherence subspaces, the authors propose a unified cost function to seek projectors and develop a gradient descent algorithm (on the Oblique manifold) by jointly minimizing inter-task coherence and intra-task coherence. The original mutual coherence (often used in compressive sensing) of a matrix is the maximum absolute value of the cross-correlations between the matrix's columns. The authors extend the concept of coherence to two matrices to capture the maximum absolute value of the cross-correlations for the columns from two matrices. To deal with catastrophic forgetting heightened by batch normalisation (BN), the authors use two strategies: (1) learning specific BN for each task, or (2) using group normalisation (GN) instead of BN.
The authors did experiments on Permuted MNIST, Rotated MNIST, Split CIFAR100 and Split miniImageNet. For 20 tasks (a typical setting), the proposed method achieved the best accuracies on Permuted MNIST and Rotated MNIST, the 3rd best forgetting on Permuted MNIST, and the 2nd best forgetting on Rotated MNIST. The proposed method was reported to achieve the best accuracies and zero forgetting (surprising to see zero forgetting) on Split CIFAR100 and Split miniImageNet.
To show other methods would struggle with increasing tasks, the authors did experiments on 64 tasks and 150 tasks where the proposed method shows a clear advantage on accuracies, but not necessarily on forgetting (the proposed method is reported to achieve best accuracies on both Permuted MNIST and Permuted CIFAR10, and 2nd best forgetting on Permuted MNIST and 3rd best forgetting on Permuted CIFAR10).
Strengths And Weaknesses
Strength:
learning in low-coherence subspaces can be an alternative to GOP.
the experimental results are impressive.
Weakness:
The authors believe that GOP causes learning capacity degradation and its learning capacity "is gradually degraded as the number of tasks increases and eventually becomes unlearnable". This should be more thoroughly analysed and supported by ablation study (curves on 64 tasks and 150 tasks with other methods are not sufficient to serve this purpose).
The proposed method's forgetting performance varies widely across datasets and settings, from zero forgetting to being beaten by one or two competitors (e.g., see Fig. 1 (b) (d)).
The coherence metric can be restrictive. The Babel function, which extends the cross-correlation between pairs of columns to the cross-correlation from one column to a set of other columns, might be a better alternative.
Clarity, Quality, Novelty And Reproducibility
Low-coherence subspace projection can be a good alternative to GOP; it is novel. More thorough analysis could be provided on the severity of GOP's learning capacity degradation issue. The authors' claim that "their learning capacity is gradually degraded as the number of tasks increases and eventually becomes unlearnable" needs solid analysis and evidence. Likewise, analysis and evidence that the proposed method does not suffer from this should be provided.
GOP should be used when the network has sufficient learning capacity, as Chaudhry et al. (2020) pointed out: "We assume that the network is sufficiently parameterized, which often is the case with deep networks, so that all the tasks can be learned in independent subspaces." If learning capacity cannot keep up with the demands of increasing tasks, perhaps we can try to increase capacity on the fly? One way is to grow the network's depth adaptively [1*].
[1*] Online Deep Learning: Learning Deep Neural Networks on the Fly, by Doyen Sahoo, Quang Pham, Jing Lu, Steven C. H. Hoi, IJCAI 2018. |
ICLR | Title
Continual Learning In Low-coherence Subspace: A Strategy To Mitigate Learning Capacity Degradation
Abstract
Methods using gradient orthogonal projection, an efficient strategy in continual learning, have achieved promising success in mitigating catastrophic forgetting. However, these methods often suffer from the learning capacity degradation problem following the increasing number of tasks. To address this problem, we propose to learn new tasks in low-coherence subspaces rather than orthogonal subspaces. Specifically, we construct a unified cost function involving regular DNN parameters and gradient projections on the Oblique manifold. We finally develop a gradient descent algorithm on a smooth manifold to jointly minimize the cost function and minimize both the inter-task and the intra-task coherence. Numerical experimental results show that the proposed method has prominent advantages in maintaining the learning capacity when tasks are increased, especially on a large number of tasks, compared with baselines. Li & Lin (2018)
1 INTRODUCTION
Deep Neural Networks (DNNs) have achieved promising performance on many tasks. However, they lack the ability for continual learning, i.e., they suffer from catastrophic forgetting French (1999) when learning sequential tasks, where catastrophic forgetting is a phenomenon of new knowledge interfering with old knowledge. Research on continual learning, also known as incremental learning Aljundi et al. (2018a); Chaudhry et al. (2018a); Chen & Liu (2018); Aljundi et al. (2017), and sequential learning Aljundi et al. (2018b); McCloskey & Cohen (1989), aims to find effective algorithms that enable DNNs to simultaneously achieve plasticity and stability, i.e., to achieve both high learning capacity and high memory capacity.
Various methods have been proposed to avoid or mitigate the catastrophic forgetting De Lange et al. (2019), either by replaying training samples Rolnick et al. (2019); Ayub & Wagner (2020); Saha et al. (2021), or reducing mutual interference of model parameters, features or model architectures between different tasks Zenke et al. (2017); Mallya & Lazebnik (2018); Wang et al. (2021). Among these methods, Gradient Orthogonal Projection (GOP) Chaudhry et al. (2020); Zeng et al. (2019); Farajtabar et al. (2020); Li et al. (2021) is an efficient continual learning strategy that advocates projecting gradients with the orthogonal projector to prevent the knowledge interference between tasks. GOP-based methods have achieved encouraging results in mitigating catastrophic forgetting. However, from Fig. 1, we observe that these methods suffer from the learning capacity degradation problem: their learning capacity is gradually degraded as the number of tasks increases and eventually becomes unlearnable. Specifically, when learning multiple tasks, e.g., more than 30 tasks in Fig. 1, their performance on new tasks dramatically decreases. These results suggest that the GOP-based methods focus on the stability and somewhat ignore the plasticity. Ignoring the plasticity may limit the task learning capacity of models, i.e., the number of tasks that a model can learn without forgetting.
To address this issue, we propose to learn new tasks in low-coherence subspaces rather than orthogonal subspaces. Specifically, Low-coherence projectors are utilized for each layer to project features and gradients into low-coherence subspaces. To achieve this, we construct a unified cost function to find projectors and develop a gradient descent algorithm on the Oblique manifold to jointly minimize inter-task coherence and intra-task coherence. Minimizing the inter-task coherence can reduce the mutual interference between tasks, and minimizing the intra-task coherence can enhance the model’s expressive power. Restricting projectors on the Oblique manifold can avoid the scale ambiguity
Aharon et al. (2006); Wei et al. (2017), i.e., preventing the parameters of the projector from being extremely large or extremely small.
The main contributions of this work are summarized as follows. First, to address the learning capacity degradation problem of GOP, we propose a novel method, namely, Low-coherence Subspace Projection (LcSP), that replaces the orthogonal projectors with the low-coherence gradient projectors, allowing the DNN to maintain both plasticity and stability. Additionally, our work observes that the GOP models with Batch Normalization (BN) Ioffe & Szegedy (2015) layers could cause catastrophic forgetting. This paper proposes two strategies in LcSP to solve this problem, i.e., replacing BN with Group Normalization (GN) Wu & He (2018) or learning specific BN for each task.
2 RELATED WORK
In this section, we briefly review some existing works of continual learning and the GOP based methods.
Replay-based Strategy The basic idea of this type of approach is to use limited memory to store small amounts of data (e.g., raw samples) from previous tasks, called episodic memory, and to replay them when training a new task. Some of the existing works focused on selecting a subset of raw samples from the previous tasks Rolnick et al. (2019); Isele & Cosgun (2018); Chaudhry et al. (2019); Zhang et al. (2020). In contrast, others concentrated on training a generative model to synthesize new data that can substitute for the old data Shin et al. (2017); Van de Ven & Tolias (2018); Lavda et al. (2018); Ramapuram et al. (2020).
Regularization-based Strategy This strategy prevents catastrophic forgetting by introducing a regularization term in the loss function to penalize the changes in the network parameters. Existing works can be divided into data-focused, and prior-focused methods De Lange et al. (2021). The Data-focused methods take the previous model as the teacher and the current model as the student, transferring the knowledge from the teacher model to the student model through knowledge distillation. Typical methods include LwF Li & Hoiem (2017), LFL Jung et al. (2016), EBLL Rannen et al. (2017), DMC Zhang et al. (2020) and GD-WILD Lee et al. (2019). The prior-focused methods estimate a distribution over the model parameters, assigning an importance score to each parameter and penalizing the changes in significant parameters during learning. Relevant works include SI Zenke et al. (2017), EWC Kirkpatrick et al. (2017), RWalk Chaudhry et al. (2018a), AGS-CL Jung et al. (2020) and IMM Lee et al. (2017).
Parameter Isolation-based Strategy This strategy considers dynamically modifying the network architecture by pruning, parameter mask, or expansion to greatly or even completely reduce catastrophic forgetting. Existing works can be roughly divided into two categories. One is dedicated to isolating separate sub-networks for each task from a large network through pruning techniques and parameter mask, including PackNet Mallya & Lazebnik (2018), PathNet Fernando et al. (2017), HAT Serra et al. (2018) and Piggyback Mallya et al. (2018). Another class of methods dynamically expands the network architecture, increasing the number of neurons or sub-network branches, to break the limits of expressive capacity (Rusu et al., 2016; Aljundi et al., 2017; Xu & Zhu, 2018; Rosenfeld & Tsotsos, 2018). However, as the number of tasks growing, this approach also complicates the network architecture and increases the computation and memory consumption.
Gradient Orthogonal Projection-based Strategy Methods based on GOP strategies, which reduce catastrophic forgetting by projecting gradient or features with orthogonal projectors, have been shown to be effective in continual learning with encouraging results Farajtabar et al. (2020); Zeng et al. (2019); Saha et al. (2021); Wang et al. (2021); Chaudhry et al. (2020). According to the different ways of finding the projector, we can further divide the existing works into Context Orthogonal Projection (COP) and Subspace Orthogonal Projection (SOP). Methods based on COP, such as OWM Zeng et al. (2019), Adam-NSCL Wang et al. (2021), and GPM Saha et al. (2021), always rely on the context of previous tasks to build projectors. In contrast to COP, SOP-based methods such as ORTHOG-SUBSPACE Chaudhry et al. (2020) use hand-crafted, task-specific orthogonal projectors and yield competitive results.
This paper proposes a novel approach to continual learning called LcSP. Compared with other GOP-based methods, LcSP trains the network in low-coherence subspaces to balance plasticity and stability, and overcomes the learning capacity degradation problem, which significantly decreases the performance of GOP methods as the number of tasks increases.
3 CONTINUAL LEARNING SETUP
This work adopts the Task-Incremental Learning (TIL) setting, where multiple tasks are learned sequentially. Let us assume that there are T tasks, denoted by T_t for t = 1, . . . , T, each with its training data D_t = {(x_i, y_i, τ_t)}_{i=1}^{N_t}. Here, the data (x_i, y_i) ∈ X × Y_t are assumed to be drawn independently from an identical distribution, and τ_t ∈ T denotes the task identifier. In the TIL setting, the data D_t can be accessed if and only if task T_t arrives. When episodic memory is adopted, a limited number of data samples drawn from old tasks can be stored in the replay buffer M so that D_t ∪ M can be used for training when task T_t arrives. Assume that a network f parameterized with Φ = {θ, φ} consists of two parts, where θ denotes the parameters of the backbone network and φ denotes the parameters of the classifier. Let f(x; θ) : X × T → H denote the backbone network parameterized with θ = {W_l}_{l=1}^{L}, which encodes a data sample x into a feature vector. Let f(h; φ) : H → Y denote the classifier parameterized with φ = w, which returns the classification result for the feature vector h obtained by f(x; θ). The goal of TIL is to learn the T tasks sequentially with the network f and finally achieve the optimal loss on all tasks.
Evaluation Metrics Once the training on all tasks is finished, we evaluate the performance of an algorithm by calculating the average accuracy A and the forgetting F Chaudhry et al. (2020) of the network on the T tasks {T_1, ..., T_T}. Suppose all tasks arrive sequentially, and let Acc_{i,j} denote the test accuracy of the network on task T_i after learning task T_j, where i ≤ j. The average accuracy is defined as
$$A = \frac{1}{T} \sum_{i=1}^{T} \mathrm{Acc}_{i,T}, \quad (1)$$
and the forgetting is defined as
$$F = \frac{1}{T-1} \sum_{i=1}^{T-1} \max_{j \in \{i, \ldots, T-1\}} \left( \mathrm{Acc}_{i,j} - \mathrm{Acc}_{i,T} \right). \quad (2)$$
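As a concrete illustration of Eqs. (1) and (2), the following NumPy sketch computes both metrics from an accuracy matrix; the function name and the toy matrix are our illustrative choices, not part of the original evaluation code.

```python
import numpy as np

def average_accuracy_and_forgetting(acc):
    """Compute the average accuracy A (Eq. (1)) and forgetting F (Eq. (2)).
    acc is a T x T matrix where acc[i, j] is the test accuracy on task i+1
    after learning task j+1; only the entries with i <= j are meaningful."""
    T = acc.shape[0]
    A = np.mean(acc[:, T - 1])  # Eq. (1): accuracy on every task after task T
    F = np.mean([np.max(acc[i, i:T - 1]) - acc[i, T - 1]
                 for i in range(T - 1)])  # Eq. (2): maximal drop per old task
    return A, F

# Toy example: 5 tasks, accuracies filled in the upper triangle.
acc = np.triu(np.random.rand(5, 5))
A, F = average_accuracy_and_forgetting(acc)
```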
4 CONTINUAL LEARNING IN LOW-COHERENCE SUBSPACES
In this section, we first introduce how to find task-specific, low-coherence projectors for LcSP on the Oblique manifold. We then describe how to use them in a specific DNN architecture to project features and gradients. Finally, we analyze the factors that enable LcSP to maintain plasticity and stability.
4.1 CONSTRUCTING LOW-COHERENCE PROJECTORS ON OBLIQUE MANIFOLD
Here, we first introduce the concept of the coherence metric. The coherence metric is usually used in compressed sensing and sparse signal recovery to describe the correlation between the columns of a measurement matrix Candes et al. (2011); Candes & Romberg (2007). Formally, the coherence of a matrix M is defined as
$$\mu(M, N) = \begin{cases} \max_{j<k} \dfrac{|\langle M_j, M_k \rangle|}{\|M_j\|_2 \|M_k\|_2}, & M = N \\[2mm] \max_{i,j} \dfrac{|\langle M_i, N_j \rangle|}{\|M_i\|_2 \|N_j\|_2}, & M \neq N \end{cases} \quad (3)$$
where M_j and M_k denote the column vectors of matrix M. Without causing confusion, we use µ(M) to denote µ(M, M). To measure the coherence between different projectors, we introduce the Babel function Li & Lin (2018), which measures the maximum total coherence between a fixed atom and a collection of other atoms in a dictionary, and can be described as follows:
$$B(M) = \max_{\Lambda, |\Lambda| = M} \ \max_{i \notin \Lambda} \ \sum_{j \in \Lambda} \frac{|\langle M_i, M_j \rangle|}{\|M_i\| \, \|M_j\|} \quad (4)$$
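For concreteness, the NumPy sketch below computes the coherence of Eq. (3) and the Babel function of Eq. (4). It uses the standard greedy equivalence for the Babel function (for a fixed column, the maximizing set Λ consists of its most coherent other columns); the function names are ours.

```python
import numpy as np

def coherence(M, N=None):
    """Coherence metric of Eq. (3): the largest absolute cosine similarity
    between distinct columns of M (if N is None), or between columns of M and N."""
    A = M / np.linalg.norm(M, axis=0, keepdims=True)
    if N is None:
        G = np.abs(A.T @ A)
        np.fill_diagonal(G, 0.0)  # exclude j = k in the M = N case
        return G.max()
    B = N / np.linalg.norm(N, axis=0, keepdims=True)
    return np.abs(A.T @ B).max()

def babel(M, size):
    """Babel function of Eq. (4): for each column, sum the coherences to its
    `size` most coherent other columns; return the maximum over columns."""
    A = M / np.linalg.norm(M, axis=0, keepdims=True)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    top = -np.sort(-G, axis=1)[:, :size]  # `size` largest coherences per column
    return top.sum(axis=1).max()
```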
With the concept of the coherence metric in mind, we then introduce the main optimization objective for finding projectors. Specifically, suppose that the DNN has learned tasks T_1, T_2, ..., T_{t-1} in the subspaces S_1, S_2, ..., S_{t-1}, respectively, and P_1, P_2, ..., P_{t-1} denote the projectors of all previous tasks. When learning task T_t, we project features and gradients into a d_t-dimensional low-coherence subspace S_t with projector P_t so that LcSP can prevent catastrophic forgetting. The projector P_t can be found by optimizing
$$\arg\min B(P_t), \quad \text{s.t. } P_t \in \mathbb{R}^{m \times m}, \ \mathrm{rank}(P_t) = d_t. \quad (5)$$
Two considerations need to be taken into account in solving Eq. (5): satisfying the rank constraint and avoiding entries of P_t that are extremely large or extremely small. With these considerations in mind, we can rephrase the rank- and scale-constrained problem as a problem on a Riemannian manifold, more specifically on the Oblique manifold OM(m, d_t), by setting P_t = O_t O_t^⊤ and normalizing the columns of O_t, i.e., diag(O_t^⊤ O_t) = I_{d_t}, where diag(·) extracts the diagonal matrix and I_{d_t} is the d_t × d_t identity matrix. With these settings, the new cost function J(·) and optimization problem can be described as follows:
$$J(O_t) = \begin{cases} \lambda \cdot B(O_t O_t^{\top}) + \gamma \cdot \mu(O_t O_t^{\top}), & t > 1 \\ \mu(O_t O_t^{\top}), & t = 1 \end{cases}$$
$$O_t = \arg\min J(O_t), \quad \text{s.t. } O_t \in OM(m, d_t). \quad (6)$$
In the cost function J(O_t), we divide the optimization objective into an inter-task term B(O_t O_t^⊤) and an intra-task term µ(O_t O_t^⊤), and utilize the parameters λ and γ to provide a trade-off between them. Here, the intra-task coherence is optimized to maintain the full rank of O_t. Meeting the full-rank constraint helps to balance plasticity and stability as the number of tasks increases. Relevant ablation studies and numerical analyses are given in the appendix.
Optimization on the Oblique manifold, i.e., optimization whose solution lies on the Oblique manifold, is a well-established area of research Absil et al. (2009); Absil & Gallivan (2006); Selvan et al. (2012). Here, we briefly summarize the main steps of the optimization process. Formally, the Oblique manifold OM(n, p) is defined as
$$OM(n, p) \triangleq \{X \in \mathbb{R}^{n \times p} : \mathrm{diag}(X^{\top}X) = I_p\}, \quad (7)$$
representing the set of all n × p matrices with normalized columns. OM can also be considered as an embedded Riemannian manifold of R^{n×p}, endowed with the canonical inner product
$$\langle X_1, X_2 \rangle = \mathrm{trace}\left( X_1^{\top} X_2 \right), \quad (8)$$
where trace(·) represents the sum of the diagonal elements of the given matrix. For a given point X on OM, the tangent space at X, denoted by T_X OM, is defined as
$$T_X OM(n, p) = \{U \in \mathbb{R}^{n \times p} : \mathrm{diag}(X^{\top}U) = 0\}. \quad (9)$$
Further, the tangent space projector P_X at X, which projects H ∈ R^{n×p} into T_X OM, is represented as
$$P_X(H) = H - X \,\mathrm{ddiag}\left( X^{\top}H \right), \quad (10)$$
where ddiag sets all off-diagonal entries of a matrix to zero. When optimizing on OM, the k-th iterate X_k must move along a descent curve on OM for the cost function, such that the next iterate X_{k+1} remains on the manifold. This is achieved by the retraction
$$R_{X_k}(U) = \mathrm{normalize}(X_k + U), \quad (11)$$
where normalize scales each column of the input matrix to have unit length. Finally, with this knowledge, we can extend the gradient descent algorithm to solve any unconstrained optimization problem on OM, which can be summarized as
$$U = P_{X_k}(\nabla_{X_k} J), \qquad X_{k+1} = R_{X_k}(-\alpha U), \quad (12)$$
where ∇_{X_k} J denotes the Euclidean gradient at the k-th iterate and α is the step size. Finally, our algorithm for finding O_t on OM for task T_t is summarized in Algorithm 1.
Algorithm 1 Construct O_t on OM for Task T_t
Input: O_1, . . . , O_{t-1}
Output: O_t
 1: R_X(U) := normalize(X + U)  ▷ normalize scales each column of the input matrix to have norm 1
 2: X_0 ← random initialization on OM
 3: k ← 0
 4: Set tolerance error 0 ≤ E ≪ 1
 5: while True do
 6:     G ← ∇J(X_k)  ▷ Calculate the Euclidean gradient
 7:     U ← G − X_k ddiag(X_k^⊤ G)  ▷ Calculate the Riemannian gradient
 8:     if ‖U‖ ≤ E then
 9:         break
10:     end if
11:     α ∈ (0, 0.5), β ∈ (0, 1)
12:     t ← 1  ▷ Initial step size
13:     while J(R_{X_k}(−t · U)) > J(X_k) − α · t · ‖U‖₂²  do  ▷ Search the step size for the next iterate
14:         t ← β · t
15:     end while
16:     X_{k+1} ← R_{X_k}(−t · U)  ▷ Update the next iterate X_{k+1} and fix it on the manifold
17:     k ← k + 1
18: end while
19: O_t ← X_k
20: return O_t
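To make Algorithm 1 concrete, here is a minimal NumPy sketch of the Riemannian gradient descent with backtracking line search for the t = 1 case (no previous projectors). Since the max-based coherence of Eq. (3) is not differentiable everywhere, the sketch substitutes a smooth sum-of-squares surrogate for the intra-task coherence; the cost, its gradient, and all names are our illustrative choices.

```python
import numpy as np

def normalize_columns(X):
    """Retraction helper: scale each column of X to unit norm (Eq. (11))."""
    return X / np.linalg.norm(X, axis=0, keepdims=True)

def cost(O):
    """Smooth surrogate for intra-task coherence: sum of squared
    off-diagonal Gram entries (stands in for the max in Eq. (3))."""
    G = O.T @ O
    return np.sum((G - np.diag(np.diag(G))) ** 2)

def grad(O):
    """Euclidean gradient of the surrogate cost."""
    G = O.T @ O
    return 4.0 * O @ (G - np.diag(np.diag(G)))

def construct_O(m, d, E=1e-6, alpha=0.3, beta=0.5, max_iter=500, seed=0):
    """Algorithm 1 for t = 1: Riemannian gradient descent on OM(m, d)."""
    rng = np.random.default_rng(seed)
    X = normalize_columns(rng.standard_normal((m, d)))
    for _ in range(max_iter):
        G = grad(X)                           # line 6: Euclidean gradient
        U = G - X * np.diag(X.T @ G)          # line 7: Riemannian gradient
        if np.linalg.norm(U) <= E:            # line 8: convergence check
            break
        t, fX, u2 = 1.0, cost(X), np.linalg.norm(U) ** 2
        while cost(normalize_columns(X - t * U)) > fX - alpha * t * u2:
            t *= beta                         # lines 13-15: backtracking search
            if t < 1e-12:
                break
        X = normalize_columns(X - t * U)      # line 16: retraction step
    return X

O_1 = construct_O(m=64, d=16)
P_1 = O_1 @ O_1.T  # the resulting task-specific low-coherence projector
```

For t > 1, one would add a differentiable surrogate of the inter-task term B(O_t O_t^⊤), computed against the stored O_1, ..., O_{t-1}, to the same cost.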
4.2 THE APPLICATION OF LOW-COHERENCE PROJECTORS IN DNNS
With LcSP at hand, we now introduce some technical details of applying LcSP in DNNs. When learning task T_t, LcSP first constructs a task-specific projector P_t^l for each layer before training, and freezes these projectors during training. The projectors are used to project the features and gradients, ensuring that the DNN learns in the low-coherence subspace. Specifically, suppose that a network f with L linear layers is used as the DNN architecture, and let W_t^l, x_t^l, z_t^l, σ^l, and P_t^l denote the model parameters, the input features, the output features, the activation function, and the introduced low-coherence projector in layer l ∈ {1, ..., L}, respectively. LcSP introduces P_t^l immediately after W_t^l such that the pre-activation features are projected into the subspace, i.e.,
$$z_t^l = (x_t^l W_t^l) P_t^l, \qquad x_t^{l+1} = \sigma^l(z_t^l). \quad (13)$$
According to the chain rule, the gradient at W_t^l is also multiplied by P_t^l in backpropagation, as follows:
$$\frac{\partial \mathcal{L}}{\partial (W_t^l)_{(i,:)}} = \frac{\partial \mathcal{L}}{\partial z_t^l} \frac{\partial z_t^l}{\partial (W_t^l)_{(i,:)}} = \frac{\partial \mathcal{L}}{\partial z_t^L} \prod_{k=l}^{L-1} \frac{\partial z_t^{k+1}}{\partial z_t^k} \cdot (x_t^l)_i \cdot P_t^l, \quad (14)$$
where (W_t^l)_{(i,:)} represents the i-th row of W_t^l and (x_t^l)_i is the i-th element of x_t^l. In Convolutional Neural Networks (CNNs), the input and the output typically represent image features and have more than two dimensions, e.g., input channel, output channel, height, and width. In this case, we reshape z^l ∈ R^{c_out × (c_in·h·w)} to z^l ∈ R^{(c_in·h·w) × c_out} and align the dimension of the projector with the output channel so that P_t^l ∈ R^{c_out × c_out}. After the projection, we recover the shape of z_t^l so that it can be used as input for the next layer.
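As an illustration, the following PyTorch sketch implements the projected linear layer of Eq. (13). The frozen projector is registered as a buffer, so autograd automatically multiplies the weight gradient by P_t^l during backpropagation, as in Eq. (14); the QR-based stand-in for Algorithm 1's output and the class name are our assumptions.

```python
import torch
import torch.nn as nn

class ProjectedLinear(nn.Module):
    """Linear layer followed by a frozen, task-specific projector (Eq. (13))."""
    def __init__(self, in_features, out_features, projector):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Registered as a buffer: moved with the model, but never updated.
        self.register_buffer("P", projector)  # (out_features, out_features)

    def forward(self, x):
        z = self.linear(x) @ self.P  # project the pre-activation features
        return torch.relu(z)

# Stand-in for O_t from Algorithm 1: orthonormal (hence unit-norm) columns.
Q, _ = torch.linalg.qr(torch.randn(256, 64))
P_t = Q @ Q.T
layer = ProjectedLinear(128, 256, P_t)
out = layer(torch.randn(10, 128))  # (10, 256), confined to the task subspace
```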
Overcoming Catastrophic Forgetting in BN-based Models BN is a widely used module that makes the training of DNNs faster and more stable through normalization of the layers’ features by re-centering and re-scaling Ioffe & Szegedy (2015). However, re-centering and re-scaling of the layers’ features changes the data distribution (e.g., the mean and the variance) of the features of previous tasks, which often leads to catastrophic forgetting in LcSP. For example, when learning a new task T_t, the backbone parameters learned for previous tasks may no longer work for those tasks due to the change in feature distribution caused by BN. To solve this problem, we propose two strategies in LcSP: strategy (1), learning a specific BN for each task, or strategy (2), using Group Normalization (GN) instead of BN. We verify the effectiveness of these two strategies in experiments and compare their performance in §5.
4.3 METHOD ANALYSIS
In this section, we provide an analysis of the plasticity and stability of LcSP.
Stability Analysis Let θ = {W_t^l}_{l=1}^{L} denote the parameter set of f; let Δθ = {ΔW_t^1, ..., ΔW_t^L} denote the set of parameter variations after learning task T_t; let P_t = {P_t^l}_{l=1}^{L} denote the set of projectors obtained by Algorithm 1; and let x_{q,t}^l and z_{q,t}^l denote the input and output of layer l when feeding the data of task T_q (q ≤ t) into the network f after it has been optimized on task T_t.
Lemma 1. Assume that f is fed the data of task T_q (q ≤ t); then f can effectively overcome catastrophic forgetting if
$$z_{q,q}^l \approx z_{q,t}^l, \quad \forall q \leq t \quad (15)$$
holds for all l ∈ {1, 2, ..., L}.
Lemma 1 suggests that f can overcome catastrophic forgetting if the output of f on previous tasks is invariant. In the following, we prove that LcSP achieves approximate invariance of the output on previous tasks.
Proof. Suppose q = t − 1. When l = 1, x_{q,t}^l = x_{q,q}^l. Then
$$z_{q,t}^l = x_{q,t}^l (W_q^l + \Delta W_t^l) P_q^l = x_{q,t}^l W_q^l P_q^l + x_{q,t}^l \Delta W_t^l P_q^l = z_{q,q}^l + x_{q,t}^l \Delta W_t^l P_q^l. \quad (16)$$
Let g_t^l denote the gradient when training the network on task T_t. In backpropagation, ΔW_t^l = g_t^l P_t^l. Then x_{q,t}^l ΔW_t^l P_q^l = x_{q,t}^l g_t^l P_t^l P_q^l. If the inter-task coherence µ(P_t^l, P_q^l) ≈ 0, then P_t^l P_q^l ≈ 0. Projectors satisfying this condition can be found by Algorithm 1. We can prove that z_{q,q}^l ≈ z_{q,t}^l holds for all layers by repeating the above process. This proof also generalizes to any previous task T_q.
Plasticity Analysis Let g̃_t^l = g_t^l P_t^l denote the projected gradient at W_t^l. f can achieve the optimal loss on task T_t if ⟨g_t^l, g̃_t^l⟩ > 0 holds for each l ∈ {1, ..., L}, where ⟨·, ·⟩ represents the inner product. Here, we prove that ⟨g_t^l, g̃_t^l⟩ > 0 holds for each l ∈ {1, ..., L}.
Proof. With g̃_t^l = g_t^l P_t^l, we have
$$\langle g_t^l, \tilde{g}_t^l \rangle = g_t^l (\tilde{g}_t^l)^{\top} = g_t^l O_t^l (O_t^l)^{\top} (g_t^l)^{\top} = \langle g_t^l O_t^l, g_t^l O_t^l \rangle = \| g_t^l O_t^l \|^2 > 0. \quad (17)$$
Note that ‖g_t^l O_t^l‖² is always positive unless g_t^l O_t^l = 0. This result easily generalizes to each layer.
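A quick numerical sanity check of Eq. (17), using an orthonormal stand-in for O_t^l (any O with unit-norm columns works; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
O, _ = np.linalg.qr(rng.standard_normal((64, 16)))   # unit-norm columns
P = O @ O.T                                           # projector P = O O^T
g = rng.standard_normal(64)                           # a (row) gradient
inner = g @ (P @ g)                                   # <g, g P>
assert np.isclose(inner, np.linalg.norm(g @ O) ** 2)  # equals ||g O||^2, Eq. (17)
assert inner >= 0                                     # positive unless g O = 0
```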
5 EXPERIMENTS
In this section, we evaluate our approach on several popular continual learning benchmarks and compare LcSP with previous state-of-the-art methods. The accuracy and forgetting results demonstrate the effectiveness of LcSP, especially when the number of tasks is large.
5.1 BENCHMARKS
Benchmarks for Learning 20 Tasks We conducted this experiment on four image classification datasets: Permuted MNIST, Rotated MNIST, Split CIFAR100 and Split miniImageNet. Permuted MNIST is constructed by randomly rearranging MNIST LeCun (1998) image pixels, using different seeds for different tasks. Rotated MNIST is constructed by rotating the MNIST images by a certain angle. The rotation angle is a random value in [0, π], selected independently for each task. In this experimental setup, both of the above MNIST datasets contain 20 tasks, each task containing 10,000 samples from 10 classes. Split CIFAR100 is constructed by splitting CIFAR100 into multiple tasks, where each task contains the data of five random classes (without replacement) out of the total 100 classes. Split miniImageNet Vinyals et al. (2016) is a subset of ImageNet. In Split miniImageNet, each task contains the data of five random classes (without replacement) out of 100 classes. Both Split CIFAR100 and Split miniImageNet contain 20 tasks, each with 250 samples from each of the five classes.
Benchmarks for Learning 150 Tasks and 64 Tasks This experiment was conducted on two image classification datasets: Permuted MNIST and Permuted CIFAR10. Both Permuted MNIST and Permuted CIFAR10 are obtained by randomly rearranging image pixels. In this experimental setup, Permuted MNIST contains 150 tasks, and Permuted CIFAR10 contains 64 tasks, each containing 10,000 samples from 10 classes.
5.2 BASELINES
We compare the proposed method with state-of-the-art methods based on GOP. As aforementioned, we divide GOP into COP and SOP. For methods based on the COP strategy, we compare the proposed method with OWM Zeng et al. (2019), Adam-NSCL Wang et al. (2021) and GPM Saha et al. (2021). For methods based on the SOP strategy, we compare the proposed method with ORTHOG-SUBSPACE Chaudhry et al. (2020). Moreover, we also compare our method with HAT Serra et al. (2018), EWC Kirkpatrick et al. (2017), ER-Ring Chaudhry et al. (2019), AGEM Chaudhry et al. (2018b) and Kernel Continual Learning (KCL) Derakhshani et al. (2021).
5.3 IMPLEMENTATION DETAILS
Learning 20 Tasks For experiments on Permuted MNIST and Rotated MNIST, all methods use a fully connected network with two hidden layers, each with 256 neurons, using ReLU activations. For experiments on CIFAR and miniImageNet, all methods use the standard ResNet18 architecture, except OWM, HAT, and GPM, which use AlexNet Krizhevsky et al. (2012). As described in § 4.2, the proposed LcSP makes two changes to the BN layers in standard ResNet18: (1) learning a specific BN for each task and (2) replacing BN with GN. We also apply these strategies to ORTHOG-SUBSPACE
for additional comparison. For experiments on MNIST, all tasks share the same classifier. For experiments on CIFAR and miniImageNet, each task requires a task-specific classifier. For all experiments, LcSP does not use episodic memory to store data samples for replay. For all methods, we uniformly use Stochastic Gradient Descent (SGD). The learning rate is set to 0.01 for experiments on MNIST and 0.003 for experiments on CIFAR and miniImageNet. Both λ and γ in Eq. (6) are set to 1. All experiments were run five times with five different random seeds, with a batch size of 10.
Learning 150 Tasks and 64 Tasks For experiments on Permuted CIFAR10, LcSP uses the ResNet18 architecture and applies strategy (1) to modify the BN layers in ResNet18. To compare the performance of different GOP strategies, we did not use episodic memory for ORTHOG-SUBSPACE. Except for these changes, the other experimental settings are the same as described above.
5.4 EXPERIMENTAL RESULTS
Comparisons of Learning 20 Tasks Table 1 compares the average accuracy and forgetting results of the proposed LcSP and its variants (LcSP-BN and LcSP-GN) with the baselines on the four continual learning benchmarks. Here, LcSP-BN and LcSP-GN adopt strategies (1) and (2) described in § 4.2, respectively. First, as shown in Table 1, the proposed methods outperform all baselines on MNIST and miniImageNet. On Permuted MNIST and Rotated MNIST, the average accuracy of LcSP surpasses the baselines by 5.6% ∼ 23.8% and 4.8% ∼ 43%, respectively. On miniImageNet, the average accuracy of LcSP-BN surpasses the baselines by 4.63% ∼ 44.9%. On CIFAR100, the proposed LcSP-BN achieves competitive performance with the second highest average accuracy, 3.25% lower than Adam-NSCL, and a forgetting rate of 0. The average accuracy of LcSP-GN also outperforms most baselines, being lower than Adam-NSCL, HAT and GPM on CIFAR100 but higher than all compared methods on miniImageNet. These results suggest that minimizing the inter-task and intra-task coherence with low-coherence projectors is an effective strategy for alleviating catastrophic forgetting. Second, the results on CIFAR100 and miniImageNet also show that BN in ORTHOG-SUBSPACE Chaudhry et al. (2020) and LcSP may change previous tasks’ data distribution and lead to catastrophic forgetting. Both strategies (1) and (2) described in § 4.2 can effectively solve this problem.
Comparisons of Learning 150 Tasks and 64 Tasks To demonstrate the advantage of the proposed methods in learning a long sequence of tasks, the following experiments compare the results with 64 tasks and 150 tasks. Note that, in Fig. 1, LcSP (orthogonal) and LcSP-BN (orthogonal) use orthogonal projectors as a comparison, while LcSP (low-coherence) and LcSP-BN (low-coherence) use low-coherence projectors. Figs. 1(a) and 1(b) report the average accuracy and forgetting of the last 10 tasks when learning 150 tasks on Permuted MNIST. Figs. 1(c) and 1(d) report the average accuracy and forgetting of the last 5 tasks when learning 64 tasks on Permuted CIFAR10. The average accuracy of all methods, except the proposed LcSP-BN (low-coherence), dramatically degrades or is consistently low as the number of tasks increases. Furthermore, it can be seen from Fig. 1(d) that all methods except ORTHOG-SUBSPACE show almost no forgetting. This result indicates that methods using orthogonal projectors gradually lose their learning capacity as the number of tasks increases. The proposed method uses the low-coherence projector to relax the orthogonality restriction, effectively solving this problem. Fig. 2 gives an ablation study and shows the performance of our method with different λ and γ. When λ equals γ, the average accuracy on Permuted MNIST is highest. Results are worst when either λ or γ equals zero. These results indicate that both inter-task and intra-task coherence should be minimized to resolve the plasticity–stability dilemma.
6 CONCLUSION
This paper proposed a novel gradient projection approach for continual learning to address the learning capacity degradation problem of gradient orthogonal projection. Instead of learning in orthogonal subspaces, we propose projecting features and gradients via low-coherence projectors that minimize inter-task and intra-task coherence. Additionally, two strategies have been proposed to mitigate the catastrophic forgetting caused by the BN layer, i.e., replacing BN with GN or learning a specific BN for each task. Extensive experiments show that our approach works well in alleviating forgetting and has a significant advantage in maintaining learning capacity, especially when learning long task sequences.
A APPENDIX
In this section, we give the implementation details of the experiments on Permuted MNIST and Permuted CIFAR10 to help readers reproduce them. Additionally, more ablation studies and experimental results are provided to further support the conclusions and contributions.
A.1 IMPLEMENTATION DETAILS
The main hyperparameter settings are listed in Tables 2 and 3. For the baselines, we adopt the default settings provided in their code to bring out their proper performance. For a fair comparison, we use a uniform batch size for all methods.
A.2 ABLATION STUDIES AND ADDITIONAL RESULTS
Additional results on Permuted MNIST and Permuted CIFAR10 Readers may wonder whether our conclusion holds if we evaluate the average performance over more tasks (e.g., the average accuracy and forgetting over the last 20 tasks). As shown in Fig. 3, LcSP still outperforms all baselines by a significant margin. However, the phenomenon of learning capacity degradation in the baselines becomes more imperceptible, e.g., the average accuracy of OWM on Permuted CIFAR10 is consistently low, rather than significantly decreasing. To further investigate the learning capacity degradation problem, we give the accuracy of the baselines on the last task of Permuted MNIST and Permuted CIFAR10. As shown in Fig. 7, all baselines, except LcSP, suffer from this problem to different degrees and show some decrease in accuracy compared to the initial value (66.16% ∼ 24.63% on Permuted MNIST and 24.8% ∼ 3.48% on Permuted CIFAR10). These results suggest that this problem is the critical factor that degrades the performance of GOP-based methods when the number of tasks is large.
Ablation studies and experiments for rank and scale constraints Further ablation studies and experiments are conducted to investigate the effects of the rank and scale constraints on the expressive power (plasticity) and stability of DNNs.
The result in Fig. 5(a) suggests that projecting features or gradients into subspaces with low dimensions (e.g., lower than 5 in Fig. 5(a)) decreases the expressive power of the DNN. GOP methods and LcSP rely on projectors to project the features or gradients (or both) into a d-dimensional subspace, which can also be considered a form of dimension reduction. Dimension reduction is motivated by a consensus in the high-dimensional data analysis community, i.e., the data can be summarized in a low-dimensional space embedded in a high-dimensional space, such as a nonlinear manifold Levina & Bickel (2004). The dimension of this low-dimensional space is also known as the intrinsic dimension D Carreira-Perpinán (1997). If d is too small, e.g., d ≪ D, important data features will be “collapsed” onto the same dimension. From the perspective of training a DNN, if the gradients are projected into a subspace with a dimension lower than D, the DNN cannot activate sufficient parameters to learn the representation of the task.
Due to the unknown number of tasks to be learned, the limited dimensionality of the features, and the strict orthogonality constraint between projectors, the orthogonal projector cannot be constrained to a fixed-rank manifold. As shown in Fig. 5(b), the rank of the orthogonal projector decreases as the number of tasks increases. Therefore, methods using orthogonal projectors usually ignore the intrinsic dimension of the data (features or gradients) and ultimately suffer from the learning capacity degradation problem. In contrast to GOP methods, LcSP relaxes the orthogonality constraint
and meets the rank constraint by optimizing intra-task coherence on the Oblique manifold and thus does not suffer from this problem.
Finally, Fig. 6 gives an ablation study of the scale constraint on the projector’s columns. In Fig. 6, when the columns of the projector have unit length, the average accuracy is highest. Results get worse when the length of the projector’s columns is too small or too large.
Using an RTX 3060 (12 GB), we tested the efficiency of LcSP and the compared methods on Permuted CIFAR10 (which contains 64 tasks, each image of size 3 × 32 × 32), using AlexNet for OWM and GPM, and ResNet18 for the other methods. We provide four metrics to assess the efficiency of each method: floating point operations (FLOPs), time spent per epoch during training (Time (s)), the number of epochs needed to train to convergence (Epochs), and the average time spent on inference (Mean inference time (ms)). The detailed results are listed in Table 4.
A.3 METHOD ANALYSIS
The mechanism of low-coherence learning and the difference from orthogonal learning Recall Lemma 1 described in Section 4.3.
Lemma 1. Assume that f is fed the data of task T_q (q ≤ t); then f can effectively overcome catastrophic forgetting if
$$z_{q,q}^l \approx z_{q,t}^l, \quad \forall q \leq t$$
holds. Here, z_{q,t}^l denotes the output feature of the l-th layer when data from task T_q are fed into the DNN that has been trained on task T_t. Let us consider the case where there are only two tasks, i.e., q = t − 1. By applying LcSP's projection strategy, we have
$$z_{q,t}^l = z_{q,q}^l + x_{q,t}^l \Delta W_t^l P_q^l = z_{q,q}^l + x_{q,t}^l g_t^l P_t^l P_q^l.$$
Here, g_t^l, x_{q,t}^l and P_t^l denote the gradient, input data, and projection matrix (which is symmetric) of task T_t in the l-th layer, respectively. The term x_{q,t}^l g_t^l P_t^l P_q^l can be thought of as the forgetting, which changes the output of the DNN on previous data. Since we cannot change x_{q,t}^l and g_t^l, the key to reducing x_{q,t}^l g_t^l P_t^l P_q^l is to minimize P_t^l P_q^l. If P_t^l is orthogonal to P_q^l, the forgetting equals 0. Let us consider reducing forgetting when P_t^l cannot be orthogonal to P_q^l (e.g., when the number of tasks is large and the dimensionality of the space is limited). If we optimized P_t^l P_q^l directly, we might get P_t^l = 0, or the column scale of P_t^l might be too small. To find a useful P_t^l, LcSP constrains the length of each column vector to be 1, i.e., it finds P_t^l on the Oblique manifold (OM). To reduce forgetting, we optimize the inter-task coherence µ(P_t^l, P_q^l) on OM so that the maximum entry of P_t^l P_q^l is minimized.
In the worst case, we might obtain a P_t^l consisting of essentially one column direction (i.e., a P_t^l with rank 1). To avoid this situation, we optimize the intra-task coherence µ(P_t^l) along with the inter-task coherence, forcing the projector to satisfy the rank constraint in order to maintain the learning capacity of the DNN (as shown in Fig. 5(a), the rank of the projector affects the plasticity of the DNN).
In general, low-coherence projection, which can be seen as a relaxed orthogonality constraint, aims to better balance plasticity and stability, motivated by our observation that orthogonal projections reduce the plasticity of the model and make the DNN unable to adapt to new environments. | 1. What is the focus of the paper regarding continual learning, and what are the proposed solutions?
2. What are the strengths of the approach, particularly in tackling challenging tasks?
3. What are the weaknesses of the method, such as unclear formulations or determinations?
4. How does the reviewer assess the effectiveness of the proposed approach from the experimental results?
5. What are some limitations in comparing the proposed method with other baseline methods?
6. How would you rate the writing clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
This paper proposed a continual learning method that constructs a low-coherence subspace projector for each new task. Given a new task, the projector matrix is constructed by minimizing the coherence with respect to the previous tasks' projectors, and within the current projector being optimized, on the oblique manifold. The proposed approach is evaluated on task-incremental learning, especially when the number of tasks is large (e.g., 64, 150). The results show improvements over the baseline methods, including EWC, OWM, GPM, etc.
Strengths And Weaknesses
Strengths:
(1) The proposed approach tackles the challenging continual learning problem, and proposes a gradient projection method based on low coherence instead of orthogonality.
(2) The experiments on Permuted MNIST, Rotated MNIST, Split CIFAR100 and Split miniImageNet show improved results compared with some baseline methods, e.g., EWC, GPM, OWM, etc.
Weaknesses:
(1) The proposed method is based on constructing the projection matrix by minimizing its coherence with respect to the projection matrices of previous tasks, and also within the current task's projection matrix. Based on Sect. 4.1, it is unclear how to generate the projection matrix for the first task. Moreover, for the projection matrix formulation in Eq. (6), it seems that the current O_t is purely determined by the previous tasks' O_i (i = 1, ..., t-1). It seems that the task data (or their features) are not utilized in the determination of O_t. How can one guarantee that the learned O_t is optimal without seeing the task data or their features?
(2) According to the results in Table 1, LcSP achieves significantly lower results than the compared methods on Split CIFAR100 and Split miniImageNet, but LcSP-GN and LcSP-BN are better than the compared methods. How can these results be explained? It seems that learning a BN for each task is essential to the good results on these two tasks. Do the compared methods also use the same normalization technique as LcSP-GN and LcSP-BN? Moreover, the comparisons of LcSP, LcSP-GN and LcSP-BN should also be included for the other datasets.
(3) The comparisons are insufficient; e.g., Adam-NSCL is also a typical continual learning method based on orthogonal projection, which should also be compared. The compared methods in Table 1 are not consistent across datasets; for example, GPM and OWM are not compared on the two versions of the MNIST dataset.
Clarity, Quality, Novelty And Reproducibility
Writing clarity can be improved. For example, how is O_1 set for the first task? Algorithm 1 should be revised more clearly. For example, line 1 in Alg. 1 is not necessary. The settings of \alpha and \beta in Algorithm 1 should also be made clear.
ICLR | Title
MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning
Abstract
Given a graph learning task, such as link prediction, on a new graph, how can we select the best method as well as its hyperparameters (collectively called a model) without having to train or evaluate any model on the new graph? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for evaluation-free graph learning model selection, called METAGL, which utilizes the prior performances of existing methods on various benchmark graph datasets to automatically select an effective model for the new graph, without any model training or evaluations. To quantify similarities across a wide variety of graphs, we introduce specialized meta-graph features that capture the structural characteristics of a graph. Then we design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner operating on this G-M network, which estimates the relevance of each model to different graphs. Extensive experiments show that using METAGL to select a model for the new graph greatly outperforms several existing meta-learning techniques tailored for graph learning model selection (up to 47% better), while being extremely fast at test time (∼1 sec).
1 INTRODUCTION
Given a graph learning (GL) task, such as link prediction, for a new graph dataset, how can we select the best method as well as its hyperparameters (HPs) (collectively called a model) without performing any model training or evaluations on the new graph? GL has received increasing attention recently (Zhang et al., 2022), achieving successes across various applications, e.g., recommendation and ranking (Fan et al., 2019; Park et al., 2020), traffic forecasting (Jiang & Luo, 2021), bioinformatics (Su et al., 2020), and question answering (Park et al., 2022). However, as GL methods continue to be developed, it becomes increasingly difficult to determine which model to use for the given graph.
Model selection (i.e., selecting a method and its configuration such as HPs) for graph learning has been largely ad hoc to date. A typical approach, called “no model selection”, is to simply apply popular methods to new graphs, often with the default HP values. However, it is well known that there is no universal learning algorithm that performs best on all problem instances (Wolpert & Macready, 1997), and such consistent model selection is often suboptimal. At the other extreme lies “naive model selection” (Fig. 1b), where all candidate models are trained on the new graph, evaluated on a hold-out validation graph, and then the best performing model for the new graph is selected. This approach is very costly as all candidate models are trained when a new graph arrives. Recent methods on neural architecture search (NAS) and hyperparameter optimization (HPO) of GL methods, which we review in Section 3, adopt smarter and more efficient strategies, such as Bayesian optimization (Snoek et al., 2012; Tu et al., 2019), which carefully choose a relatively small number of HP settings to evaluate. However, they still need to evaluate multiple configurations of each GL method on the new graph.
Evaluation-free model selection is yet another paradigm, which aims to tackle the limitations of the above approaches by attempting to simultaneously achieve the speed of no model selection and the accuracy of exhaustive model selection. Recently, a seminal work by Zhao et al. (2021) proposed a technique for outlier detection (OD) model selection, which carries over the observed performance of
OD methods on benchmark datasets for selecting OD methods. However, it does not address the unique challenges of GL model selection, and cannot be directly used to solve the problem. Inspired by this work, we systematically tackle the model selection problem for graph learning, especially link prediction. We choose link prediction as it is a key task for graph-structured data: it has many applications (e.g., recommendation, knowledge graph reasoning, and entity resolution), and several inference and learning tasks can be cast as a link prediction problem (e.g., Fadaee & Haeri (2019)). In this work, we develop METAGL, the first meta-learning framework for evaluation-free selection of graph learning models, which finds an effective GL model to employ for a new graph without training or evaluating any GL model on the new graph, as Figure 1a illustrates. METAGL satisfies all of the desirable features for GL model selection listed in Table 1, while no existing paradigm satisfies all of them.
The high-level idea of meta-learning based model selection is to estimate a candidate model’s performance on the new graph based on its observed performances on similar graphs. Our meta-learning problem for graph data presents a unique challenge of how to model graph similarities, and what characteristic features (i.e., meta-features) of a graph to consider. Note that this step is often not needed for traditional meta-learning problems on non-graph data, as features for non-graph objects (e.g., location, age of users) are often readily available. Also, the high complexity and irregularity of graphs (e.g., different numbers of nodes and edges, and widely varying connectivity patterns among different graphs) make the task even more challenging. To handle these challenges, we design specialized meta-graph features that can characterize major structural properties of real-world graphs.
Then to estimate the performance of a candidate model on a given graph, METAGL learns to embed models and graphs in a shared latent space such that their embeddings reflect the graph-to-model affinity. Specifically, we design a multi-relational graph called the G-M network, which captures multiple types of relations among models and graphs, and develop a meta-learner operating on this G-M network, based on an attentive graph neural network that is optimized to leverage meta-graph features and prior model performance into producing model and graph embeddings that can be effectively used to estimate the best performing model for the new graph. METAGL greatly outperforms existing meta-learners in GL model selection (Fig. 1c). In sum, the key contributions of this work are as follows.
• Problem Formulation. We formulate the problem of selecting effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph). To the best of our knowledge, we are the first to study this important problem.
• Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection. For meta-learning on various graphs, we design meta-graph features that quantify graph similarities by capturing structural characteristics of a graph.
• Effectiveness. Using METAGL for GL model selection greatly outperforms existing meta-learning techniques (up to 47% better, Fig. 1c), with negligible runtime overhead at test time (∼1 sec).
Benchmark Data/Code: To facilitate further research on this important new problem, we release code and data at https://github.com/NamyongPark/MetaGL, including performances of 400+ models on 300+ graphs, and 300+ meta-graph features.
Table 1: The proposed METAGL wins on features in comparison to existing graph learning (GL)
model selection (MS) paradigms, all of which fail to satisfy some of the desirable properties for GL MS.
Desiderata for Graph Learning (GL) MS                   | No model selection | Naive model selection (Fig. 1b) | Graph HPO/NAS (e.g., AutoNE, AGNN; see Sec. 3 and Fig. 1b) | METAGL (Ours, Fig. 1a)
Evaluation-free GL model selection                      | ✓ | ✗ | ✗ | ✓
Capable of MS from among multiple GL algorithms         | ✗ | ✓ | ✗ | ✓
Capitalizing on graph similarities for GL MS            | ✗ | ✗ | ✗ | ✓
Estimating model performance based on past observations | ✗ | ✗ | ✓ | ✓
2 PROBLEM FORMULATION
Given a new unseen graph, our goal is to select the best model from a heterogeneous set of graph learning models, without requiring any model evaluations or user intervention. In comparison to traditional meta-learning problems, where a model denotes a single method and its hyperparameters, a model in the graph meta-learning problem is more broadly defined to be
model M = {(graph embedding method, hyperparameters), (predictor, hyperparameters)}, (1)
as graph learning tasks usually involve two steps: (1) embedding the graph using a graph representation learning method, and (2) providing node embeddings to the predictor of a downstream task like link prediction. Both steps require learning a method with specific hyperparameters. Thus, there can be many models with the same embedding method and predictor that have different hyperparameters. Also, the set M of models may contain many different graph representation learning methods (e.g., node2vec (Grover & Leskovec, 2016), GraphSAGE (Hamilton et al., 2017), and DeepGL (Rossi et al., 2020), to name a few), as well as multiple task-specific predictors, making M heterogeneous. Given a training meta-corpus of n graphs G = {G_i}_{i=1}^{n} and m models M = {M_j}_{j=1}^{m} for GL tasks, we derive a performance matrix P ∈ R^{n×m}, where P_{ij} is the performance (e.g., accuracy) of model j on graph i. Our meta-learning problem for evaluation-free GL model selection is defined as follows.
Problem 1 (Evaluation-Free Graph Learning Model Selection).
Given
• an unseen test graph G_test ∉ G, and
• a potentially sparse performance matrix P ∈ R^{n×m} of m heterogeneous graph learning models M = {M_1, . . . , M_m} on n graphs G = {G_1, . . . , G_n},
Select
• the best model M* ∈ M to employ on G_test without evaluating any model in M on G_test.
3 RELATED WORK
A majority of works on GL focus on developing new algorithms for certain graph tasks and applications (Xia et al., 2021; Zhang et al., 2022). In comparison, there exist relatively few recent works that address the GL model selection problem (Zhang et al., 2021). They mainly focus on neural architecture search (NAS) and hyperparameter optimization (HPO) for GL models, especially graph neural networks (GNNs). Toward efficient and effective NAS and HPO in GL, they investigated several approaches, such as Bayesian optimization (AutoNE by Tu et al. (2019)), reinforcement learning (GraphNAS by Gao et al. (2020), AGNN by Zhou et al. (2019), Policy-GNN by Lai et al. (2020)), hypernets (ST-GCN by Zhu et al. (2021)), and evolutionary algorithms (Bu et al. (2021)), as well as techniques like subgraph sampling (AutoNE by Tu et al. (2019)), graph coarsening (JITuNE by Guo et al. (2021)), and hierarchical evaluation (HESGA by Yuan et al. (2021)). However, as summarized in Table 1, these methods cannot perform evaluation-free GL model selection (Problem 1), since they need to evaluate multiple configurations of each GL method on the new graph for model selection. Further, they are limited to finding the best configuration of a single algorithm, and thus cannot select a model from a heterogeneous model set M with various GL models, as Problem 1 requests. An earlier work on GNN design space (You et al., 2020) is somewhat relevant, as it proposes an approach to quantify graph similarities, which can be used to find an observed graph similar to the test graph, and select a model that performed best on it. However, their approach evaluates a set of anchor models on all graphs, and computes similarities between two graphs based on anchor models’ performance on them. As it needs to run anchor models on the new graph, it is inapplicable to Problem 1. For the first time, METAGL enables evaluation-free model selection from a heterogeneous set of GL models.
4 FRAMEWORK
In this section, we present METAGL, our meta-learning based framework that solves Problem 1 by leveraging prior performances of existing methods. METAGL consists of two phases: (1) offline meta-training phase (Section 4.1) that trains a meta-learner using observed graphs G and model performances P, and (2) online model prediction phase (Section 4.2), which selects the best model for the new graph. A summary of notations used in this work is provided in Table 3 in the Appendix.
4.1 OFFLINE META-TRAINING
Meta-learning leverages prior experience from related learning tasks to do a better job on the new task. When the new task is similar to some historical learning tasks, then the knowledge from those similar tasks can be transferred and applied to the new task. Thus effectively capturing the similarity between an input task and observed ones is fundamentally important for successful meta-learning. In meta-learning, the similarity between learning tasks is modeled using meta-features, i.e., characteristic features of the learning task that can be used to quantify the task similarity.
Meta-Graph Features. Given the graph learning model selection problem (where new graphs correspond to new learning tasks), METAGL captures graph similarity by extracting meta-graph features that reflect the structural characteristics of a graph. Notably, since graphs have irregular structure, with different numbers of nodes and edges, METAGL designs meta-graph features to be of the same size for any arbitrary graph, so that any two graphs can be easily compared via their meta-graph features. We use the symbol m ∈ R^d to denote the fixed-size meta-graph feature vector of graph G. We defer the details of how METAGL computes m to Section 4.3.
Model Performance Estimation. To estimate how well a model would perform on a given graph, METAGL represents models and graphs in a latent k-dimensional space, and captures the graph-to-model affinity using the dot product similarity between the two representations h_{G_i} and h_{M_j} of the i-th graph G_i and j-th model M_j, respectively, such that p_{ij} ≈ ⟨h_{G_i}, h_{M_j}⟩, where p_{ij} is the performance of model M_j on graph G_i. Then, to obtain the latent representation h, we design a learnable function f(·) that takes in relevant information on models and graphs from the meta-graph features m and the prior knowledge (i.e., model performances P and observed graphs G). Below in this section, we focus on the inputs to the function f(·), and defer the details of f(·) to Section 4.4. We first factorize the performance matrix P into latent graph factors U ∈ R^{n×k} and model factors V ∈ R^{m×k}, and take the model factor V_j ∈ R^k (the j-th row of V) as the input representation of model M_j. Then, METAGL obtains the latent embedding h_{M_j} of model M_j by h_{M_j} = f(V_j). For graphs, more information is available, since we have both the meta-graph features m and the meta-train graph factors U. However, while we have the same number of models during training and inference, we observe new graphs during inference, and thus cannot obtain the graph factor U_test for the test graph as we do for the train graphs, since matrix factorization (MF) is transductive by construction (i.e., existing models’ performance on the test graph would be needed to obtain latent factors for the test graph directly via MF). To handle this issue, we learn an estimator φ : R^d → R^k that maps the meta-graph features m into the latent factors of meta-train graphs obtained via MF above (i.e., for graph G_i with m, φ(m) = Û_i ≈ U_i), and use this estimated graph factor. We then combine both inputs ([m; φ(m)] ∈ R^{d+k}) and apply a linear transformation to make the input representation of graph G_i the same size as that of model M_j, obtaining the latent embedding of graph G_i as h_{G_i} = f(W[m; φ(m)]), where W ∈ R^{k×(d+k)} is a weight matrix. Thus, in METAGL, the performance p_{ij} of model M_j on graph G_i with meta-graph features m is estimated as
$$p_{ij} \approx \hat{p}_{ij} = \langle f(W[m; \phi(m)]), f(V_j) \rangle. \quad (2)$$
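Schematically, the scoring of Eq. (2) can be sketched as below; f, φ, W, and V stand for the learned components, and the toy stand-ins and all names are our assumptions rather than METAGL's actual implementation.

```python
import torch
import torch.nn as nn

def estimate_performance(m, phi, W, f, V):
    """Score all models on one graph via Eq. (2):
    p_hat[j] = <f(W [m; phi(m)]), f(V_j)>."""
    h_G = f(W @ torch.cat([m, phi(m)]))  # graph embedding
    H_M = f(V)                           # model embeddings (row-wise)
    return H_M @ h_G                     # dot-product affinity per model

# Toy stand-ins for the learned components (d = 32 meta-features,
# k = 16 latent factors, 40 candidate models).
d, k, m_models = 32, 16, 40
phi = nn.Linear(d, k)                 # meta-features -> latent graph factor
W = torch.randn(k, d + k)             # weight matrix of Section 4.1
f = nn.Sequential(nn.Linear(k, k), nn.ReLU(), nn.Linear(k, k))
V = torch.randn(m_models, k)          # latent model factors from factorizing P
scores = estimate_performance(torch.randn(d), phi, W, f, V)
best_model = torch.argmax(scores)     # i.e., the argmax of Eq. (5) below
```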
Meta-Learning Objective. For tasks where the goal is to estimate real values, such as accuracy, the mean squared error (MSE) is a typical choice for the loss function. While MSE is easy to optimize and
effective for regression, it does not directly take the ranking quality into account. By contrast, in our problem setup, accurately ranking models for each graph dataset is more important than estimating the performance itself, which makes MSE a suboptimal choice. In particular, Problem 1 focuses on finding the model with the best performance on the given graph. Therefore, we consider rank-based learning objectives, and among them, we adapt the top-1 probability to the proposed Problem 1 as follows. Let P̂_i ∈ R^m be the i-th row of P̂ (i.e., the estimated performance of all m models on graph G_i). Given P̂_i, the top-1 probability p^{P̂_i}_{top1}(j) of the j-th model M_j in the model set M is defined to be
$$p^{\hat{P}_i}_{top1}(j) = \frac{\pi(\hat{p}_{ij})}{\sum_{k=1}^{m} \pi(\hat{p}_{ik})} = \frac{\exp(\hat{p}_{ij})}{\sum_{k=1}^{m} \exp(\hat{p}_{ik})} \quad (3)$$
where π(·) is an increasing, strictly positive function, which we define to be an exponential function.
Theorem 1 (Cao et al. (2007)). Given the performance P̂_i of all models on graph G_i, p^{P̂_i}_{top1}(j) represents the probability of model M_j being ranked at the top of the list (i.e., of all models in M). The top-1 probabilities p^{P̂_i}_{top1}(j) for all j = 1, . . . , m form a probability distribution over the m models.
Based on Theorem 1, we obtain two probability distributions by applying the top-1 probability to the true performance P_i and the estimated performance P̂_i of the m models, and optimize METAGL such that the distance between the two resulting distributions is decreased. Using the cross entropy as the distance metric, we obtain the following loss over all n meta-train graphs G:
$$\mathcal{L}(P, \hat{P}) = -\sum_{i=1}^{n} \sum_{j=1}^{m} p^{P_i}_{top1}(j) \log\left( p^{\hat{P}_i}_{top1}(j) \right) \quad (4)$$
When P is sparse, meta-training can be performed via slightly modified Eqs. (3) and (4) in App. G.2.
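Since Eq. (3) with π = exp is exactly a row-wise softmax, the loss of Eq. (4) reduces to a ListNet-style cross-entropy between per-graph softmax distributions. A minimal PyTorch sketch follows; averaging over graphs instead of summing is a normalization choice of ours.

```python
import torch
import torch.nn.functional as F

def top1_listwise_loss(P_true, P_est):
    """Cross-entropy between top-1 probability distributions (Eqs. (3)-(4)).
    P_true, P_est: (n, m) tensors of true / estimated model performances."""
    p_true = F.softmax(P_true, dim=1)        # Eq. (3) applied to true scores
    log_p_est = F.log_softmax(P_est, dim=1)  # log of Eq. (3) on estimates
    return -(p_true * log_p_est).sum(dim=1).mean()

# Toy usage: n = 5 graphs, m = 8 models.
loss = top1_listwise_loss(torch.rand(5, 8), torch.randn(5, 8, requires_grad=True))
loss.backward()
```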
4.2 ONLINE MODEL PREDICTION
In the meta-training phase, METAGL learns the estimators f(·) and φ(·), as well as the weight matrix W and the latent model factors V. Given a new graph G_test, METAGL first computes the meta-graph features m_test ∈ R^d as discussed in Section 4.3. Then m_test is regressed to obtain the (approximate) latent graph factors Û_test = φ(m_test) ∈ R^k. Recall that the model factors V learned in the meta-training stage can be directly used for model prediction. Then, model M_j's performance on the test graph G_test can be estimated by applying Equation (2) with m_test and φ(m_test). Finally, the model with the highest estimated performance is selected by METAGL as the best model M*, i.e.,
$$M^{*} \leftarrow \arg\max_{M_j \in \mathcal{M}} \ \langle f(W[m_{test}; \phi(m_{test})]), f(V_j) \rangle \quad (5)$$
Note that model selection using Equation (5) depends only on the meta-graph features m_test of the test graph and the other pretrained estimators and latent factors that METAGL learned in the meta-training phase. As no model training or evaluation is involved, model prediction by METAGL is much faster than training and evaluating different models multiple times, as our experiments show in Section 5.4. Further, the model prediction process is fully automatic, since it does not require users to choose or fine-tune any values at test time. Figure 2 shows an overview of the model prediction process, and Algorithm 1 in the Appendix lists the steps for offline meta-training and online model prediction.
4.3 STRUCTURAL META-GRAPH FEATURES
Figure 3: Meta-graph features in METAGL are derived in two steps, mapping the input graph G through the extractors ψ_1, ..., ψ_q and the statistical summaries Σ to the meta-graph feature vector m. See Section 4.3 for details.
Meta-graph features are a crucial component of our meta-learning approach METAGL, since they capture important structural characteristics of an arbitrary graph. Meta-graph features enable METAGL to quantify graph similarities and utilize prior experience with observed graphs for GL model selection. It is important that a sufficient and representative set of meta-graph features is used to capture the important structural properties of graphs from a wide variety of domains, including biological, technological, information, and social networks, to name a few.
In this work, we are not able to leverage the commonly used simple statistical meta-features used by previous work on model selection-based meta-learning, as they cannot be used directly
over irregular and complex graph data. To address this problem, we introduce the notion of meta-graph features and develop a general framework for computing them on any arbitrary graph.
Meta-graph features in METAGL are derived in two steps, as shown in Figure 3. First, we apply a set of structural meta-feature extractors Ψ = {ψ_1, . . . , ψ_q} to the input graph G, obtaining Ψ(G) = {ψ_1(G), . . . , ψ_q(G)}. Applying ψ ∈ Ψ to G yields a vector or a distribution of values for the nodes (or edges) in the graph, such as the degree distribution or PageRank scores. That is, in Figure 3, ψ_1 can be a degree distribution, ψ_2 can be PageRank scores of all nodes, and so on. Specifically, we use both local and global structural feature extractors. To capture the local structural properties around a node or an edge, we compute the node degree, the number of wedges (i.e., paths of length 2) and triangles centered at each node, and also the frequency of triangles for every edge. To capture global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. Appendix D summarizes the meta-feature extractors used in this work.
Let ψ denote a local structural extractor for nodes. Given a graph G_i = (V_i, E_i) and ψ, we obtain a |V_i|-dimensional node vector ψ(G_i). Since any two graphs G_i and G_j are likely to have different numbers of nodes and edges, the resulting structural feature matrices ψ(G_i) and ψ(G_j) for these graphs are also likely to be of different sizes, as the rows of these matrices correspond to nodes or edges of the corresponding graph. Thus, in general, these structural feature-based representations of the graphs cannot be used directly to derive similarity between graphs.
Now, to address this issue, we apply the set Σ of global statistical meta-graph feature extractors to every ψ_i(G), ∀i = 1, . . . , q, which summarizes each ψ_i(G) into a fixed-size vector. Specifically, Σ(ψ_i(G)) applies each of the statistical functions in Σ (e.g., mean, kurtosis, etc.) to the distribution ψ_i(G), computing a real number that summarizes the given feature distribution ψ_i(G) from a different statistical point of view, and producing a vector Σ(ψ_i(G)) ∈ R^{|Σ|}. Then we obtain the meta-graph feature vector m of graph G by concatenating the resulting meta-graph feature vectors:
$$m = [\Sigma(\psi_1(G)) \ \cdots \ \Sigma(\psi_q(G))] \in \mathbb{R}^{d}. \quad (6)$$
Table 5 in Appendix D lists the global statistical functions Σ used in this work to derive meta-graph features. Further, in addition to the node- and edge-level structural features, we also compute global graph statistics (scalars directly derived from the graph, e.g., density and degree assortativity coefficient) and append them to m, i.e., to the node- or edge-level structural features obtained above.
Most importantly, given any arbitrary graph G′, the proposed approach is guaranteed to output a fixed d-dimensional meta-graph feature vector characterizing G′. Hence, the structural similarity of any two graphs G and G′ can be quantified using a similarity function over m and m′, respectively.
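For illustration, the following NetworkX/SciPy sketch instantiates the two-step construction of Eq. (6) with a small subset of the extractors ψ and statistics Σ named above; the function name and the particular choices are ours, not the full feature set of METAGL.

```python
import numpy as np
import networkx as nx
from scipy import stats

def meta_graph_features(G):
    """Two-step meta-graph feature construction (Eq. (6)): structural
    extractors psi produce per-node distributions, and statistical
    functions Sigma summarize each into a fixed-size vector."""
    psi = [
        np.array([d for _, d in G.degree()], dtype=float),        # degrees
        np.array(list(nx.pagerank(G).values())),                  # PageRank
        np.array(list(nx.core_number(G).values()), dtype=float),  # k-core
    ]
    sigma = [np.mean, np.std, np.min, np.max, np.median,
             stats.skew, stats.kurtosis]
    m = np.concatenate([[s(x) for s in sigma] for x in psi])
    # Append global graph statistics (scalars derived from the whole graph).
    m = np.append(m, [nx.density(G), nx.degree_assortativity_coefficient(G)])
    return m  # fixed-size vector regardless of |V| and |E|

m1 = meta_graph_features(nx.karate_club_graph())
m2 = meta_graph_features(nx.erdos_renyi_graph(100, 0.05, seed=0))
similarity = np.dot(m1, m2) / (np.linalg.norm(m1) * np.linalg.norm(m2))
```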
4.4 EMBEDDING MODELS AND GRAPHS
Given an informative context (i.e., input features) of models and graphs that METAGL learns from model performances P and meta-graph features M (Sections 4.1 and 4.3), how can we use it to effectively learn model and graph embeddings that capture graph-to-model affinity? We note that similar entities can make each other’s context more accurate and informative. For instance, in our problem setup, similar models tend to have similar performance distributions over graphs, and likewise similar graphs are likely to exhibit similar affinity to different models. With this consideration, we model the task as a graph representation learning problem, where we construct a graph called G-M network that connects similar graphs and models, and learn the graph and model embeddings over it.
G-M Network. We define the G-M network to be a multi-relational graph with two types of nodes (i.e., models and graphs), where edges connect similar model nodes and graph nodes. To measure similarity among graphs and models, we utilize the latent graph and model factors (U and V, respectively) obtained by factorizing P, as well as the meta-graph features M. More precisely, we use the estimated graph factor Û instead of U to let the same graph construction process work for new graphs. Note that this gives us two types of features for graph nodes (i.e., Û and M) and one type of features for model nodes (i.e., V). To let different features influence the embedding step differently as needed, we connect graph nodes and model nodes using five types of edges: M-g2g, P-g2g, P-m2m, P-g2m, P-m2g, where g and m denote the type of nodes that an edge connects (graph and model, respectively), and M and P denote that the edge is based on meta-graph features and model performance, respectively. For example, M-g2g and P-g2g edges connect two graph nodes that are similar in terms of M and Û, respectively. Then, for each edge type, we construct a k-NN graph by connecting nodes to their top-k similar nodes, where node-to-node similarity is defined as the cosine similarity between the
corresponding node features. For instance, for the P-g2m edge type, graph nodes and model nodes are linked based on the similarity between Û and V. Fig. 7 in the Appendix illustrates the G-M network.
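As a sketch of this construction, the snippet below builds the five edge types via cosine-similarity k-NN search; the shapes, the value of k, and the helper name are toy choices of ours.

```python
import numpy as np

def knn_edges(X_src, X_dst, k, exclude_self=False):
    """Build k-NN edges from rows of X_src to rows of X_dst by cosine
    similarity, one call per G-M network edge type. Returns (src, dst) pairs."""
    A = X_src / np.linalg.norm(X_src, axis=1, keepdims=True)
    B = X_dst / np.linalg.norm(X_dst, axis=1, keepdims=True)
    S = A @ B.T                                # pairwise cosine similarities
    if exclude_self:
        np.fill_diagonal(S, -np.inf)           # no self-loops for g2g / m2m
    nbrs = np.argsort(-S, axis=1)[:, :k]       # top-k most similar nodes
    return [(i, j) for i in range(S.shape[0]) for j in nbrs[i]]

# Hypothetical shapes: n graphs, m models, k-dim factors, d-dim meta-features.
n, m, k_dim, d = 50, 40, 16, 32
U_hat, V = np.random.rand(n, k_dim), np.random.rand(m, k_dim)
M_feat = np.random.rand(n, d)
edges = {
    "M-g2g": knn_edges(M_feat, M_feat, k=5, exclude_self=True),
    "P-g2g": knn_edges(U_hat, U_hat, k=5, exclude_self=True),
    "P-m2m": knn_edges(V, V, k=5, exclude_self=True),
    "P-g2m": knn_edges(U_hat, V, k=5),
    "P-m2g": knn_edges(V, U_hat, k=5),
}
```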
Learning Over the G-M Network. Given the G-M network G_train with meta-train graphs and models, graph neural networks (GNNs) provide an effective framework to embed models and graphs via (weighted) neighborhood aggregation over G_train. However, since the structure of the G-M network is induced by simple k-NN search, some of the neighbors may not provide the same amount of information as others, or may even provide noisy information. We found it helpful to perform attentive neighborhood aggregation, so that more informative neighbors can be given larger weights. To this end, we choose to use attentive GNNs designed for multi-relational networks, and specifically use HGT (Hu et al., 2020). Then the embedding function f(·) in Section 4.1 is defined to be f(x) = HGT(x, G_train) during training, which transforms the input node feature x into an embedding via attentive neighborhood aggregation over G_train. Further details of HGT are provided in Appendix G.4.
Inference Over the G-M Network. For inference at test time, we extend G_train to a larger G-M network G_test that additionally contains test graph nodes, and edges between test graph nodes and the existing graphs and models in G_train. The extension is done in the same way as in the training phase, by finding the top-k similar nodes. Then the embedding at test time can be obtained by f(x) = HGT(x, G_test).
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Models and Evaluation. A model in our problem (Eqn. 1) consists of two components. The first component performs graph representation learning (GRL), and the other component leverages the learned embeddings for a downstream task of interest. In this work, we focus on link prediction, which is a key task for graph-structured data, as discussed in Section 1. We evaluate the performance of selecting a link prediction model for new graphs without any model evaluation. For the first component, we use 12 popular GRL methods, and for the second component, for link scoring, we use a simple estimator that computes the cosine similarity between two node embeddings. This results in a model set M with 423 models. The full list of models is given in Table 4 in Appendix C. For evaluation, we create a testbed containing benchmark graphs, meta-graph features, and a performance matrix. We construct the performance matrix by evaluating each link prediction model in M on the graphs in the testbed, in terms of the mean average precision score. Then we evaluate METAGL and the baselines via 5-fold cross validation, where the benchmark graphs are split into meta-train G_train and meta-test G_test graphs for each fold, and meta-learners trained over the meta-train graph data are evaluated using the meta-test graph datasets. Thus, model performances over the meta-test graphs G_test and the meta-graph features of G_test were unseen during training, and used only for testing.
Table 2: The proposed METAGL outperforms existing meta-learners, given a fully observed performance matrix. Best results are in bold, and second-best results are underlined. The “METAGL_Baseline” notation (e.g., METAGL_S2) indicates that the baseline meta-learner uses METAGL’s meta-graph features.
Method                 MRR    AUC    NDCG@1
Random Selection       0.011  0.490  0.745
(Simple)
Global Best-AvgPerf    0.163  0.877  0.932
Global Best-AvgRank    0.103  0.867  0.930
METAGL_AS              0.222  0.905  0.947
METAGL_ISAC            0.202  0.887  0.939
(Optimization-based)
METAGL_S2              0.170  0.910  0.945
METAGL_ALORS           0.190  0.897  0.950
METAGL_NCF             0.140  0.869  0.934
METAGL_MetaOD          0.075  0.599  0.889
METAGL                 0.259  0.941  0.962
Since model selection aims to accurately predict the best model for a new graph, we evaluate the top-1 prediction performance in terms of MRR (Mean Reciprocal Rank), AUC, and NDCG (Normalized Discounted Cumulative Gain). To apply MRR and AUC, we label models such that the top-1 model (i.e., the model with the best performance for the given graph) is labeled as 1, while all others are labeled as 0. For NDCG, we report NDCG@1, which evaluates the relevance of the top-1 predicted model. All metrics range from 0 to 1, with larger values indicating better performance.
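As an illustration of this labeling and evaluation scheme, the sketch below computes the three metrics for a single test graph, assuming NumPy arrays and scikit-learn; for NDCG@1 we assume graded relevance given by the true performance values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def eval_top1_metrics(true_perf, pred_scores):
    """true_perf: actual performance of each model; pred_scores: predicted scores."""
    y = (true_perf == true_perf.max()).astype(int)  # top-1 model labeled 1, rest 0
    order = np.argsort(-pred_scores)                # models sorted by predicted score
    rank = int(np.where(y[order] == 1)[0][0]) + 1   # rank of the true best model
    mrr = 1.0 / rank
    auc = roc_auc_score(y, pred_scores)
    ndcg1 = true_perf[order[0]] / true_perf.max()   # relevance of the top-1 prediction
    return mrr, auc, ndcg1
```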
Baselines. As this is the first work on evaluation-free model selection in GL, there are no immediate baselines for comparison. Instead, we adapt baselines used for OD model selection (Zhao et al., 2021) and collaborative filtering to our problem setting. In Appendix A, we describe the baselines in detail. Baselines are grouped into two categories: (a) Simple meta-learners select a model that performs generally well, either globally or locally: Global Best (GB)-AvgPerf, GB-AvgRank, ISAC (Kadioglu et al., 2010), and ARGOSMART (AS) (Nikolić et al., 2013); (b) Optimization-based meta-
learners learn to estimate the model performance by modeling the relation between meta features and model performances: Supervised Surrogates (S2) (Xu et al., 2012), ALORS (Misir & Sebag, 2017), NCF (He et al., 2017), and MetaOD (Zhao et al., 2021). We also include Random Selection (RS) as a baseline to see how these methods compare to random scoring.
Note that, except the simplest meta-learners RS and GB, no baselines can handle graph data, and thus they cannot estimate model performance on the new graph on their own. In that sense, they are not direct competitors of METAGL. We enable them to be used for GL model selection by providing the proposed meta-graph features. MetaOD, which was originally designed for OD model selection, is also given the same meta-graph features to perform GL model selection. We denote baselines either by combining METAGL with their names (e.g., METAGL_S2) to clearly show that they use METAGL’s meta-graph features, or using their name alone (e.g., S2) for simplicity.
5.2 MODEL SELECTION ACCURACY
Fully Observed Performance Matrix. In this setup, meta-learners are trained using a full performance matrix P with no missing entries. The model selection accuracy of all meta-learners in this setup is reported in Table 2, where METAGL achieves the best performance in all metrics, with 17% higher MRR than the best baseline (AS).
• Among simple meta-learners, Global Best meta-learners, which simply average model performance or rank over all observed graphs, are outperformed by more sophisticated meta-learners AS and ISAC, which leverage dataset similarities for model selection using meta-graph features. • For optimization-based meta-learners, it is important to be aware of how models and graphs relate to each other, and have high flexibility to capture that complex relationship. In methods like ALORS and MetaOD, relations between models and datasets (i.e., relative position of models and datasets in the embedding subspace) are learned rather indirectly via reconstructing the performance matrix. • METAGL, in contrast, directly captures graph-to-model affinity by modeling their relations via employing flexible GNNs over the G-M network, as well as reconstructing the performance matrix. As a result, METAGL consistently outperforms other optimization-based meta-learners.
Partially Observed Performance Matrix. In this setup, meta-learners are trained using a sparse performance matrix P, obtained by randomly masking out a full P. Figure 4 reports results obtained with varying sparsity, ranging up to 0.9. In this more challenging setup, METAGL consistently performs the best across all levels of sparsity, achieving up to 47% higher MRR than the best baseline.
• With increased sparsity, nearly all meta-learners perform increasingly worse, as one might expect. • While AS was the best baseline given a full P, its accuracy decreased rapidly as sparsity increased.
Since AS selects a model based on the 1NN meta-train graph, it is highly sensitive to P’s sparsity. • Baselines such as the Global Best baselines are more stable, as they average across multiple graphs. • Optimization-based methods like METAGL and S2 perform favorably compared to simple meta-learners in this more challenging sparse setting as well.
5.3 EFFECTIVENESS OF META-GRAPH FEATURES
In Figure 5, we evaluate how the performance of meta-learners obtained with the proposed meta-graph feature (Section 4.3) compares to that obtained with existing graph-level embedding (GLE) techniques, GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), and GraphLoG (Xu et al., 2021).
Figure 8 in App. I.1 provides results for three other GLE methods, WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019).
• Most points are below the diagonal in Figure 5, i.e., all meta-learners nearly consistently perform better when they use METAGL’s meta-graph features than when they use existing GLE methods. This shows the effectiveness of METAGL’s features for the proposed task of GL model selection. • METAGL performs the best, whether METAGL’s features or existing GLE methods are used.
5.4 MODEL SELECTION EFFICIENCY
To evaluate how efficient METAGL’s model selection is, we measure its runtime (i.e., the time to create metagraph features for the new graph at test time, plus the time to predict the best model), and compare it with the time to train a GL model. Figure 6 shows the distribution in box plots, where red and green lines denote the median and mean, respectively.
Results show that METAGL is fast, and incurs negligible runtime overhead: its runtime is around 1 second or less in most cases (Figure 6a). Notably, compared to training each GL model for only 5% of its available model configurations, METAGL takes considerably less
time, i.e., a median of 5% and a mean of 11% of the time required for model training (Figure 6b). Given large-scale test graphs in practice, the speed-up enabled by METAGL will be greater than that reported in Figure 6b, due to the increased training time on such graphs. Also, METAGL’s model selection process can be further streamlined, e.g., by parallelizing meta-feature generation process. We provide additional results on the runtime of METAGL and naive model selection in Appendix I.2.
5.5 ADDITIONAL RESULTS
We present an ablation study in Appendix I.3, which shows the effectiveness of METAGL’s proposed components, e.g., the meta-learning objective, the G-M network, and the graph encoder used by METAGL. We evaluate the sensitivity of model selection approaches to the variance of the performance matrix P in Appendix I.4, and compare the predicted model performance with the actual best performance in Appendix I.5.
6 CONCLUSION
As more and more GL models are developed, selecting which one to use is becoming increasingly hard. Toward near-instantaneous, automatic GL model selection, we make the following contributions.
• Problem Formulation. We present the first problem formulation to select effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph). • Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection, and meta-graph features to quantify graph similarities. • Effectiveness. Using METAGL for model selection greatly outperforms existing meta-learning techniques (up to 47% better), while incurring negligible runtime overhead at test time (∼1 sec).
A BASELINES
As this is the first work on evaluation-free model selection in graph learning (GL), there are no immediate baselines for comparison. Instead, we adapt baselines used in MetaOD (Zhao et al., 2021) for outlier detection (OD) model selection, as well as collaborative filtering, to our problem setting. The baselines used in our experiments can be organized into the following two categories.
(a) Simple meta-learners select a model that performs generally well, either globally or locally.
• Global Best (GB)-AvgPerf selects the model that has the largest average performance over all meta-train graphs.
• Global Best (GB)-AvgRank computes the rank of all models (in percentile) for each graph, and selects the model with the largest average ranking over all meta-train graphs.
• ISAC (Kadioglu et al., 2010) first clusters meta-train datasets using meta-graph features, and at test time, finds the cluster closest to the test graph, and selects the model with the largest average performance over all graphs in that cluster.
• ARGOSMART (AS) (Nikolić et al., 2013) finds the meta-train graph closest to the test graph (i.e., 1NN) in terms of meta-graph feature similarity, and selects the model with the best result on the 1NN graph.
(b) Optimization-based meta-learners learn to estimate the model performance by modeling the relation between meta-graph features and model performances.
• Supervised Surrogates (S2) (Xu et al., 2012) learns a surrogate model (a regressor) that maps meta-graph features to model performances.
• ALORS (Misir & Sebag, 2017) factorizes the performance matrix into latent factors on graphs and models, and estimates the performance to be the dot product between the two factors, where a non-linear regressor maps meta-graph features into the latent graph factors.
• NCF (He et al., 2017) replaces the dot product used in ALORS with a more general neural architecture that estimates performance by combining the linearity of matrix factorization and non-linearity of deep neural networks.
• MetaOD (Zhao et al., 2021) pioneered the field of unsupervised OD model selection by designing meta-features specialized to capture the outlying characteristics of datasets, as well as improving upon ALORS with the adoption of an NDCG-based meta-training objective. We enable MetaOD to be applicable to our problem setting, by applying our proposed meta-graph features to MetaOD.
In addition, we also include Random Selection (RS) as a baseline, to see how meta-learners perform in comparison to random scoring. Note that among the above approaches, only GB-AvgPerf and GB-AvgRank do not rely on meta-features for model selection. All other meta-learners make use of the proposed meta-graph features to estimate model performances on an unseen test graph.
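To make the simple meta-learners concrete, below is a minimal sketch of GB-AvgPerf and AS over a performance matrix P and meta-graph features; it is a simplified rendering under the stated assumptions, not the exact implementations of the cited works.

```python
import numpy as np

def global_best_avg_perf(P):
    """Select the model with the highest mean performance over all meta-train graphs."""
    return int(np.argmax(P.mean(axis=0)))

def argosmart_1nn(P, train_feats, test_feat):
    """Select the best model on the meta-train graph most similar to the test graph."""
    A = train_feats / (np.linalg.norm(train_feats, axis=1, keepdims=True) + 1e-12)
    b = test_feat / (np.linalg.norm(test_feat) + 1e-12)
    nn = int(np.argmax(A @ b))     # 1NN meta-train graph by cosine similarity
    return int(np.argmax(P[nn]))   # model with the best result on the 1NN graph
```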
B NOTATIONS
Table 3 provides a list of notations frequently used in this work.
C MODEL SET
A model in the model set M refers to a graph representation learning (GRL) method along with its hyperparameter settings, and a predictor that makes a downstream task-specific prediction given the node embeddings from the GRL method. In this work, we use a link predictor which scores a given link by computing the cosine similarity between the two nodes’ embeddings. Table 4 shows the complete list of 12 popular GRL methods and their specific hyperparameter settings, which together compose the 423 unique models in the model set M. Note that the link predictor is omitted from Table 4, since we apply the same cosine similarity-based link predictor to all GRL methods.
Methods | Hyperparameter Settings | Count
SGC (Wu et al., 2019a) | # (number of) hops k ∈ {1, 2, 3} | 3
GCN (Kipf & Welling, 2017) | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10} | 6
GraphSAGE (Hamilton et al., 2017) | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10}, aggregation functions f ∈ {mean, gcn, lstm} | 18
node2vec (Grover & Leskovec, 2016) | p, q ∈ {1, 2, 4} | 9
role2vec (Ahmed et al., 2018) | p, q ∈ {0.25, 1, 4}, α ∈ {0.01, 0.1, 0.5, 0.9, 0.99}, motif combinations H ∈ {{H1}, {H2, H3}, {H2, H3, H4, H6, H8}, {H1, H2, . . . , H8}} | 180
GraRep (Cao et al., 2015) | k ∈ {1, 2} | 2
DeepWalk (Perozzi et al., 2014) | p = 1, q = 1 | 1
HONE (Rossi et al., 2018) | k ∈ {1, 2}, Dlocal ∈ {4, 8, 16}, variant v ∈ {1, 2, 3, 4, 5} | 30
node2bits (Jin et al., 2019) | walk num wn ∈ {5, 10, 20}, walk len wl ∈ {5, 10, 20}, log base b ∈ {2, 4, 8, 10}, feats f ∈ {16} | 36
DeepGL (Rossi et al., 2020) | α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, motif size ∈ {4}, eps tolerance t ∈ {0.01, 0.05, 0.1}, relational aggr. ∈ {{m}, {p}, {s}, {v}, {m, p}, {m, v}, {s, m}, {s, p}, {s, v}} where m, p, s, v denote mean, product, sum, var | 135
LINE (Tang et al., 2015) | # hops/order k ∈ {1, 2} | 2
Spectral Emb. (Luo et al., 2003) | tolerance t ∈ {0.001} | 1
Total Count | | 423
D META-GRAPH FEATURES
Structural Meta-Feature Extractors. To capture the local structural properties around a node or an edge, we compute the distribution of node degrees, number of wedges (i.e., a path of length 2), triangles centered at each node, as well as the frequency of triangles for each edge. To capture the global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. We also capture the global graph-level statistics (i.e., different from local
node/edge-level structural properties), such as the density of A and AAᵀ, where A is the adjacency matrix, and the degree assortativity coefficient r.
Global Statistical Functions. For each of the structural property distributions (degree, k-core numbers, and so on) derived by the above structural meta-feature extractors, we apply the set Σ of global statistical functions (Table 5) over it to obtain a fixed-length vector representation for the node/edge/graph-level structural feature distribution.
After obtaining a set of meta-graph features, we concatenate all of them together to create the final meta-graph feature vector m for the graph.
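A minimal sketch of this two-step pipeline follows, using NetworkX and NumPy for a small, illustrative subset of the extractors and statistics described above (an undirected input graph is assumed; this is not the full feature set used by METAGL).

```python
import networkx as nx
import numpy as np

STATS = [np.mean, np.std, np.min, np.max, np.median]  # global statistical functions

def meta_graph_features(G):
    """G: an undirected NetworkX graph; returns a fixed-size feature vector."""
    dists = [
        np.array([d for _, d in G.degree()], dtype=float),        # degree distribution
        np.array(list(nx.pagerank(G).values())),                  # PageRank scores
        np.array(list(nx.core_number(G).values()), dtype=float),  # k-core numbers
        np.array(list(nx.triangles(G).values()), dtype=float),    # triangles per node
    ]
    feats = [f(x) for x in dists for f in STATS]                  # fixed-size summaries
    feats.append(nx.density(G))                                   # graph-level statistic
    feats.append(nx.degree_assortativity_coefficient(G))          # assortativity r
    return np.asarray(feats)
```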
E GRAPH DATASETS
The testbed used in this work comprises 301 graphs that have widely varying structural characteristics. Table 6 provides a list of all graph datasets in the testbed. All graph data are from (Rossi & Ahmed, 2015); they are publicly available under the Creative Commons Attribution-ShareAlike License.
Graph # Nodes # Edges Graph # Nodes # Edges
66 CL-1000-1d8-trial3 925 3714 217 nci1_g3711 89 106 67 CL-1000-1d9-trial1 928 3510 218 nci1_g3901 93 102 68 CL-1000-1d9-trial2 912 3053 219 nci1_g3990 90 105 69 CL-1000-1d9-trial3 932 3278 220 nci1_g4094 90 98 70 CL-1000-2d0-trial1 909 2795 221 power-1138-bus 1138 2596 71 CL-1000-2d0-trial2 899 2941 222 power-494-bus 494 1080 72 CL-1000-2d0-trial3 916 3010 223 power-662-bus 662 1568 73 CL-1000-2d1-trial1 903 2430 224 power-685-bus 685 1967 74 CL-1000-2d1-trial2 911 2734 225 power-bcspwr09 1723 4117 75 CL-1000-2d1-trial3 915 2782 226 power-eris1176 1176 9864 76 DD_g1 327 899 227 rec-amazon 91813 125704 77 DD_g10 146 328 228 rec-movielens-tag-movies-10m 16528 71081 78 DD_g100 349 1005 229 road-chesapeake 39 170 79 DD_g1000 183 408 230 road-ChicagoRegional 1467 1298 80 DD_g1001 88 203 231 road-euroroad 1174 1417 81 DD_g1002 104 255 232 road-luxembourg-osm 114599 119666 82 DD_g1003 53 116 233 road-minnesota 2642 3303 83 DD_g1004 94 230 234 road-usroads-48 126146 161950 84 DD_g1005 370 903 235 rt-retweet 96 117 85 DD_g1006 246 568 236 rt-twitter-copen 761 1029 86 DD_g1007 309 732 237 rt_alwefaq 4171 7063 87 DD_g1008 109 304 238 rt_assad 2139 2788 88 DD_g1009 129 272 239 rt_bahrain 4676 7979 89 DD_g101 306 728 240 rt_barackobama 9631 9775 90 DD_g1010 157 363 241 rt_damascus 3052 3869 91 DD_g1011 47 136 242 rt_dash 6288 7436 92 DD_g1012 146 365 243 rt_gmanews 8373 8721 93 DD_g1013 93 211 244 rt_gop 4687 5529 94 DD_g1014 119 273 245 rt_http 8917 10314 95 DD_g1015 102 244 246 rt_islam 4497 4616 96 DD_g1016 113 291 247 rt_israel 3698 4165 97 DD_g1017 162 376 248 rt_lebanon 3961 4436 98 DD_g1018 296 680 249 rt_libya 5067 5541 99 DD_g1019 131 353 250 rt_lolgop 9765 10075 100 DD_g102 561 1422 251 rt_obama 3212 3423 101 DD_g1020 228 541 252 rt_occupy 3225 3944 102 DD_g1021 329 787 253 rt_occupywallstnyc 3609 3833 103 DD_g1022 294 730 254 rt_oman 4904 6230 104 DD_g1023 172 425 255 rt_onedirection 7987 8103 105 DD_g1024 59 160 256 rt_p2 4902 6018 106 DD_g1025 88 205 257 rt_saudi 7252 8061 107 DD_g1026 247 578 258 rt_tcot 4547 5503 108 DD_g1027 108 223 259 rt_tlot 3665 4475 109 DD_g1028 72 137 260 rt_uae 5248 6387 110 DD_g1029 99 215 261 rt_voteonedirection 2280 2464 111 DD_g103 265 647 262 sc-nasasrb 54870 1311227 112 DD_g1030 136 351 263 soc-advogato 6551 43427 113 DD_g104 372 999 264 soc-dolphins 62 159 114 DD_g105 423 1192 265 soc-firm-hi-tech 33 91 115 DD_g106 574 1355 266 soc-gplus 23628 39194 116 DD_g107 130 292 267 soc-hamsterster 2426 16630 117 DD_g108 483 1137 268 soc-highschool-moreno 70 274 118 DD_g109 132 315 269 soc-physicians 241 923 119 DD_g11 312 761 270 soc-sign-bitcoinalpha 3783 14124 120 DD_g110 394 1137 271 soc-student-coop 185 311 121 DD_g111 483 1520 272 soc-wiki-Vote 889 2914 122 DD_g112 266 631 273 socfb-Amherst 2235 90954 123 DD_g113 347 853 274 socfb-Bowdoin47 2252 84387 124 DD_g114 334 761 275 socfb-Caltech 769 16656 125 DD_g115 336 946 276 socfb-Hamilton46 2314 96394 126 eco-everglades 69 885 277 socfb-Haverford76 1446 59589 127 eco-florida 128 2075 278 socfb-nips-ego 2888 2981 128 eco-foodweb-baydry 128 2106 279 socfb-Oberlin44 2920 89912 129 eco-foodweb-baywet 128 2075 280 socfb-Reed98 962 18812 130 eco-mangwet 97 1446 281 socfb-Simmons81 1518 32988 131 eco-stmarks 54 353 282 socfb-Smith60 2970 97133 132 email-dnc-corecipient 906 10429 283 socfb-Swarthmore42 1659 61050 133 email-dnc-leak 1891 4465 284 socfb-Trinity100 2613 111996 134 email-enron-only 143 623 285 socfb-USFCA72 2682 65252 135 email-EU 32430 54397 286 socfb-Vassar85 3068 119161 136 email-radoslaw 167 3251 287 
socfb-Villanova62 7772 314989 137 email-univ 1133 5451 288 socfb-Wellesley22 2970 94899 138 enzymes_g103 59 115 289 socfb-Williams40 2790 112986 139 enzymes_g118 95 121 290 tech-routers-rf 2113 6632
140 enzymes_g123 90 127 291 tech-routers-rf 2113 6632 141 enzymes_g199 62 108 292 web-BerkStan 12305 19500 142 enzymes_g204 57 105 293 web-edu 3031 6474 143 enzymes_g209 57 101 294 web-EPA 4271 8909 144 enzymes_g215 48 104 295 web-google 1299 2773 145 enzymes_g224 54 105 296 web-indochina-2004 11358 47606 146 enzymes_g279 60 107 297 web-polblogs 643 2280 147 enzymes_g291 62 104 298 web-spam 4767 37375 148 enzymes_g292 60 100 299 web-webbase-2001 16062 25593 149 enzymes_g293 96 109 300 web-wiki-chameleon 2277 31421 150 enzymes_g295 123 139 301 web-wiki-crocodile 11631 170918 151 enzymes_g296 125 141
F EXPERIMENTAL DETAILS
F.1 EXPERIMENTAL SETTINGS
Software. We used PyTorch1 for implementing the training and inference pipeline, and used the DGL’s implementation of HGT2. For MetaOD (Zhao et al., 2021), we used the implementation provided by the authors3. We used the Karate Club library (Rozemberczki et al., 2020) for the implementations of the following graph-level embedding (GLE) methods, Graph2Vec (Narayanan et al., 2017), GL2Vec (Chen & Koga, 2019), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). For GraphLoG (Xu et al., 2021), we used the authors’ implementation4. We used open source libraries, such as NetworkX5 and NumPy6, for implementing meta-graph feature extractors.
Hyperparameters. We set the embedding size k to 32 for METAGL and other meta-learners that learn embeddings of models and graphs. For METAGL, we created the G-M network by connecting nodes to their top-30 similar nodes. As the embedding function f(·) in METAGL, we used HGT (Hu et al., 2020) with 2 layers and 4 heads per layer. HGT is included in the Deep Graph Library (DGL), which is licensed under the Apache License 2.0. For training, we used the Adam optimizer with a learning rate of 0.00075 and a weight decay of 0.0001. For GLE approaches, we used the default hyperparameter settings specified in the corresponding library and GitHub repository.
Link Prediction Model Training. Given a graph G, we first hold out 10% of the edges in graph G to be used for evaluation, and train GL models with the resulting subgraph for link prediction. The training of GL models was performed by sampling 20 negative edges per positive edge, computing the link score by applying a dot product between the two corresponding node embeddings, followed by a sigmoid function, and then optimizing a binary cross entropy loss for the positive and negative edge scores. For evaluation, we randomly sampled the same number of negative edges as the positive edges, and evaluated the predicted link scores in terms of mean average precision.
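The scoring and loss described above can be summarized by the following PyTorch sketch; node embeddings z and the sampled edge index pairs are assumed given, and binary_cross_entropy_with_logits fuses the sigmoid and BCE for numerical stability.

```python
import torch
import torch.nn.functional as F

def link_pred_loss(z, pos_edges, neg_edges):
    """z: (num_nodes, dim) embeddings; *_edges: (num_edges, 2) index pairs."""
    pos_logits = (z[pos_edges[:, 0]] * z[pos_edges[:, 1]]).sum(dim=1)  # dot products
    neg_logits = (z[neg_edges[:, 0]] * z[neg_edges[:, 1]]).sum(dim=1)
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits), torch.zeros_like(neg_logits)])
    return F.binary_cross_entropy_with_logits(logits, labels)
```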
F.2 EVALUATION OF MODEL SELECTION ACCURACY (SECTION 5.2)
In our evaluation involving a partially observed performance matrix, we extended baseline metalearners as follows so they can operate in the presence of missing entries in the performance matrix.
• Global Best-AvgPerf averaged observed performance entries alone, ignoring missing values. If an average performance cannot be computed for some model (which is the case when a model has no observed performance entries for all graphs), we use the mean of averaged performance for other models in its place.
1https://pytorch.org/ 2https://www.dgl.ai/ 3https://github.com/yzhao062/MetaOD 4https://github.com/DeepGraphLearning/GraphLoG 5https://networkx.org/ 6https://numpy.org/
• Global Best-AvgRank computed the model rankings for each graph in percentile, as the number of observed model performances may be different for different graphs, and averaged the rank percentiles for observed cases only, as in Global Best-AvgPerf. • ISAC handled the sparse performance matrix in the same way as Global Best-AvgPerf, except that only a subset of graphs, which is similar to the test graph, is considered in ISAC. • ARGOSMART (AS) computed the mean of observed performance entries of the 1NN graph, and used this quantity in place of missing values. • ALORS factorized the sparse performance matrix using a missing value-aware non-negative matrix factorization algorithm. • Supervised Sur. (S2), NCF, and MetaOD performed optimization by only considering observed performance values in the loss function, while skipping over missing entries. Early stopping based on the validation performance was also done with respect to the observed performances alone.
F.3 EVALUATION OF META-GRAPH FEATURES (SECTION 5.3)
Except for WaveletCharacteristic and GraphLoG, we applied graph-level embedding (GLE) approaches to all graphs in our testbed, and meta-learners were trained and evaluated using the representations of all graphs via 5-fold cross validation. Since WaveletCharacteristic and GraphLoG could not scale to some of the largest graphs in the testbed (e.g., due to out-of-memory errors), we excluded the largest 9% and 27% of graphs for GraphLoG and WaveletCharacteristic, respectively, and evaluated meta-learners using the resulting subset of graphs. Note that, in these cases, METAGL was also trained and evaluated using the same subset of graphs.
G ADDITIONAL DETAILS AND ANALYSIS OF METAGL
G.1 METAGL ALGORITHM AND META-GRAPH FEATURES
Algorithm 1 provides the detailed steps of METAGL, for both offline meta-training (top) and online model selection (bottom). In METAGL, we log-transform the meta-graph features and append the transformed values to the original features, as doing so helps with model selection. We use the notation m in Algorithm 1, as well as in the text, to refer to these meta-graph features used by METAGL.
G.2 META-LEARNING OBJECTIVE FOR SPARSE PERFORMANCE MATRIX
Given a sparse performance matrix P, meta-training of METAGL can be performed by modifying the top-1 probability (Equation (3)) and the loss function (Equation (4)), such that the missing entries in P are ignored as follows:
p_top1^{P̂_i}(j) = I_{p_ij}(π(p̂_ij)) / Σ_{k=1}^m I_{p_ik}(π(p̂_ik)) = I_{p_ij}(exp(p̂_ij)) / Σ_{k=1}^m I_{p_ik}(exp(p̂_ik)),   (7)

L(P, P̂) = − Σ_{i=1}^n Σ_{j=1}^m I_{p_ij}( p_top1^{P_i}(j) · log( p_top1^{P̂_i}(j) ) ),   (8)

where I_{p_ij}(·) is defined as

I_{p_ij}(x) = x if p_ij is observed in the performance matrix P, and I_{p_ij}(x) = 0 if p_ij is missing from P.   (9)
Thus the supervision signal for each graph comes only from the model performances observed on it. If an entire row in P is empty, the loss terms for the corresponding graph are dropped from Equation (8).
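A minimal PyTorch sketch of this masked objective follows, assuming dense tensors P and P_hat and a binary mask marking the observed entries of P (tensor names are illustrative).

```python
import torch

def masked_top1_loss(P, P_hat, mask):
    """mask[i, j] = 1 if p_ij is observed in P, else 0 (Eqns. 7-8)."""
    def masked_top1_prob(scores):
        e = torch.exp(scores) * mask                      # zero out missing entries
        return e / e.sum(dim=1, keepdim=True).clamp_min(1e-12)
    p_true, p_pred = masked_top1_prob(P), masked_top1_prob(P_hat)
    # cross-entropy between the two top-1 distributions, observed entries only
    ce = -(p_true * torch.log(p_pred.clamp_min(1e-12))) * mask
    return ce.sum()  # rows with no observed entries contribute zero loss
```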
G.3 G-M NETWORK
Figure 7 illustrates the G-M network (graph-model network) (Section 4.4), which is a multi-relational bipartite network between graph nodes and model nodes. In the G-M network, model and graph nodes are connected via five types of edges (e.g., P-m2m, P-g2m, M-g2g), which is shown as edges with distinct line styles and colors. Note that while Figure 7 shows only one edge per edge type, in the G-M network, each node is connected to its top-k similar nodes.
G.4 ATTENTIVE GRAPH NEURAL NETWORKS AND HETEROGENEOUS GRAPH TRANSFORMER
The embedding function f(·) in METAGL (Section 4.4) produces embeddings of models and graphs via weighted neighborhood aggregation over the multi-relational G-M network. Specifically, we define f(·) using the Heterogeneous Graph Transformer (HGT) (Hu et al., 2020), which is a relation-aware graph neural network (GNN) that performs attentive neighborhood aggregation over the G-M network. Let z_t^ℓ denote node t's embedding produced by the ℓ-th HGT layer, which becomes the input of the (ℓ+1)-th layer.
Algorithm 1: METAGL: Offline Meta-Training (Top) and Online Model Selection (Bottom)
Input: Meta-train graph database G, model set M, embedding dimension k
Output: Meta-learner for model selection
/* (Offline) Meta-Learner Training (Sec. 4.1) */
1  Train & evaluate models in M on graphs in G to get performance matrix P
2  Extract meta-graph features M for each graph Gi in G (Sec. 4.3)
3  Factorize P to obtain latent graph factors U and model factors V, i.e., P ≈ UVᵀ
4  Learn an estimator φ(·) such that φ(m) = Ûi ≈ Ui
5  Create meta-train graph Gtrain (Sec. 4.4)
6  while not converged
7      for i = 1, . . . , n do
8          Get embeddings f(W[m; φ(m)]) of train graph Gi on Gtrain
9          for j = 1, . . . , m do
10             Get embeddings f(Vj) of each model Mj on Gtrain
11             Estimate p̂ij = ⟨f(W[m; φ(m)]), f(Vj)⟩ (Eqn. 2)
12         end
13     end
14     Compute meta-training loss L(P, P̂) (Eqn. 4) and optimize parameters
15 end
Input: new graph Gtest
Output: selected model M* for Gtest
/* (Online) Model Selection (Sec. 4.2) */
16 Extract meta-graph features mtest = ψ(Gtest)
17 Estimate latent factor Ûtest = φ(mtest) for test graph Gtest
18 Create the test G-M network Gtest by extending Gtrain with new edges between the test graph node and existing nodes in Gtrain (Sec. 4.4)
19 Get embeddings f(W[mtest; Ûtest]) of the test graph on Gtest
20 Get embeddings f(Vj) of each model Mj on Gtest
21 Return the best model M* ← argmax_{Mj ∈ M} ⟨f(W[mtest; Ûtest]), f(Vj)⟩
Given L total layers, the final embedding h_t of node t is the output of the last layer, i.e., z_t^L. In general, the node embeddings z_t^ℓ produced by the ℓ-th layer of an attention-based GNN, such as HGT, can be expressed as:

z_t^ℓ = Aggregate_{∀s ∈ N(t), ∀e ∈ E(s,t)} ( Attention(s, t) · Message(s) )   (10)
where s and t are source and target nodes, respectively; N(t) denotes all the source nodes of node t; and E(s, t) denotes all edges from node s to t. There are three basic operators: Attention, which assigns different weights to neighbors based on the estimated importance of node s with respect to target node t; Message, which extracts the message vector from the source node s; and Aggregate, which aggregates the neighborhood messages by the attention weight.
HGT effectively processes multi-relational graphs, such as the proposed G-M network, by designing all three of the above operators to be aware of node types and edge types, e.g., by employing distinct sets of projection weights for each type of node and edge, and utilizing node- and edge-type dependent attention mechanisms. We refer the reader to (Hu et al., 2020) for the details of how HGT defines the above three operators. In summary, METAGL computes the embedding function f(x_t) by providing node t's input features x_t as the initial embedding (i.e., z_t^0) to HGT, and returning z_t^L, the output from the last layer, which is computed over the G-M network via relation-aware attentive neighborhood aggregation.
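The following single-head sketch illustrates the Attention/Message/Aggregate pattern of Eq. (10) in PyTorch; HGT additionally makes each operator depend on node and edge types, which is omitted here for brevity (the per-node loop is for clarity, not efficiency).

```python
import torch
import torch.nn.functional as F

def attentive_aggregate(z, edges, Wq, Wk, Wv):
    """z: (N, d) embeddings; edges: (E, 2) LongTensor of (source, target) pairs."""
    src, dst = edges[:, 0], edges[:, 1]
    att_logits = ((z[dst] @ Wq) * (z[src] @ Wk)).sum(dim=1)  # Attention(s, t) per edge
    msgs = z[src] @ Wv                                       # Message(s) per edge
    out = torch.zeros(z.size(0), Wv.size(1))
    for t in dst.unique().tolist():                # softmax-normalize per target node
        idx = (dst == t).nonzero(as_tuple=True)[0]
        w = F.softmax(att_logits[idx], dim=0)
        out[t] = (w.unsqueeze(1) * msgs[idx]).sum(dim=0)     # Aggregate over neighbors
    return out
```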
G.5 TIME COMPLEXITY ANALYSIS
We now state the time complexity of our approach for inferring the best model given a new unseen graph G′ = (V′, E′). Let G = (V, E) be the G-M network, which is comprised of model nodes and graph nodes, and induced by c-NN (nearest neighbor) search (Section 4.4 and Appendix G.3). Let k denote the embedding size, and h be the number of attention heads in HGT (Appendix G.4). The time complexity of METAGL is

O(q|E′|∆ + |V|ck²/h)   (11)

where q is the number of meta-graph feature extractors, and |E′| is the number of edges in the new unseen graph G′. Note that both q and ∆ are small and thus negligible. Hence, METAGL is fast and efficient.
Meta-Graph Feature Extraction: The first term of the above time complexity includes the time required to estimate the frequency of all network motifs with {2, 3, 4}-nodes, which is O(|E′|∆) in the worst case where ∆ is a small constant representing the maximum sampled degree which can be set by the user. See Ahmed et al. (2016) for further details. The other structural meta-feature extractors such as PageRank all take at most O(|E′|) time. Furthermore, our approach is flexible and supports any set of meta-graph feature extractors. Thus, it is straightforward to see that we can achieve a slightly better time by restricting the set of such meta-graph feature extractors to those that can be computed in time that is linear in the number of edges of any arbitrary graph. Hence, in this case, the ∆ term is dropped and we have simply O(q|E′|+ |V|ck2/h). Also, note that feature extractors are independent of each other, and thus can be run in parallel.
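For instance, independent extractors can be dispatched to separate worker processes with the standard library alone; the extractor list below is an illustrative subset, and on some platforms this must run under an `if __name__ == "__main__":` guard.

```python
from concurrent.futures import ProcessPoolExecutor
import networkx as nx

EXTRACTORS = [nx.pagerank, nx.core_number, nx.triangles]  # illustrative subset

def extract_features_parallel(G):
    """Run each (independent) structural extractor on G in its own process."""
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(psi, G) for psi in EXTRACTORS]
        return [f.result() for f in futures]
```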
Embedding Models and Graphs: To augment the G-M network G given a new graph G′, we find a fixed number of nearest neighbors for G′, which takes O(|V|k) time. Then we embed models and graphs by applying HGT over the G-M network G = (V, E). Assuming an HGT with h attention heads, the time to employ HGT over G is O(|V|k² + |E|k²/h). More specifically, the time taken by the Attention(·) and Message(·) functions is O(|V|k² + |E|k²/h), where O(|V|k²) is for feature transformation by h heads for all nodes, and O(|E|k²/h) is for message transformation and attention computation over each edge. Similarly, the Aggregate(·) step takes O(|V|k² + |E|k) time. Assuming k²/h > k, the time for Aggregate(·) can be absorbed into the time for the other steps. Further, given that the G-M network is induced by c-NN search, we have O(|E|) = O(c|V|), and thus the time complexity for embedding models and graphs is O(|V|ck²/h).
H ADDITIONAL RELATED WORK
H.1 MODEL SELECTION IN MACHINE LEARNING
In this section, we provide a further review of model selection in machine learning, which we group into two categories.
Evaluation-Based Model Selection: A majority of model selection methods belong to this category. Representative techniques used by these methods include grid search (Liashchynskyi & Liashchynskyi, 2019), random search (Bergstra & Bengio, 2012), early stopping-based (Golovin et al., 2017) and bandit-based (Li et al., 2017b) approaches, and Bayesian optimization (BO) (Snoek et al., 2012; Wu et al., 2019b; Falkner et al., 2018). Among them, BO methods are more efficient than grid or random search, requiring fewer evaluations of hyperparameter configurations (HCs), as they determine which HC to try next in a guided manner using prior experience from previous trials. Since these methods perform model training or evaluation multiple times using different HCs, they are much less efficient than the following group of methods.
Evaluation-Free Model Selection: Methods in this category do not require model evaluation for model selection. A simple approach (Abdulrahman et al., 2018) identifies the best model by considering the models’ rankings observed on prior datasets. Instead of finding the globally best model, ISAC (Kadioglu et al., 2010) and AS (Nikolić et al., 2013) select a model that performed well on similar datasets, where the dataset similarity is modeled in the meta-feature space via clustering (Kadioglu et al., 2010) or k-nearest neighbor search (Nikolić et al., 2013). A different group of methods perform optimization-based model selection, where the model performance is estimated by modeling the relation between meta-features and model performances. Supervised Surrogates (Xu et al., 2012) learns a surrogate model that maps meta-features to model performance. Recently, MetaOD (Zhao et al., 2021) outperformed all of these methods in selecting outlier detection algorithms. As a method in this category, the proposed METAGL builds upon MetaOD and extends it for an effective and automatic GL model selection. Most importantly, METAGL selects a graph learning model (e.g., link predictor) for the given graph, while MetaOD selects an outlier detection (OD) model for the given dataset (n-dimensional input features). To this end, METAGL designs meta features to capture the characteristics of graphs, while MetaOD designs meta features specialized for OD tasks. Also, they adopt different meta-training objectives: METAGL adapts the top-1 probability for meta-training, whereas MetaOD uses an NDCG-based objective. Furthermore, METAGL learns the embeddings of models and graphs by applying a heterogeneous GNN over the G-M network, which allows a flexible modeling of the relations between different models and graphs. By contrast, in MetaOD, the embeddings of models and datasets are optimized separately, where the relations between models and datasets are modeled rather indirectly via reconstructing the performance matrix. Note that all of these earlier methods, except the first simple approach, rely on meta-features, and they focus on non-graph datasets. By using the proposed meta-graph features, they could be applied to the graph learning model selection task.
H.2 COMPARISON WITH MODEL-AGNOSTIC META-LEARNING (MAML) (FINN ET AL., 2017)
MAML employs meta-learning to train a model’s initial parameters such that the model can perform well on a new task after the parameters have been updated via a few gradient steps using the data from the new task. In other words, given a model, MAML initializes one specific model’s parameters via meta-learning over multiple observed tasks, such that the meta-trained model can quickly adapt to a new task after learning from a small number of new data (i.e., few-shot learning). On the other hand, METAGL employs meta-learning to carry over the prior knowledge of multiple different models’ performance on different graphs for evaluation-free selection of graph learning algorithms.
Since MAML meta-trains a specific model for fast adaptation to a new dataset, it is not for selecting a model from a model set consisting of a wide variety of learning algorithms. Further, MAML fine-tunes a meta-trained model in a few-shot learning setup, whereas in our problem setup, no training and evaluation is to be done given a new graph dataset. Due to these reasons, MAML is not applicable to the proposed evaluation-free GL model selection problem (Problem 1).
I ADDITIONAL RESULTS
I.1 EFFECTIVENESS OF META-GRAPH FEATURES
Figure 8 shows how accurately meta-learners can perform model selection when they use the proposed meta-graph features (Section 4.3) vs. six state-of-the-art graph-level embedding (GLE) techniques, i.e., GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), GraphLoG (Xu et al., 2021), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). As discussed in Section 5.3, all meta-learners achieve a higher model selection accuracy nearly
consistently by using METAGL’s meta-graph features than when they use these GLE techniques, and METAGL outperforms all meta-learners, regardless of which features are used.
I.2 MODEL SELECTION TIME
Figure 9: Distribution of the time for METAGL to make a prediction / the time to create meta-graph features (in percentage). Red and green lines denote the median and mean, respectively.
Table 7 shows results comparing the runtime (in seconds) of naive model selection with the runtime of METAGL. Note that naive model selection requires training and evaluating each method in the model set, while in METAGL, the runtime involves only the time to generate meta-graph features (penultimate row) and predict the best model via a forward pass. | 1. What is the focus and contribution of the paper on graph learning?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and robustness?
3. What are the weaknesses of the paper regarding its experimental design and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for future improvements or extensions of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors provide a meta-learning framework for graph learning models that estimates model performance based on meta-graph features such as network structures. The proposed method optimizes the numerical performance values with a top-1 probability-based cross-entropy loss function. Experimental results show that the proposed method can be robustly optimal for various datasets.
Strengths And Weaknesses
Strengths
The proposed meta-learning estimates the performance and leverages the predictive power for more robustly good model learning.
The proposed method is a novel way of ensembling graph learning algorithms.
The small initial overhead results in more reliable performance.
Weaknesses
The model performance matrix P is a random matrix since the performance is not deterministic. It would be great to study the sensitivity to the variance of the performance matrix P, not just the observation rate.
It would be great if the proposed meta-learning were used not only for model selection, but also for insights into the performance characteristics of each graph learning algorithm -- such as identifying algorithms that work better for homophilous networks, or clustering algorithms of a similar flavor, and so on.
It would be interesting to compare the performance from MetaGL with the actual best performance. Since we have a fixed set of algorithms and perform training for each method, we are able to compare with the "optimal" selection.
Clarity, Quality, Novelty And Reproducibility
The manuscript is well-written so that readers can easily follow.
Thorough analysis has been made, so the impact of the proposed meta-learning can be well understood.
The idea of using graph structure as the meta-learner features is novel enough. It is worth being studied more in the research community.
Without the shared performance matrix, it may not be trivial to reproduce the results. For full reproducibility, all the hyperparameters chosen for the performance matrix need to be shared.
ICLR | Title
MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning
Abstract
Given a graph learning task, such as link prediction, on a new graph, how can we select the best method as well as its hyperparameters (collectively called a model) without having to train or evaluate any model on the new graph? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for evaluation-free graph learning model selection, called METAGL, which utilizes the prior performances of existing methods on various benchmark graph datasets to automatically select an effective model for the new graph, without any model training or evaluations. To quantify similarities across a wide variety of graphs, we introduce specialized meta-graph features that capture the structural characteristics of a graph. Then we design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner operating on this G-M network, which estimates the relevance of each model to different graphs. Extensive experiments show that using METAGL to select a model for the new graph greatly outperforms several existing meta-learning techniques tailored for graph learning model selection (up to 47% better), while being extremely fast at test time (∼1 sec).
1 INTRODUCTION
Given a graph learning (GL) task, such as link prediction, for a new graph dataset, how can we select the best method as well as its hyperparameters (HPs) (collectively called a model) without performing any model training or evaluations on the new graph? GL has received increasing attention recently (Zhang et al., 2022), achieving successes across various applications, e.g., recommendation and ranking (Fan et al., 2019; Park et al., 2020), traffic forecasting (Jiang & Luo, 2021), bioinformatics (Su et al., 2020), and question answering (Park et al., 2022). However, as GL methods continue to be developed, it becomes increasingly difficult to determine which model to use for the given graph.
Model selection (i.e., selecting a method and its configuration such as HPs) for graph learning has been largely ad hoc to date. A typical approach, called “no model selection”, is to simply apply popular methods to new graphs, often with the default HP values. However, it is well known that there is no universal learning algorithm that performs best on all problem instances (Wolpert & Macready, 1997), and such consistent model selection is often suboptimal. At the other extreme lies “naive model selection” (Fig. 1b), where all candidate models are trained on the new graph, evaluated on a hold-out validation graph, and then the best performing model for the new graph is selected. This approach is very costly as all candidate models are trained when a new graph arrives. Recent methods on neural architecture search (NAS) and hyperparameter optimization (HPO) of GL methods, which we review in Section 3, adopt smarter and more efficient strategies, such as Bayesian optimization (Snoek et al., 2012; Tu et al., 2019), which carefully choose a relatively small number of HP settings to evaluate. However, they still need to evaluate multiple configurations of each GL method on the new graph.
Evaluation-free model selection is yet another paradigm, which aims to tackle the limitations of the above approaches by attempting to simultaneously achieve the speed of no model selection and the accuracy of exhaustive model selection. Recently, a seminal work by Zhao et al. (2021) proposed a technique for outlier detection (OD) model selection, which carries over the observed performance of
OD methods on benchmark datasets for selecting OD methods. However, it does not address the unique challenges of GL model selection, and cannot be directly used to solve the problem. Inspired by this work, we systematically tackle the model selection problem for graph learning, especially link prediction. We choose link prediction as it is a key task for graph-structured data: it has many applications (e.g., recommendation, knowledge graph reasoning, and entity resolution), and several inference and learning tasks can be cast as a link prediction problem (e.g., Fadaee & Haeri (2019)). In this work, we develop METAGL, the first meta-learning framework for evaluation-free selection of graph learning models, which finds an effective GL model to employ for a new graph without training or evaluating any GL model on the new graph, as Figure 1a illustrates. METAGL satisfies all of the desirable features for GL model selection listed in Table 1, while no existing paradigm satisfies all of them.
The high-level idea of meta-learning based model selection is to estimate a candidate model’s performance on the new graph based on its observed performances on similar graphs. Our meta-learning problem for graph data presents a unique challenge of how to model graph similarities, and what characteristic features (i.e., meta-features) of a graph to consider. Note that this step is often not needed for traditional meta-learning problems on non-graph data, as features for non-graph objects (e.g., location, age of users) are often readily available. Also, the high complexity and irregularity of graphs (e.g., different number of nodes and edges, and widely varying connectivity patterns among different graphs) makes the task even more challenging. To handle these challenges, we design specialized meta-graph features that can characterize major structural properties of real-world graphs.
Then, to estimate the performance of a candidate model on a given graph, METAGL learns to embed models and graphs in a shared latent space such that their embeddings reflect the graph-to-model affinity. Specifically, we design a multi-relational graph called the G-M network, which captures multiple types of relations among models and graphs, and develop a meta-learner operating on this G-M network, based on an attentive graph neural network that is optimized to leverage meta-graph features and prior model performance into producing model and graph embeddings that can be effectively used to estimate the best performing model for the new graph. METAGL greatly outperforms existing meta-learners in GL model selection (Fig. 1c). In sum, the key contributions of this work are as follows. • Problem Formulation. We formulate the problem of selecting effective GL models in an
evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph). To the best of our knowledge, we are the first to study this important problem. • Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection. For meta-learning on various graphs, we design meta-graph features that quantify graph similarities by capturing the structural characteristics of a graph. • Effectiveness. Using METAGL for GL model selection greatly outperforms existing meta-learning techniques (up to 47% better, Fig. 1c), with negligible runtime overhead at test time (∼1 sec).
Benchmark Data/Code: To facilitate further research on this important new problem, we release code and data at https://github.com/NamyongPark/MetaGL, including performances of 400+ models on 300+ graphs, and 300+ meta-graph features.
Table 1: The proposed METAGL wins on features in comparison to existing graph learning (GL) model selection (MS) paradigms, all of which fail to satisfy some of the desirable properties for GL MS.
Desiderata for Graph Learning (GL) MS | No model selection | Naive model selection (Fig. 1b) | Graph HPO/NAS (e.g., AutoNE, AGNN; see Sec. 3 and Fig. 1b) | METAGL (Ours, Fig. 1a)
Evaluation-free GL model selection | ✓ | – | – | ✓
Capable of MS from among multiple GL algorithms | – | ✓ | – | ✓
Capitalizing on graph similarities for GL MS | – | – | – | ✓
Estimating model performance based on past observations | – | – | ✓ | ✓
2 PROBLEM FORMULATION
Given a new unseen graph, our goal is to select the best model from a heterogeneous set of graph learning models, without requiring any model evaluations and user intervention. In comparison to traditional meta-learning problems where a model denotes a single method and its hyperparameters, a model in the graph meta-learning problem is more broadly defined to be
model M = {(graph embedding method, hyperparameters), (predictor, hyperparameters)}, (1)
as graph learning tasks usually involve two steps: (1) embedding the graph using a graph representation learning method, and (2) providing node embeddings to the predictor of a downstream task like link prediction. Both steps require learning a method with specific hyperparameters. Thus, there can be many models with the same embedding method and predictor, which have different hyperparameters. Also, the set M of models may contain many different graph representation learning methods (e.g., node2vec (Grover & Leskovec, 2016), GraphSAGE (Hamilton et al., 2017), and DeepGL (Rossi et al., 2020), to name a few), as well as multiple task-specific predictors, making M heterogeneous. Given a training meta-corpus of n graphs G = {G_i}_{i=1}^n and m models M = {M_j}_{j=1}^m for GL tasks, we derive a performance matrix P ∈ R^{n×m}, where P_ij is the performance (e.g., accuracy) of model j on graph i. Our meta-learning problem for evaluation-free GL model selection is defined as follows.
Problem 1 (Evaluation-Free Graph Learning Model Selection).
Given
• an unseen test graph Gtest ∉ G, and
• a potentially sparse performance matrix P ∈ R^{n×m} of m heterogeneous graph learning models M = {M1, . . . , Mm} on n graphs G = {G1, . . . , Gn},
Select
• the best model M* ∈ M to employ on Gtest without evaluating any model in M on Gtest.
3 RELATED WORK
A majority of works on GL focus on developing new algorithms for certain graph tasks and applications (Xia et al., 2021; Zhang et al., 2022). In comparison, there exist relatively few recent works that address the GL model selection problem (Zhang et al., 2021). They mainly focus on neural architecture search (NAS) and hyperparameter optimization (HPO) for GL models, especially graph neural networks (GNNs). Toward efficient and effective NAS and HPO in GL, they investigated several approaches, such as Bayesian optimization (AutoNE by Tu et al. (2019)), reinforcement learning (GraphNAS by Gao et al. (2020), AGNN by Zhou et al. (2019), Policy-GNN by Lai et al. (2020)), hypernets (ST-GCN by Zhu et al. (2021)), and evolutionary algorithms (Bu et al. (2021)), as well as techniques like subgraph sampling (AutoNE by Tu et al. (2019)), graph coarsening (JITuNE by Guo et al. (2021)), and hierarchical evaluation (HESGA by Yuan et al. (2021)). However, as summarized in Table 1, these methods cannot perform evaluation-free GL model selection (Problem 1), since they need to evaluate multiple configurations of each GL method on the new graph for model selection. Further, they are limited to finding the best configuration of a single algorithm, and thus cannot select a model from a heterogeneous model set M with various GL models, as Problem 1 requests. An earlier work on GNN design space (You et al., 2020) is somewhat relevant, as it proposes an approach to quantify graph similarities, which can be used to find an observed graph similar to the test graph, and select a model that performed best on it. However, their approach evaluates a set of anchor models on all graphs, and computes similarities between two graphs based on anchor models’ performance on them. As it needs to run anchor models on the new graph, it is inapplicable to Problem 1. For the first time, METAGL enables evaluation-free model selection from a heterogeneous set of GL models.
4 FRAMEWORK
In this section, we present METAGL, our meta-learning based framework that solves Problem 1 by leveraging prior performances of existing methods. METAGL consists of two phases: (1) offline meta-training phase (Section 4.1) that trains a meta-learner using observed graphs G and model performances P, and (2) online model prediction phase (Section 4.2), which selects the best model for the new graph. A summary of notations used in this work is provided in Table 3 in the Appendix.
4.1 OFFLINE META-TRAINING
Meta-learning leverages prior experience from related learning tasks to do a better job on the new task. When the new task is similar to some historical learning tasks, then the knowledge from those similar tasks can be transferred and applied to the new task. Thus effectively capturing the similarity between an input task and observed ones is fundamentally important for successful meta-learning. In meta-learning, the similarity between learning tasks is modeled using meta-features, i.e., characteristic features of the learning task that can be used to quantify the task similarity.
Meta-Graph Features. Given the graph learning model selection problem (where new graphs correspond to new learning tasks), METAGL captures graph similarity by extracting meta-graph features that reflect the structural characteristics of a graph. Notably, since graphs have irregular structure, with different numbers of nodes and edges, METAGL designs meta-graph features to be of the same size for any arbitrary graph, so that any two graphs can be easily compared via their meta-graph features. We use the symbol m ∈ R^d to denote the fixed-size meta-graph feature vector of graph G. We defer the details of how METAGL computes m to Section 4.3.
Model Performance Estimation. To estimate how well a model would perform on a given graph, METAGL represents models and graphs in a latent k-dimensional space, and captures the graph-to-model affinity using the dot product similarity between the two representations h_Gi and h_Mj of the i-th graph G_i and the j-th model M_j, respectively, such that p_ij ≈ ⟨h_Gi, h_Mj⟩, where p_ij is the performance of model M_j on graph G_i. Then, to obtain the latent representation h, we design a learnable function f(·) that takes in relevant information on models and graphs from the meta-graph features m and the prior knowledge (i.e., model performances P and observed graphs G). Below in this section, we focus on the inputs to the function f(·), and defer the details of f(·) to Section 4.4. We first factorize the performance matrix P into latent graph factors U ∈ R^{n×k} and model factors V ∈ R^{m×k}, and take the model factor V_j ∈ R^k (the j-th row of V) as the input representation of model M_j. Then, METAGL obtains the latent embedding h_Mj of model M_j by h_Mj = f(V_j). For graphs, more information is available, since we have both the meta-graph features m and the meta-train graph factors U. However, while we have the same number of models during training and inference, we observe new graphs during inference, and thus cannot obtain the graph factor U_test for the test graph as we do for the train graphs, since matrix factorization (MF) is transductive by construction (i.e., existing models' performance on the test graph would be needed to get latent factors for the test graph directly via MF). To handle this issue, we learn an estimator φ : R^d → R^k that maps the meta-graph features m into the latent factors of meta-train graphs obtained via MF above (i.e., for graph G_i with meta-graph features m, φ(m) = Û_i ≈ U_i), and use this estimated graph factor. We then combine both inputs ([m; φ(m)] ∈ R^{d+k}) and apply a linear transformation to make the input representation of graph G_i the same size as that of model M_j, obtaining the latent embedding of graph G_i to be h_Gi = f(W[m; φ(m)]), where W ∈ R^{k×(d+k)} is a weight matrix. Thus, in METAGL, the performance p_ij of model M_j on graph G_i with meta-graph features m is estimated as
pij ≈ p̂ij = 〈f(W[m;φ(m)]), f(Vj)〉. (2)
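A simplified sketch of this pipeline (factorization plus the estimator φ) follows, using truncated SVD and a random-forest regressor as stand-in components and omitting the embedding function f(·) and the weight W for brevity; these component choices are assumptions for illustration, not the exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_estimators(P, meta_feats, k=32):
    """Factorize P ~= U V^T and fit phi mapping meta-graph features to graph factors."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    U, V = U[:, :k] * s[:k], Vt[:k].T
    phi = RandomForestRegressor().fit(meta_feats, U)   # phi(m) ~= U_i
    return phi, V

def estimate_scores(phi, V, m_new):
    """Estimate each model's performance on a graph with meta-graph features m_new."""
    U_new = phi.predict(m_new.reshape(1, -1))          # estimated latent graph factor
    return (U_new @ V.T).ravel()                       # one score per model
```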
Meta-Learning Objective. For tasks where the goal is to estimate real values, such as accuracy, the mean squared error (MSE) is a typical choice for the loss function. While MSE is easy to optimize and
effective for regression, it does not directly take the ranking quality into account. By contrast, in our problem setup, accurately ranking models for each graph dataset is more important than estimating the performance itself, which makes MSE a suboptimal choice. In particular, Problem 1 focuses on finding the model with the best performance on the given graph. Therefore, we consider rank-based learning objectives, and among them, we adapt the top-1 probability to the proposed Problem 1 as follows. Let P̂_i ∈ R^m be the i-th row of P̂ (i.e., the estimated performance of all m models on graph G_i). Given P̂_i, the top-1 probability p_top1^{P̂_i}(j) of the j-th model M_j in the model set M is defined to be
p_top1^{P̂_i}(j) = π(p̂_ij) / Σ_{k=1}^m π(p̂_ik) = exp(p̂_ij) / Σ_{k=1}^m exp(p̂_ik)   (3)
where π(·) is an increasing, strictly positive function, which we define to be an exponential function.
Theorem 1 (Cao et al. (2007)). Given the performance P̂_i of all models on graph G_i, p_top1^{P̂_i}(j) represents the probability of model M_j being ranked at the top of the list (i.e., of all models in M). The top-1 probabilities p_top1^{P̂_i}(j) for all j = 1, . . . , m form a probability distribution over the m models.
Based on Theorem 1, we obtain two probability distributions by applying the top-1 probability to the true performance Pi and the estimated performance P̂i of the m models, and optimize METAGL such that the distance between the two distributions is minimized. Using the cross entropy as the distance metric, we obtain the following loss over all n meta-train graphs G:
L(P, P̂) = − Σ_{i=1}^{n} Σ_{j=1}^{m} p_top1^{Pi}(j) log( p_top1^{P̂i}(j) )    (4)
When P is sparse, meta-training can be performed via slightly modified Eqs. (3) and (4) in App. G.2.
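For illustration, a minimal PyTorch sketch of this listwise objective for the dense case follows. Since the top-1 probability with π = exp is exactly a row-wise softmax, Eqs. (3)-(4) reduce to a softmax cross-entropy between the two top-1 distributions; this is a sketch, not METAGL's actual implementation.

```python
import torch
import torch.nn.functional as F

def top1_listwise_loss(P_true, P_hat):
    """Cross entropy between top-1 probability distributions, Eqs. (3)-(4).

    P_true, P_hat: (n_graphs, n_models) true and estimated performance matrices.
    With pi = exp, the top-1 probability is exactly a row-wise softmax.
    """
    p_true = F.softmax(P_true, dim=1)        # true top-1 distribution per graph
    log_p_hat = F.log_softmax(P_hat, dim=1)  # log of estimated top-1 distribution
    return -(p_true * log_p_hat).sum()       # summed over all graphs and models
```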
4.2 ONLINE MODEL PREDICTION
In the meta-training phase, METAGL learns the estimators f(·) and φ(·), as well as the weight matrix W and the latent model factors V. Given a new graph Gtest, METAGL first computes the meta-graph features mtest ∈ R^d, as we discuss in Section 4.3. Then mtest is mapped by the estimator φ to obtain the (approximate) latent graph factors Ûtest = φ(mtest) ∈ R^k. Recall that the model factors V learned in the meta-training stage can be directly used for model prediction. The performance of model Mj on the test graph Gtest can then be estimated by applying Equation (2) with mtest and φ(mtest). Finally, the model with the highest estimated performance is selected by METAGL as the best model M*, i.e.,
M* ← argmax_{Mj ∈ M} 〈 f(W[mtest; φ(mtest)]), f(Vj) 〉    (5)
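Concretely, this online step can be sketched as below, reusing the scoring step from Section 4.1. Here extract_meta_features and models are illustrative placeholders, and f, φ, W, and V are assumed to have been meta-trained.

```python
import torch

def select_best_model(G_test, extract_meta_features, phi, W, f, V, models):
    """Online model selection, Eq. (5): no training or evaluation on G_test."""
    m_test = extract_meta_features(G_test)          # meta-graph features (Sec. 4.3)
    u_hat = phi(m_test)                             # approximate latent graph factor
    h_graph = f(W @ torch.cat([m_test, u_hat]))     # test graph embedding
    scores = torch.stack([f(v) for v in V]) @ h_graph
    return models[int(scores.argmax())]             # best model M*
```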
Note that model selection using Equation (5) depends only on the meta-graph features mtest of the test graph and on the pretrained estimators and latent factors that METAGL learned in the meta-training phase. As no model training or evaluation is involved, model prediction by METAGL is much faster than training and evaluating different models multiple times, as our experiments show in Section 5.4. Further, the model prediction process is fully automatic, since it does not require users to choose or fine-tune any values at test time. Figure 2 shows an overview of the model prediction process, and Algorithm 1 in the Appendix lists the steps for offline meta-training and online model prediction.

4.3 STRUCTURAL META-GRAPH FEATURES
Figure 3: Meta-graph features in METAGL are derived in two steps. See Section 4.3 for details.
Meta-graph features are a crucial component of our meta-learning approach METAGL, since they capture important structural characteristics of an arbitrary graph. Meta-graph features enable METAGL to quantify graph similarities, and to utilize prior experience with observed graphs for GL model selection. It is important that a sufficient and representative set of meta-graph features be used to capture the important structural properties of graphs from a wide variety of domains, including biological, technological, information, and social networks.
In this work, we cannot leverage the simple statistical meta-features commonly used by previous work on meta-learning-based model selection, as they cannot be applied directly to irregular and complex graph data. To address this problem, we introduce the notion of meta-graph features and develop a general framework for computing them on any arbitrary graph.
Meta-graph features in METAGL are derived in two steps, as shown in Figure 3. First, we apply a set of structural meta-feature extractors Ψ = {ψ1, . . . , ψq} to the input graph G, obtaining Ψ(G) = {ψ1(G), . . . , ψq(G)}. Applying ψ ∈ Ψ to G yields a vector or a distribution of values for the nodes (or edges) in the graph, such as the degree distribution or PageRank scores. That is, in Figure 3, ψ1 can be the degree distribution, ψ2 can be the PageRank scores of all nodes, and so on. Specifically, we use both local and global structural feature extractors. To capture the local structural properties around a node or an edge, we compute the node degree, the number of wedges (i.e., paths of length 2) and triangles centered at each node, and the frequency of triangles for every edge. To capture global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. Appendix D summarizes the meta-feature extractors used in this work.
Let ψ denote a local structural extractor for nodes. Given a graph Gi = (Vi, Ei) and ψ, we obtain a |Vi|-dimensional node vector ψ(Gi). Since any two graphs Gi and Gj are likely to have different numbers of nodes and edges, the resulting structural feature matrices ψ(Gi) and ψ(Gj) are also likely to be of different sizes, as the rows of these matrices correspond to the nodes or edges of the corresponding graph. Thus, in general, these structural feature-based representations of the graphs cannot be used directly to derive similarity between graphs.
To address this issue, we apply the set Σ of global statistical meta-graph feature extractors to every ψi(G), ∀i = 1, . . . , q, which summarizes each ψi(G) into a fixed-size vector. Specifically, Σ(ψi(G)) applies each of the statistical functions in Σ (e.g., mean, kurtosis) to the distribution ψi(G), computing a real number that summarizes the given feature distribution ψi(G) from a different statistical point of view and producing a vector Σ(ψi(G)) ∈ R^{|Σ|}. Then we obtain the meta-graph feature vector m of graph G by concatenating the resulting vectors:
m = [Σ(ψ1(G)) ⋯ Σ(ψq(G))] ∈ R^d.    (6)

Table 5 in Appendix D lists the global statistical functions Σ used in this work to derive meta-graph features. Further, in addition to the node- and edge-level structural features, we also compute global graph statistics (scalars directly derived from the graph, e.g., density and degree assortativity coefficient), and append them to m, i.e., to the node- and edge-level structural features obtained above.
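As a concrete illustration of this two-step procedure, the sketch below computes a few of the structural distributions with NetworkX and summarizes each with a handful of global statistics. The extractors and statistics shown are only a subset of those in Appendix D, the function name is illustrative, and nx.core_number additionally assumes the graph has no self-loops.

```python
import networkx as nx
import numpy as np
from scipy import stats

def meta_graph_features(G):
    """Two-step meta-graph features: psi extractors, then global statistics Sigma."""
    psi = [
        np.array([d for _, d in G.degree()], dtype=float),       # degree distribution
        np.array(list(nx.pagerank(G).values())),                 # PageRank scores
        np.array(list(nx.core_number(G).values()), dtype=float), # k-core numbers
    ]
    sigma = [np.mean, np.std, np.min, np.max, stats.skew, stats.kurtosis]
    m = np.concatenate([[fn(x) for fn in sigma] for x in psi])   # Eq. (6), fixed size
    # append graph-level scalars, e.g., density and degree assortativity
    scalars = [nx.density(G), nx.degree_assortativity_coefficient(G)]
    return np.concatenate([m, scalars])
```

Whatever the size of the input graph, the output vector has the same fixed dimension, which is what enables graph-to-graph similarity computations below.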
Most importantly, given any arbitrary graph G′, the proposed approach is guaranteed to output a fixed d-dimensional meta-graph feature vector characterizing G′. Hence, the structural similarity of any two graphs G and G′ can be quantified using a similarity function over m and m′, respectively.
4.4 EMBEDDING MODELS AND GRAPHS
Given an informative context (i.e., input features) of models and graphs that METAGL learns from model performances P and meta-graph features M (Sections 4.1 and 4.3), how can we use it to effectively learn model and graph embeddings that capture graph-to-model affinity? We note that similar entities can make each other’s context more accurate and informative. For instance, in our problem setup, similar models tend to have similar performance distributions over graphs, and likewise similar graphs are likely to exhibit similar affinity to different models. With this consideration, we model the task as a graph representation learning problem, where we construct a graph called G-M network that connects similar graphs and models, and learn the graph and model embeddings over it.
G-M Network. We define the G-M network to be a multi-relational graph with two types of nodes (i.e., models and graphs), where edges connect similar model nodes and graph nodes. To measure similarity among graphs and models, we utilize the latent graph and model factors (U and V, respectively) obtained by factorizing P, as well as the meta-graph features M. More precisely, we use the estimated graph factor Û instead of U to let the same graph construction process work for new graphs. Note that this gives us two types of features for graph nodes (i.e., Û and M), and one type of features for model nodes (i.e., V). To let different features influence the embedding step differently as needed, we connect graph nodes and model nodes using five types of edges: M-g2g, P-g2g, P-m2m, P-g2m, and P-m2g, where g and m denote the type of nodes that an edge connects (graph and model, respectively), and M and P denote that the edge is based on meta-graph features and model performance, respectively. For example, M-g2g and P-g2g edges connect two graph nodes that are similar in terms of M and Û, respectively. Then for each edge type, we construct a k-NN graph by connecting nodes to their top-k similar nodes, where node-to-node similarity is defined as the cosine similarity between the
corresponding node features. For instance, for P-g2m edge type, graph nodes and model nodes are linked based on the similarity between Û and V. Fig. 7 in the Appendix illustrates the G-M network.
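For illustration, one edge type of the G-M network can be built with a top-k cosine-similarity search as sketched below (a simplified NumPy version; for same-type edges such as M-g2g, self-similarity should additionally be masked out, and the function name is illustrative):

```python
import numpy as np

def knn_edges(X_src, X_dst, k):
    """Top-k cosine-similarity edges for one G-M network edge type.

    E.g., for P-g2m edges, X_src holds the estimated graph factors U_hat and
    X_dst holds the model factors V. Returns (src_idx, dst_idx) pairs.
    """
    src = X_src / (np.linalg.norm(X_src, axis=1, keepdims=True) + 1e-12)
    dst = X_dst / (np.linalg.norm(X_dst, axis=1, keepdims=True) + 1e-12)
    sim = src @ dst.T                        # pairwise cosine similarities
    nbrs = np.argsort(-sim, axis=1)[:, :k]   # top-k most similar nodes per row
    return [(i, int(j)) for i in range(nbrs.shape[0]) for j in nbrs[i]]
```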
Learning Over G-M Network. Given the G-M network Gtrain with meta-train graphs and models, graph neural networks (GNNs) provide an effective framework to embed models and graphs via (weighted) neighborhood aggregation over Gtrain. However, since the structure of the G-M network is induced by simple k-NN search, some neighbors may not provide as much information as others, or may even provide noisy information. We found it helpful to perform attentive neighborhood aggregation, so that more informative neighbors can be given more weight. To this end, we use attentive GNNs designed for multi-relational networks, specifically HGT (Hu et al., 2020). The embedding function f(·) in Section 4.1 is then defined to be f(x) = HGT(x, Gtrain) during training, which transforms the input node feature x into an embedding via attentive neighborhood aggregation over Gtrain. Further details of HGT are provided in Appendix G.4.

Inference Over G-M Network. For inference at test time, we extend Gtrain into a larger G-M network Gtest that additionally contains test graph nodes, along with edges between the test graph nodes and the existing graph and model nodes in Gtrain. The extension is done in the same way as in the training phase, by finding the top-k similar nodes. The embedding at test time is then given by f(x) = HGT(x, Gtest).
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Models and Evaluation. A model in our problem (Eqn. 1) consists of two components. The first component performs graph representation learning (GRL), and the other component leverages the learned embeddings for a downstream task of interest. In this work, we focus on link prediction, which is a key task for graph-structured data, as we discuss in Section 1. We evaluate the performance of selecting a link prediction model for new graphs without any model evaluation. For the first component, we use 12 popular GRL methods, and for the second component (link scoring), we use a simple estimator that computes the cosine similarity between two node embeddings. This results in a model set M with 423 models. The full list of models is given in Table 4 in Appendix C. For evaluation, we create a testbed containing benchmark graphs, meta-graph features, and a performance matrix. We construct the performance matrix by evaluating each link prediction model in M on the graphs in the testbed, in terms of mean average precision. Then we evaluate METAGL and the baselines via 5-fold cross validation, where the benchmark graphs are split into meta-train Gtrain and meta-test Gtest graphs for each fold, and meta-learners trained over the meta-train graph data are evaluated using the meta-test graph datasets. Thus, the model performances over the meta-test graphs Gtest and the meta-graph features of Gtest were unseen during training and used only for testing.
Table 2: The proposed METAGL outperforms existing meta-learners, given a fully observed performance matrix. Best results are in bold, and second-best results are underlined. The "METAGL_Baseline" notation (e.g., METAGL_S2) indicates that the baseline meta-learner uses METAGL's meta-graph features.

Method | MRR | AUC | NDCG@1
Random Selection | 0.011 | 0.490 | 0.745
Simple meta-learners:
  Global Best-AvgPerf | 0.163 | 0.877 | 0.932
  Global Best-AvgRank | 0.103 | 0.867 | 0.930
  METAGL_AS | 0.222 | 0.905 | 0.947
  METAGL_ISAC | 0.202 | 0.887 | 0.939
Optimization-based meta-learners:
  METAGL_S2 | 0.170 | 0.910 | 0.945
  METAGL_ALORS | 0.190 | 0.897 | 0.950
  METAGL_NCF | 0.140 | 0.869 | 0.934
  METAGL_MetaOD | 0.075 | 0.599 | 0.889
METAGL | 0.259 | 0.941 | 0.962
Since model selection aims to accurately predict the best model for a new graph, we evaluate the top-1 prediction performance in terms of MRR (Mean Reciprocal Rank), AUC, and NDCG (Normalized Discounted Cumulative Gain). To apply MRR and AUC, we label models such that the top-1 model (i.e., the model with the best performance for the given graph) is labeled 1, while all others are labeled 0. For NDCG, we report NDCG@1, which evaluates the relevance of the top-1 predicted model. All metrics range from 0 to 1, with larger values indicating better performance.
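For illustration, these per-graph metrics can be computed as sketched below. The binary labeling follows the description above, while the graded relevance used for NDCG@1 is an assumption of this sketch (we take the true performance values as relevance scores); the function name is illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def top1_metrics(perf_true, perf_pred):
    """MRR, AUC, and NDCG@1 for one graph; only the true best model is labeled 1."""
    labels = np.zeros_like(perf_true)
    labels[np.argmax(perf_true)] = 1.0
    order = np.argsort(-perf_pred)                       # predicted model ranking
    rank = int(np.where(labels[order] == 1)[0][0]) + 1   # rank of the true best model
    mrr = 1.0 / rank
    auc = roc_auc_score(labels, perf_pred)
    ndcg1 = perf_true[order[0]] / perf_true.max()        # graded relevance at rank 1
    return mrr, auc, ndcg1
```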
Baselines. Since this is the first work on evaluation-free model selection in GL, there are no immediate baselines for comparison. Instead, we adapt baselines used for OD model selection (Zhao et al., 2021) and collaborative filtering to our problem setting. In Appendix A, we describe the baselines in detail. Baselines are grouped into two categories: (a) simple meta-learners select a model that performs generally well, either globally or locally: Global Best (GB)-AvgPerf, GB-AvgRank, ISAC (Kadioglu et al., 2010), and ARGOSMART (AS) (Nikolić et al., 2013); (b) optimization-based meta-learners learn to estimate model performance by modeling the relation between meta-features and model performances: Supervised Surrogates (S2) (Xu et al., 2012), ALORS (Misir & Sebag, 2017), NCF (He et al., 2017), and MetaOD (Zhao et al., 2021). We also include Random Selection (RS) as a baseline to see how these methods compare to random scoring.
Note that, except the simplest meta-learners RS and GB, no baselines can handle graph data, and thus they cannot estimate model performance on the new graph on their own. In that sense, they are not direct competitors of METAGL. We enable them to be used for GL model selection by providing the proposed meta-graph features. MetaOD, which was originally designed for OD model selection, is also given the same meta-graph features to perform GL model selection. We denote baselines either by combining METAGL with their names (e.g., METAGL_S2) to clearly show that they use METAGL’s meta-graph features, or using their name alone (e.g., S2) for simplicity.
5.2 MODEL SELECTION ACCURACY
Fully Observed Performance Matrix. In this setup, meta-learners are trained using a full performance matrix P with no missing entries. The model selection accuracy of all meta-learners in this setup is reported in Table 2, where METAGL achieves the best performance in all metrics, with 17% higher MRR than the best baseline (AS).
• Among simple meta-learners, the Global Best meta-learners, which simply average model performance or rank over all observed graphs, are outperformed by the more sophisticated meta-learners AS and ISAC, which leverage dataset similarities for model selection using meta-graph features.
• For optimization-based meta-learners, it is important to be aware of how models and graphs relate to each other, and to have enough flexibility to capture that complex relationship. In methods like ALORS and MetaOD, relations between models and datasets (i.e., the relative positions of models and datasets in the embedding subspace) are learned rather indirectly, via reconstructing the performance matrix.
• METAGL, in contrast, directly captures graph-to-model affinity by modeling their relations via flexible GNNs over the G-M network, in addition to reconstructing the performance matrix. As a result, METAGL consistently outperforms the other optimization-based meta-learners.
Partially Observed Performance Matrix. In this setup, meta-learners are trained using a sparse performance matrix P, obtained by randomly masking out a full P. Figure 4 reports results obtained with varying sparsity, ranging up to 0.9. In this more challenging setup, METAGL consistently performs the best across all levels of sparsity, achieving up to 47% higher MRR than the best baseline.
• With increased sparsity, nearly all meta-learners perform increasingly worse, as one might expect.
• While AS was the best baseline given a full P, its accuracy decreased rapidly as sparsity increased. Since AS selects a model based on the 1NN meta-train graph, it is highly sensitive to P's sparsity.
• Baselines such as the Global Best baselines are more stable, as they average across multiple graphs.
• Optimization-based methods like METAGL and S2 perform favorably to simple meta-learners in this sparse setting.
5.3 EFFECTIVENESS OF META-GRAPH FEATURES
In Figure 5, we evaluate how the performance of meta-learners obtained with the proposed meta-graph features (Section 4.3) compares to that obtained with existing graph-level embedding (GLE) techniques: GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), and GraphLoG (Xu et al., 2021).
Figure 8 in App. I.1 provides results for three other GLE methods, WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019).
• Most points are below the diagonal in Figure 5, i.e., all meta-learners nearly consistently perform better when they use METAGL's meta-graph features than when they use existing GLE methods. This shows the effectiveness of METAGL's features for the proposed task of GL model selection.
• METAGL performs the best, whether METAGL's features or existing GLE methods are used.
5.4 MODEL SELECTION EFFICIENCY
To evaluate how efficient METAGL's model selection is, we measure its runtime (i.e., the time to create meta-graph features for the new graph at test time, plus the time to predict the best model), and compare it with the time to train a GL model. Figure 6 shows the distribution in box plots, where red and green lines denote the median and mean, respectively.
Results show that METAGL is fast and incurs negligible runtime overhead: its runtime is around 1 second or less in most cases (Figure 6a). Notably, compared to training each GL model for only 5% of its available model configurations, METAGL takes considerably less time, i.e., a median of 5% and a mean of 11% of the time required for model training (Figure 6b). Given large-scale test graphs in practice, the speed-up enabled by METAGL will be greater than that reported in Figure 6b, due to the increased training time on such graphs. Also, METAGL's model selection process can be further streamlined, e.g., by parallelizing the meta-feature generation process. We provide additional results on the runtime of METAGL and naive model selection in Appendix I.2.
5.5 ADDITIONAL RESULTS
We present an ablation study in Appendix I.3, which shows the effectiveness of METAGL's proposed components, e.g., the meta-learning objective, the G-M network, and the graph encoder used by METAGL. We evaluate the sensitivity of model selection approaches to the variance of the performance matrix P in Appendix I.4, and compare the predicted model performance with the actual best performance in Appendix I.5.
6 CONCLUSION
As more and more GL models are developed, selecting which one to use is becoming increasingly hard. Toward near-instantaneous, automatic GL model selection, we make the following contributions.
• Problem Formulation. We present the first problem formulation for selecting effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph).
• Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection, and meta-graph features to quantify graph similarities.
• Effectiveness. Using METAGL for model selection greatly outperforms existing meta-learning techniques (up to 47% better), while incurring negligible runtime overhead at test time (∼1 sec).
A BASELINES
Since this is the first work on evaluation-free model selection in graph learning (GL), there are no immediate baselines for comparison. Instead, we adapt baselines used in MetaOD (Zhao et al., 2021) for outlier detection (OD) model selection, as well as collaborative filtering, to our problem setting. The baselines used in experiments can be organized into the following two categories.
(a) Simple meta-learners select a model that performs generally well, either globally or locally.
• Global Best (GB)-AvgPerf selects the model that has the largest average performance over all meta-train graphs.
• Global Best (GB)-AvgRank computes the rank of all models (in percentile) for each graph, and selects the model with the largest average ranking over all meta-train graphs.
• ISAC (Kadioglu et al., 2010) first clusters meta-train datasets using meta-graph features, and at test time, finds the cluster closest to the test graph, and selects the model with the largest average performance over all graphs in that cluster.
• ARGOSMART (AS) (Nikolić et al., 2013) finds the meta-train graph closest to the test graph (i.e., 1NN) in terms of meta-graph feature similarity, and selects the model with the best result on the 1NN graph.
(b) Optimization-based meta-learners learn to estimate the model performance by modeling the relation between meta-graph features and model performances.
• Supervised Surrogates (S2) (Xu et al., 2012) learns a surrogate model (a regressor) that maps meta-graph features to model performances.
• ALORS (Misir & Sebag, 2017) factorizes the performance matrix into latent factors on graphs and models, and estimates the performance to be the dot product between the two factors, where a non-linear regressor maps meta-graph features into the latent graph factors.
• NCF (He et al., 2017) replaces the dot product used in ALORS with a more general neural architecture that estimates performance by combining the linearity of matrix factorization and non-linearity of deep neural networks.
• MetaOD (Zhao et al., 2021) pioneered the field of unsupervised OD model selection by designing meta-features specialized to capture the outlying characteristics of datasets, as well as improving upon ALORS with the adoption of an NDCG-based meta-training objective. We enable MetaOD to be applicable to our problem setting, by applying our proposed meta-graph features to MetaOD.
In addition, we also include Random Selection (RS) as a baseline, to see how meta-learners perform in comparison to random scoring. Note that among the above approaches, only GB-AvgPerf and GB-AvgRank do not rely on meta-features for model selection. All other meta-learners make use of the proposed meta-graph features to estimate model performances on an unseen test graph.
B NOTATIONS
Table 3 provides a list of notations frequently used in this work.
C MODEL SET
A model in the model set M refers to a graph representation learning (GRL) method along with its hyperparameter settings, plus a predictor that makes a downstream task-specific prediction given the node embeddings from the GRL method. In this work, we use a link predictor which scores a given link by computing the cosine similarity between the two nodes' embeddings. Table 4 shows the
Methods | Hyperparameter Settings | Count
SGC (Wu et al., 2019a) | # (number of) hops k ∈ {1, 2, 3} | 3
GCN (Kipf & Welling, 2017) | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10} | 6
GraphSAGE (Hamilton et al., 2017) | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10}, aggregation functions f ∈ {mean, gcn, lstm} | 18
node2vec (Grover & Leskovec, 2016) | p, q ∈ {1, 2, 4} | 9
role2vec (Ahmed et al., 2018) | p, q ∈ {0.25, 1, 4}, α ∈ {0.01, 0.1, 0.5, 0.9, 0.99}, motif combinations H ∈ {{H1}, {H2, H3}, {H2, H3, H4, H6, H8}, {H1, H2, . . . , H8}} | 180
GraRep (Cao et al., 2015) | k ∈ {1, 2} | 2
DeepWalk (Perozzi et al., 2014) | p = 1, q = 1 | 1
HONE (Rossi et al., 2018) | k ∈ {1, 2}, Dlocal ∈ {4, 8, 16}, variant v ∈ {1, 2, 3, 4, 5} | 30
node2bits (Jin et al., 2019) | walk num wn ∈ {5, 10, 20}, walk len wl ∈ {5, 10, 20}, log base b ∈ {2, 4, 8, 10}, feats f ∈ {16} | 36
DeepGL (Rossi et al., 2020) | α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, motif size ∈ {4}, eps tolerance t ∈ {0.01, 0.05, 0.1}, relational aggr. ∈ {{m}, {p}, {s}, {v}, {m, p}, {m, v}, {s, m}, {s, p}, {s, v}} where m, p, s, v denote mean, product, sum, var | 135
LINE (Tang et al., 2015) | # hops/order k ∈ {1, 2} | 2
Spectral Emb. (Luo et al., 2003) | tolerance t ∈ {0.001} | 1
Total Count | | 423
complete list of 12 popular GRL methods and their specific hyperparameter settings, which compose 423 unique models in the model set M. Note that the link predictor is omitted from Table 4, since we apply the same cosine-similarity-based link predictor to all GRL methods.
D META-GRAPH FEATURES
Structural Meta-Feature Extractors. To capture the local structural properties around a node or an edge, we compute the distribution of node degrees, the number of wedges (i.e., paths of length 2) and triangles centered at each node, as well as the frequency of triangles for each edge. To capture the global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. We also capture global graph-level statistics (i.e., different from local node/edge-level structural properties), such as the density of A and AAᵀ, where A is the adjacency matrix, and the degree assortativity coefficient r.
Global Statistical Functions. For each of the structural property distributions (degree, k-core numbers, and so on) derived by the above structural meta-feature extractors, we apply the set Σ of global statistical functions (Table 5) over it to obtain a fixed-length vector representation for the node/edge/graph-level structural feature distribution.
After obtaining a set of meta-graph features, we concatenate all of them together to create the final meta-graph feature vector m for the graph.
E GRAPH DATASETS
The testbed used in this work comprises 301 graphs that have widely varying structural characteristics. Table 6 provides a list of all graph datasets in the testbed. All graph data are from (Rossi & Ahmed, 2015); they are publicly available under the Creative Commons Attribution-ShareAlike License.
Graph # Nodes # Edges Graph # Nodes # Edges
66 CL-1000-1d8-trial3 925 3714 217 nci1_g3711 89 106 67 CL-1000-1d9-trial1 928 3510 218 nci1_g3901 93 102 68 CL-1000-1d9-trial2 912 3053 219 nci1_g3990 90 105 69 CL-1000-1d9-trial3 932 3278 220 nci1_g4094 90 98 70 CL-1000-2d0-trial1 909 2795 221 power-1138-bus 1138 2596 71 CL-1000-2d0-trial2 899 2941 222 power-494-bus 494 1080 72 CL-1000-2d0-trial3 916 3010 223 power-662-bus 662 1568 73 CL-1000-2d1-trial1 903 2430 224 power-685-bus 685 1967 74 CL-1000-2d1-trial2 911 2734 225 power-bcspwr09 1723 4117 75 CL-1000-2d1-trial3 915 2782 226 power-eris1176 1176 9864 76 DD_g1 327 899 227 rec-amazon 91813 125704 77 DD_g10 146 328 228 rec-movielens-tag-movies-10m 16528 71081 78 DD_g100 349 1005 229 road-chesapeake 39 170 79 DD_g1000 183 408 230 road-ChicagoRegional 1467 1298 80 DD_g1001 88 203 231 road-euroroad 1174 1417 81 DD_g1002 104 255 232 road-luxembourg-osm 114599 119666 82 DD_g1003 53 116 233 road-minnesota 2642 3303 83 DD_g1004 94 230 234 road-usroads-48 126146 161950 84 DD_g1005 370 903 235 rt-retweet 96 117 85 DD_g1006 246 568 236 rt-twitter-copen 761 1029 86 DD_g1007 309 732 237 rt_alwefaq 4171 7063 87 DD_g1008 109 304 238 rt_assad 2139 2788 88 DD_g1009 129 272 239 rt_bahrain 4676 7979 89 DD_g101 306 728 240 rt_barackobama 9631 9775 90 DD_g1010 157 363 241 rt_damascus 3052 3869 91 DD_g1011 47 136 242 rt_dash 6288 7436 92 DD_g1012 146 365 243 rt_gmanews 8373 8721 93 DD_g1013 93 211 244 rt_gop 4687 5529 94 DD_g1014 119 273 245 rt_http 8917 10314 95 DD_g1015 102 244 246 rt_islam 4497 4616 96 DD_g1016 113 291 247 rt_israel 3698 4165 97 DD_g1017 162 376 248 rt_lebanon 3961 4436 98 DD_g1018 296 680 249 rt_libya 5067 5541 99 DD_g1019 131 353 250 rt_lolgop 9765 10075 100 DD_g102 561 1422 251 rt_obama 3212 3423 101 DD_g1020 228 541 252 rt_occupy 3225 3944 102 DD_g1021 329 787 253 rt_occupywallstnyc 3609 3833 103 DD_g1022 294 730 254 rt_oman 4904 6230 104 DD_g1023 172 425 255 rt_onedirection 7987 8103 105 DD_g1024 59 160 256 rt_p2 4902 6018 106 DD_g1025 88 205 257 rt_saudi 7252 8061 107 DD_g1026 247 578 258 rt_tcot 4547 5503 108 DD_g1027 108 223 259 rt_tlot 3665 4475 109 DD_g1028 72 137 260 rt_uae 5248 6387 110 DD_g1029 99 215 261 rt_voteonedirection 2280 2464 111 DD_g103 265 647 262 sc-nasasrb 54870 1311227 112 DD_g1030 136 351 263 soc-advogato 6551 43427 113 DD_g104 372 999 264 soc-dolphins 62 159 114 DD_g105 423 1192 265 soc-firm-hi-tech 33 91 115 DD_g106 574 1355 266 soc-gplus 23628 39194 116 DD_g107 130 292 267 soc-hamsterster 2426 16630 117 DD_g108 483 1137 268 soc-highschool-moreno 70 274 118 DD_g109 132 315 269 soc-physicians 241 923 119 DD_g11 312 761 270 soc-sign-bitcoinalpha 3783 14124 120 DD_g110 394 1137 271 soc-student-coop 185 311 121 DD_g111 483 1520 272 soc-wiki-Vote 889 2914 122 DD_g112 266 631 273 socfb-Amherst 2235 90954 123 DD_g113 347 853 274 socfb-Bowdoin47 2252 84387 124 DD_g114 334 761 275 socfb-Caltech 769 16656 125 DD_g115 336 946 276 socfb-Hamilton46 2314 96394 126 eco-everglades 69 885 277 socfb-Haverford76 1446 59589 127 eco-florida 128 2075 278 socfb-nips-ego 2888 2981 128 eco-foodweb-baydry 128 2106 279 socfb-Oberlin44 2920 89912 129 eco-foodweb-baywet 128 2075 280 socfb-Reed98 962 18812 130 eco-mangwet 97 1446 281 socfb-Simmons81 1518 32988 131 eco-stmarks 54 353 282 socfb-Smith60 2970 97133 132 email-dnc-corecipient 906 10429 283 socfb-Swarthmore42 1659 61050 133 email-dnc-leak 1891 4465 284 socfb-Trinity100 2613 111996 134 email-enron-only 143 623 285 socfb-USFCA72 2682 65252 135 email-EU 32430 54397 286 socfb-Vassar85 3068 119161 136 email-radoslaw 167 3251 287 
socfb-Villanova62 7772 314989 137 email-univ 1133 5451 288 socfb-Wellesley22 2970 94899 138 enzymes_g103 59 115 289 socfb-Williams40 2790 112986 139 enzymes_g118 95 121 290 tech-routers-rf 2113 6632
140 enzymes_g123 90 127 291 tech-routers-rf 2113 6632 141 enzymes_g199 62 108 292 web-BerkStan 12305 19500 142 enzymes_g204 57 105 293 web-edu 3031 6474 143 enzymes_g209 57 101 294 web-EPA 4271 8909 144 enzymes_g215 48 104 295 web-google 1299 2773 145 enzymes_g224 54 105 296 web-indochina-2004 11358 47606 146 enzymes_g279 60 107 297 web-polblogs 643 2280 147 enzymes_g291 62 104 298 web-spam 4767 37375 148 enzymes_g292 60 100 299 web-webbase-2001 16062 25593 149 enzymes_g293 96 109 300 web-wiki-chameleon 2277 31421 150 enzymes_g295 123 139 301 web-wiki-crocodile 11631 170918 151 enzymes_g296 125 141
F EXPERIMENTAL DETAILS
F.1 EXPERIMENTAL SETTINGS
Software. We used PyTorch1 for implementing the training and inference pipeline, and used the DGL’s implementation of HGT2. For MetaOD (Zhao et al., 2021), we used the implementation provided by the authors3. We used the Karate Club library (Rozemberczki et al., 2020) for the implementations of the following graph-level embedding (GLE) methods, Graph2Vec (Narayanan et al., 2017), GL2Vec (Chen & Koga, 2019), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). For GraphLoG (Xu et al., 2021), we used the authors’ implementation4. We used open source libraries, such as NetworkX5 and NumPy6, for implementing meta-graph feature extractors.
Hyperparameters. We set the embedding size k to 32 for METAGL and other meta-learners that learn embeddings of models and graphs. For METAGL, we created the G-M network by connecting nodes to their top-30 similar nodes. As the embedding function f(·) in METAGL, we used HGT (Hu et al., 2020) with 2 layers and 4 heads per layer. HGT is included in the Deep Graph Library (DGL), which is licensed under the Apache License 2.0. For training, we used the Adam optimizer with a learning rate of 0.00075 and a weight decay of 0.0001. For GLE approaches, we used the default hyperparameter settings specified in the corresponding library and GitHub repository.
Link Prediction Model Training. Given a graph G, we first hold out 10% of the edges in graph G to be used for evaluation, and train GL models with the resulting subgraph for link prediction. The training of GL models was performed by sampling 20 negative edges per positive edge, computing the link score by applying a dot product between the two corresponding node embeddings, followed by a sigmoid function, and then optimizing a binary cross entropy loss for the positive and negative edge scores. For evaluation, we randomly sampled the same number of negative edges as the positive edges, and evaluated the predicted link scores in terms of mean average precision.
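A minimal PyTorch sketch of this training objective is given below (negative edges sampled uniformly at random; the dot product plus sigmoid is folded into the numerically stabler BCE-with-logits loss). Function and variable names are illustrative, not the exact training code.

```python
import torch
import torch.nn.functional as F

def link_pred_loss(emb, pos_edges, num_nodes, num_neg=20):
    """BCE loss over positive edges and uniformly sampled negative edges.

    emb: (num_nodes, dim) node embeddings; pos_edges: (P, 2) long tensor.
    """
    src, dst = pos_edges[:, 0], pos_edges[:, 1]
    pos_score = (emb[src] * emb[dst]).sum(dim=1)          # dot product per edge
    neg_src = src.repeat_interleave(num_neg)              # 20 negatives per positive
    neg_dst = torch.randint(0, num_nodes, (neg_src.numel(),))
    neg_score = (emb[neg_src] * emb[neg_dst]).sum(dim=1)
    scores = torch.cat([pos_score, neg_score])
    labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    return F.binary_cross_entropy_with_logits(scores, labels)
```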
F.2 EVALUATION OF MODEL SELECTION ACCURACY (SECTION 5.2)
In our evaluation involving a partially observed performance matrix, we extended baseline metalearners as follows so they can operate in the presence of missing entries in the performance matrix.
• Global Best-AvgPerf averaged the observed performance entries alone, ignoring missing values. If an average performance cannot be computed for some model (which is the case when a model has no observed performance entries for any graph), we use the mean of the averaged performances of the other models in its place (see the sketch after this list).
1 https://pytorch.org/
2 https://www.dgl.ai/
3 https://github.com/yzhao062/MetaOD
4 https://github.com/DeepGraphLearning/GraphLoG
5 https://networkx.org/
6 https://numpy.org/
• Global Best-AvgRank computed the model rankings for each graph in percentile, as the number of observed model performances may differ across graphs, and averaged the rank percentiles over observed cases only, as in Global Best-AvgPerf.
• ISAC handled the sparse performance matrix in the same way as Global Best-AvgPerf, except that only a subset of graphs similar to the test graph is considered in ISAC.
• ARGOSMART (AS) computed the mean of the observed performance entries of the 1NN graph, and used this quantity in place of missing values.
• ALORS factorized the sparse performance matrix using a missing-value-aware non-negative matrix factorization algorithm.
• Supervised Surrogates (S2), NCF, and MetaOD performed optimization by considering only the observed performance values in the loss function, while skipping over missing entries. Early stopping based on validation performance was also done with respect to the observed performances alone.
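As an illustration of the masked averaging above, a minimal NumPy sketch of the extended Global Best-AvgPerf follows (missing entries encoded as NaN; the function name is illustrative):

```python
import numpy as np

def global_best_avg_perf(P):
    """Pick the model with the best mean observed performance.

    P: (n_graphs, n_models) performance matrix with np.nan for missing entries.
    """
    avg = np.nanmean(P, axis=0)        # per-model mean over observed graphs only
    missing = np.isnan(avg)            # models with no observed performance at all
    avg[missing] = np.nanmean(avg)     # fall back to the mean of the other averages
    return int(np.argmax(avg))
```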
F.3 EVALUATION OF META-GRAPH FEATURES (SECTION 5.3)
Except for WaveletCharacteristic and GraphLoG, we applied graph-level embedding (GLE) approaches to all graphs in our testbed, and meta-learners were trained and evaluated using the representations of all graphs via 5-fold cross validation. Since WaveletCharacteristic and GraphLoG could not scale to some of the largest graphs in the testbed (e.g., due to out-of-memory errors), we excluded the 9% and 27% largest graphs for GraphLoG and WaveletCharacteristic, respectively, and evaluated meta-learners using the resulting subset of graphs. Note that, in these cases, METAGL was also trained and evaluated using the same subset of graphs.
G ADDITIONAL DETAILS AND ANALYSIS OF METAGL
G.1 METAGL ALGORITHM AND META-GRAPH FEATURES
Algorithm 1 provides the detailed steps of METAGL, for both offline meta-training (top) and online model selection (bottom). In METAGL, we log-transform the meta-graph features and append the transformed values to the original features, as this helps with model selection. We use the notation m in Algorithm 1, as well as in the text, to refer to these meta-graph features used by METAGL.
G.2 META-LEARNING OBJECTIVE FOR SPARSE PERFORMANCE MATRIX
Given a sparse performance matrix P, meta-training of METAGL can be performed by modifying the top-1 probability (Equation (3)) and the loss function (Equation (4)), such that the missing entries in P are ignored as follows:
p_top1^{P̂i}(j) = I_pij(π(p̂ij)) / Σ_{k=1}^{m} I_pik(π(p̂ik)) = I_pij(exp(p̂ij)) / Σ_{k=1}^{m} I_pik(exp(p̂ik)),    (7)
L(P, P̂) = − Σ_{i=1}^{n} Σ_{j=1}^{m} I_pij( p_top1^{Pi}(j) log( p_top1^{P̂i}(j) ) ),    (8)
where I_pij(·) is defined as

I_pij(x) = x if pij is observed in the performance matrix P, and I_pij(x) = 0 if pij is missing from P.    (9)
Thus the supervision signal for each graph comes only from the model performances observed on it. If an entire row in P is empty, the loss terms for the corresponding graph are dropped from Equation (8).
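The masked objective of Eqs. (7)-(8) can be sketched in PyTorch as follows; missing entries are excluded from both softmax normalizations via a binary observation mask (a sketch, not the exact implementation):

```python
import torch

def sparse_top1_loss(P_true, P_hat, mask):
    """Top-1 listwise loss over observed entries only, Eqs. (7)-(8).

    mask: (n, m) tensor with 1 where p_ij is observed in P and 0 otherwise.
    """
    obs = mask.bool()
    neg_inf = float("-inf")
    p_true = torch.softmax(P_true.masked_fill(~obs, neg_inf), dim=1)
    log_p_hat = torch.log_softmax(P_hat.masked_fill(~obs, neg_inf), dim=1)
    terms = (p_true * log_p_hat).masked_fill(~obs, 0.0)   # zero out missing entries
    return -terms[obs.any(dim=1)].sum()  # rows with no observations are dropped
```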
G.3 G-M NETWORK
Figure 7 illustrates the G-M network (graph-model network) (Section 4.4), which is a multi-relational bipartite network between graph nodes and model nodes. In the G-M network, model and graph nodes are connected via five types of edges (e.g., P-m2m, P-g2m, M-g2g), which is shown as edges with distinct line styles and colors. Note that while Figure 7 shows only one edge per edge type, in the G-M network, each node is connected to its top-k similar nodes.
G.4 ATTENTIVE GRAPH NEURAL NETWORKS AND HETEROGENEOUS GRAPH TRANSFORMER
The embedding function f(·) in METAGL (Section 4.4) produces embeddings of models and graphs via weighted neighborhood aggregation over the multi-relational G-M network. Specifically, we define f(·) using the Heterogeneous Graph Transformer (HGT) (Hu et al., 2020), which is a relation-aware graph neural network (GNN) that performs attentive neighborhood aggregation over the G-M network. Let z_t^ℓ denote node t's embedding produced by the ℓ-th HGT layer, which becomes the input of the (ℓ+1)-th layer.
Algorithm 1: METAGL — Offline Meta-Training (Top) and Online Model Selection (Bottom)
Input: meta-train graph database G, model set M, embedding dimension k
Output: meta-learner for model selection
/* (Offline) Meta-Learner Training (Sec. 4.1) */
1:  Train and evaluate the models in M on the graphs in G to obtain the performance matrix P
2:  Extract meta-graph features m for each graph Gi in G (Sec. 4.3)
3:  Factorize P to obtain latent graph factors U and model factors V, i.e., P ≈ UVᵀ
4:  Learn an estimator φ(·) such that φ(m) = Ûi ≈ Ui
5:  Create the meta-train G-M network Gtrain (Sec. 4.4)
6:  while not converged do
7:      for i = 1, . . . , n do
8:          Get the embedding f(W[m; φ(m)]) of train graph Gi on Gtrain
9:          for j = 1, . . . , m do
10:             Get the embedding f(Vj) of each model Mj on Gtrain
11:             Estimate p̂ij = 〈f(W[m; φ(m)]), f(Vj)〉 (Eqn. 2)
12:         end
13:     end
14:     Compute the meta-training loss L(P, P̂) (Eqn. 4) and optimize the parameters
15: end

Input: new graph Gtest
Output: selected model M* for Gtest
/* (Online) Model Selection (Sec. 4.2) */
16: Extract meta-graph features mtest = ψ(Gtest)
17: Estimate the latent factor Ûtest = φ(mtest) for the test graph Gtest
18: Create the test G-M network Gtest by extending Gtrain with new edges between the test graph node and the existing nodes in Gtrain (Sec. 4.4)
19: Get the embedding f(W[mtest; Ûtest]) of the test graph on Gtest
20: Get the embedding f(Vj) of each model Mj on Gtest
21: Return the best model M* ← argmax_{Mj∈M} 〈f(W[mtest; Ûtest]), f(Vj)〉
Given L total layers, the final embedding h_t of node t is the output of the last layer, i.e., z_t^L. In general, node embeddings z_t^ℓ produced by the ℓ-th layer of an attention-based GNN, such as HGT, can be expressed as:
z_t^ℓ = Aggregate_{∀s∈N(t), ∀e∈E(s,t)} ( Attention(s, t) · Message(s) )    (10)
where s and t are source and target nodes, respectively; N(t) denotes all the source nodes of node t; and E(s, t) denotes all edges from node s to t. There are three basic operators: Attention, which assigns different weights to neighbors based on the estimated importance of node s with respect to target node t; Message, which extracts the message vector from the source node s; and Aggregate, which aggregates the neighborhood messages by the attention weight.
HGT effectively processes multi-relational graphs, such as the proposed G-M network, by designing all three of the above operators to be aware of node types and edge types, e.g., by employing a distinct set of projection weights for each type of node and edge, and by utilizing node- and edge-type-dependent attention mechanisms. We refer the reader to (Hu et al., 2020) for the details of how HGT defines the above three operators. In summary, METAGL computes the embedding function f(x_t) by providing node t's input features x_t as the initial embedding (i.e., z_t^0) to HGT, and returning z_t^L, the output from the last layer, which is computed over the G-M network via relation-aware attentive neighborhood aggregation.
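For intuition, a minimal single-head PyTorch sketch of the attentive aggregation in Eq. (10) is given below; it omits HGT's node/edge-type-specific projections and multi-head attention, so it illustrates the general pattern rather than HGT itself.

```python
import torch
import torch.nn as nn

class AttnAggregate(nn.Module):
    """Single-head attentive neighborhood aggregation in the style of Eq. (10).

    HGT additionally uses node/edge-type-specific projections and multiple heads.
    """
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # target-node query projection
        self.k = nn.Linear(dim, dim)  # source-node key projection
        self.v = nn.Linear(dim, dim)  # source-node message projection

    def forward(self, z_t, z_src):
        # z_t: (dim,) target embedding; z_src: (num_neighbors, dim) source embeddings
        att = torch.softmax(self.k(z_src) @ self.q(z_t), dim=0)  # Attention(s, t)
        msg = self.v(z_src)                                      # Message(s)
        return att @ msg                                         # Aggregate
```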
G.5 TIME COMPLEXITY ANALYSIS
We now state the time complexity of our approach for inferring the best model given a new unseen graph G′ = (V′, E′). Let G = (V, E) be the G-M network, which is comprised of model nodes and graph nodes, and induced by c-NN (nearest neighbor) search (Section 4.4 and Appendix G.3). Let k denote the embedding size, and h the number of attention heads in HGT (Appendix G.4). The time complexity of METAGL is

O(q|E′|Δ + |V|ck²/h)    (11)

where q is the number of meta-graph feature extractors, |E′| is the number of edges in the new unseen graph G′, and Δ is a small constant defined below. Note that both q and Δ are small and thus negligible. Hence, METAGL is fast and efficient.
Meta-Graph Feature Extraction: The first term of the above time complexity includes the time required to estimate the frequency of all network motifs with {2, 3, 4} nodes, which is O(|E′|Δ) in the worst case, where Δ is a small constant representing the maximum sampled degree, which can be set by the user. See Ahmed et al. (2016) for further details. The other structural meta-feature extractors, such as PageRank, all take at most O(|E′|) time. Furthermore, our approach is flexible and supports any set of meta-graph feature extractors. Thus, we can achieve a slightly better time by restricting the set of meta-graph feature extractors to those that can be computed in time linear in the number of edges of any arbitrary graph. In this case, the Δ term is dropped and we have simply O(q|E′| + |V|ck²/h). Also, note that the feature extractors are independent of each other, and thus can be run in parallel.
Embedding Models and Graphs: To augment the G-M network G given a new graph G′, we find a fixed number of nearest neighbors for G′, which takes O(|V|k) time. Then we embed models and graphs by applying HGT over the G-M network G = (V, E). Assuming an HGT with h attention heads, the time to apply HGT over G is O(|V|k² + |E|k²/h). More specifically, the time taken for the Attention(·) and Message(·) functions is O(|V|k² + |E|k²/h), where O(|V|k²) is for the feature transformation by h heads for all nodes, and O(|E|k²/h) is for the message transformation and attention computation over each edge. Similarly, the Aggregate(·) step takes O(|V|k² + |E|k) time. Assuming k²/h > k, the time for Aggregate(·) can be absorbed into the time for the other steps. Further, given that the G-M network is induced by c-NN search, we have O(|E|) = O(c|V|), and thus the time complexity for embedding models and graphs is O(|V|ck²/h).
H ADDITIONAL RELATED WORK
H.1 MODEL SELECTION IN MACHINE LEARNING
In this section, we provide a further review of model selection in machine learning, which we group into two categories.
Evaluation-Based Model Selection: A majority of model selection methods belong to this category. Representative techniques used by these methods include grid search (Liashchynskyi & Liashchynskyi, 2019), random search (Bergstra & Bengio, 2012), early stopping-based (Golovin et al., 2017) and bandit-based (Li et al., 2017b) approaches, and Bayesian optimization (BO) (Snoek et al., 2012; Wu et al., 2019b; Falkner et al., 2018). Among them, BO methods are more efficient than grid or random search, requiring fewer evaluations of hyperparameter configurations (HCs), as they determine which HC to try next in a guided manner using prior experience from previous trials. Since these methods perform model training or evaluation multiple times using different HCs, they are much less efficient than the following group of methods.
Evaluation-Free Model Selection: Methods in this category do not require model evaluation for model selection. A simple approach (Abdulrahman et al., 2018) identifies the best model by considering the models’ rankings observed on prior datasets. Instead of finding the globally best model, ISAC (Kadioglu et al., 2010) and AS (Nikolić et al., 2013) select a model that performed well on similar datasets, where the dataset similarity is modeled in the meta-feature space via clustering (Kadioglu et al., 2010) or k-nearest neighbor search (Nikolić et al., 2013). A different group of methods perform optimization-based model selection, where the model performance is estimated by modeling the relation between meta-features and model performances. Supervised Surrogates (Xu et al., 2012) learns a surrogate model that maps meta-features to model performance. Recently, MetaOD (Zhao et al., 2021) outperformed all of these methods in selecting outlier detection algorithms. As a method in this category, the proposed METAGL builds upon MetaOD and extends it for an effective and automatic GL model selection. Most importantly, METAGL selects a graph learning model (e.g., link predictor) for the given graph, while MetaOD selects an outlier detection (OD) model for the given dataset (n-dimensional input features). To this end, METAGL designs meta features to capture the characteristics of graphs, while MetaOD designs meta features specialized for OD tasks. Also, they adopt different meta-training objectives: METAGL adapts the top-1 probability for meta-training, whereas MetaOD uses an NDCG-based objective. Furthermore, METAGL learns the embeddings of models and graphs by applying a heterogeneous GNN over the G-M network, which allows a flexible modeling of the relations between different models and graphs. By contrast, in MetaOD, the embeddings of models and datasets are optimized separately, where the relations between models and datasets are modeled rather indirectly via reconstructing the performance matrix. Note that all of these earlier methods, except the first simple approach, rely on meta-features, and they focus on non-graph datasets. By using the proposed meta-graph features, they could be applied to the graph learning model selection task.
H.2 COMPARISON WITH MODEL-AGNOSTIC META-LEARNING (MAML) (FINN ET AL., 2017)
MAML employs meta-learning to train a model’s initial parameters such that the model can perform well on a new task after the parameters have been updated via a few gradient steps using the data from the new task. In other words, given a model, MAML initializes one specific model’s parameters via meta-learning over multiple observed tasks, such that the meta-trained model can quickly adapt to a new task after learning from a small number of new data (i.e., few-shot learning). On the other hand, METAGL employs meta-learning to carry over the prior knowledge of multiple different models’ performance on different graphs for evaluation-free selection of graph learning algorithms.
Since MAML meta-trains a specific model for fast adaptation to a new dataset, it is not for selecting a model from a model set consisting of a wide variety of learning algorithms. Further, MAML fine-tunes a meta-trained model in a few-shot learning setup, whereas in our problem setup, no training and evaluation is to be done given a new graph dataset. Due to these reasons, MAML is not applicable to the proposed evaluation-free GL model selection problem (Problem 1).
I ADDITIONAL RESULTS
I.1 EFFECTIVENESS OF META-GRAPH FEATURES
Figure 8 shows how accurately meta-learners can perform model selection when they use the proposed meta-graph features (Section 4.3) vs. six state-of-the-art graph-level embedding (GLE) techniques, i.e., GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), GraphLoG (Xu et al., 2021), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). As discussed in Section 5.3, all meta-learners achieve higher model selection accuracy nearly consistently by using METAGL's meta-graph features than by using these GLE techniques, and METAGL outperforms all meta-learners regardless of which features are used.
I.2 MODEL SELECTION TIME
Figure 9: Distribution of the time for METAGL to make a prediction / the time to create meta-graph features (in percentage). Red and green lines denote the median and mean, respectively.
Table 7 shows results comparing the runtime (in seconds) of naive model selection with the runtime of METAGL. Note that naive model selection requires training and evaluating each method in the model set, while in METAGL, the runtime involves only the time to generate meta-graph features (penultimate row) and to predict the best model via a forward pass.

1. What is the focus of the paper regarding graph model selection?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application of MetaOD principles to graph data?
3. How does the reviewer assess the novelty and generalization capabilities of the method?
4. Are there any concerns regarding the method's ability to capture node feature distribution differences and handle knowledge gaps between graphs?
5. Can the proposed method be applied to real-world scenarios, and what kind of applications would be suitable for it?
Summary Of The Paper
This work focuses on graph model selection without training, by matching meta-feature and model representations. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed method. The overall idea is closely related to the pioneering work MetaOD; by comparison, this work adapts MetaOD's key idea to explore the graph model selection task with meta-knowledge.
Strengths And Weaknesses
Strengths:
The presentation and writing are clear enough to convey the motivation and the corresponding solution.
Meta-features for graph data are clearly defined and fully exploit the graph's structural knowledge.
Extensive experiments are conducted to demonstrate the superiority of the proposed method over state-of-the-art meta model selection methods.
Weaknesses:
The novelty is limited to applying the principles of MetaOD to graph data by matching meta-features of graph data with model representations. It is interesting to compare the way "meta-learning" is run in this work with optimization-based meta-learning methods like MAML [1]. What they have in common is that the finally selected model should deliver strong prediction performance. The principle emphasized in this work is to select the best model from a pool of stored models that were trained on historical data, while optimization-based meta-learning can capture the unique knowledge in the input data simply by fine-tuning models. I strongly suggest discussing the difference between them and comparing their prediction performance.
Though the proposed method can quickly select one model from the candidates, it is still difficult to believe that comparing only meta-features extracted from graph structures can select the exact model for the input data. What if the extracted meta-features cannot represent the differences in node feature distributions across graphs? Is the selected model still the best choice, and is learning-free model selection suitable for dealing with the knowledge gap between the new graph and the graphs in the data pool?
Last but not least, I still have concerns about the generalization of the proposed method. It is well known that each graph consists of a unique node set. How is it possible that models trained on different graphs can be applied to a graph with totally different nodes? The authors should present some concrete applications to which the proposed method can be applied.
References:
Finn, Chelsea, Pieter Abbeel, and Sergey Levine. "Model-agnostic meta-learning for fast adaptation of deep networks." In International conference on machine learning, pp. 1126-1135. PMLR, 2017.
Clarity, Quality, Novelty And Reproducibility
The presentation is overall clear enough to convey the main idea. However, I have concerns about the novelty of the proposed method.
MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning
Abstract
Given a graph learning task, such as link prediction, on a new graph, how can we select the best method as well as its hyperparameters (collectively called a model) without having to train or evaluate any model on the new graph? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for evaluation-free graph learning model selection, called METAGL, which utilizes the prior performances of existing methods on various benchmark graph datasets to automatically select an effective model for the new graph, without any model training or evaluations. To quantify similarities across a wide variety of graphs, we introduce specialized meta-graph features that capture the structural characteristics of a graph. Then we design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner operating on this G-M network, which estimates the relevance of each model to different graphs. Extensive experiments show that using METAGL to select a model for a new graph greatly outperforms several existing meta-learning techniques tailored for graph learning model selection (up to 47% better), while being extremely fast at test time (∼1 sec).
1 INTRODUCTION
Given a graph learning (GL) task, such as link prediction, for a new graph dataset, how can we select the best method as well as its hyperparameters (HPs) (collectively called a model) without performing any model training or evaluations on the new graph? GL has received increasing attention recently (Zhang et al., 2022), achieving successes across various applications, e.g., recommendation and ranking (Fan et al., 2019; Park et al., 2020), traffic forecasting (Jiang & Luo, 2021), bioinformatics (Su et al., 2020), and question answering (Park et al., 2022). However, as GL methods continue to be developed, it becomes increasingly difficult to determine which model to use for the given graph.
Model selection (i.e., selecting a method and its configuration such as HPs) for graph learning has been largely ad hoc to date. A typical approach, called “no model selection”, is to simply apply popular methods to new graphs, often with the default HP values. However, it is well known that there is no universal learning algorithm that performs best on all problem instances (Wolpert & Macready, 1997), and such consistent model selection is often suboptimal. At the other extreme lies “naive model selection” (Fig. 1b), where all candidate models are trained on the new graph, evaluated on a hold-out validation graph, and then the best performing model for the new graph is selected. This approach is very costly as all candidate models are trained when a new graph arrives. Recent methods on neural architecture search (NAS) and hyperparameter optimization (HPO) of GL methods, which we review in Section 3, adopt smarter and more efficient strategies, such as Bayesian optimization (Snoek et al., 2012; Tu et al., 2019), which carefully choose a relatively small number of HP settings to evaluate. However, they still need to evaluate multiple configurations of each GL method on the new graph.
Evaluation-free model selection is yet another paradigm, which aims to tackle the limitations of the above approaches by attempting to simultaneously achieve the speed of no model selection and the accuracy of exhaustive model selection. Recently, a seminal work by Zhao et al. (2021) proposed a technique for outlier detection (OD) model selection, which carries over the observed performance of
OD methods on benchmark datasets for selecting OD methods. However, it does not address the unique challenges of GL model selection, and cannot be directly used to solve the problem. Inspired by this work, we systematically tackle the model selection problem for graph learning, especially link prediction. We choose link prediction as it is a key task for graph-structured data: it has many applications (e.g., recommendation, knowledge graph reasoning, and entity resolution), and several inference and learning tasks can be cast as link prediction problems (e.g., Fadaee & Haeri (2019)). In this work, we develop METAGL, the first meta-learning framework for evaluation-free selection of graph learning models, which finds an effective GL model to employ for a new graph without training or evaluating any GL model on the new graph, as Figure 1a illustrates. METAGL satisfies all of the desirable features for GL model selection listed in Table 1, while no existing paradigm satisfies all of them.
The high-level idea of meta-learning based model selection is to estimate a candidate model’s performance on the new graph based on its observed performances on similar graphs. Our meta-learning problem for graph data presents a unique challenge of how to model graph similarities, and what characteristic features (i.e., meta-features) of a graph to consider. Note that this step is often not needed for traditional meta-learning problems on non-graph data, as features for non-graph objects (e.g., location, age of users) are often readily available. Also, the high complexity and irregularity of graphs (e.g., different number of nodes and edges, and widely varying connectivity patterns among different graphs) makes the task even more challenging. To handle these challenges, we design specialized meta-graph features that can characterize major structural properties of real-world graphs.
Then, to estimate the performance of a candidate model on a given graph, METAGL learns to embed models and graphs in a shared latent space such that their embeddings reflect the graph-to-model affinity. Specifically, we design a multi-relational graph called the G-M network, which captures multiple types of relations among models and graphs, and develop a meta-learner operating on this G-M network, based on an attentive graph neural network that is optimized to leverage meta-graph features and prior model performance to produce model and graph embeddings that can be effectively used to estimate the best performing model for the new graph. METAGL greatly outperforms existing meta-learners in GL model selection (Fig. 1c). In sum, the key contributions of this work are as follows.
• Problem Formulation. We formulate the problem of selecting effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph). To the best of our knowledge, we are the first to study this important problem.
• Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection. For meta-learning on various graphs, we design meta-graph features that quantify graph similarities by capturing the structural characteristics of a graph.
• Effectiveness. Using METAGL for GL model selection greatly outperforms existing meta-learning techniques (up to 47% better, Fig. 1c), with negligible runtime overhead at test time (∼1 sec).
Benchmark Data/Code: To facilitate further research on this important new problem, we release code and data at https://github.com/NamyongPark/MetaGL, including performances of 400+ models on 300+ graphs, and 300+ meta-graph features.
Table 1: The proposed METAGL wins on features in comparison to existing graph learning (GL) model selection (MS) paradigms, all of which fail to satisfy some of the desirable properties for GL MS.

Desiderata for GL Model Selection (MS)                   | No model selection | Naive model selection (Fig. 1b) | Graph HPO/NAS (e.g., AutoNE, AGNN; see Sec. 3 and Fig. 1b) | METAGL (Ours, Fig. 1a)
Evaluation-free GL model selection                       | ✓                  | ✗                               | ✗                                                          | ✓
Capable of MS from among multiple GL algorithms          | ✗                  | ✓                               | ✗                                                          | ✓
Capitalizing on graph similarities for GL MS             | ✗                  | ✗                               | ✗                                                          | ✓
Estimating model performance based on past observations  | ✗                  | ✗                               | ✓                                                          | ✓
2 PROBLEM FORMULATION
Given a new unseen graph, our goal is to select the best model from a heterogeneous set of graph learning models, without requiring any model evaluations or user intervention. In comparison to traditional meta-learning problems, where a model denotes a single method and its hyperparameters, a model in the graph meta-learning problem is more broadly defined to be
model M = {(graph embedding method, hyperparameters), (predictor, hyperparameters)}, (1)
as graph learning tasks usually involve two steps: (1) embedding the graph using a graph representation learning method, and (2) providing node embeddings to the predictor of a downstream task like link prediction. Both steps require learning a method with specific hyperparameters. Thus, there can be many models with the same embedding method and predictor, which have different hyperparameters. Also, the set M of models may contain many different graph representation learning methods (e.g., node2vec (Grover & Leskovec, 2016), GraphSAGE (Hamilton et al., 2017), DeepGL (Rossi et al., 2020) to name a few), as well as multiple task-specific predictors, making M heterogeneous. Given a training meta-corpus of n graphs G = {G1, . . . , Gn} and m models M = {M1, . . . , Mm} for GL tasks, we derive a performance matrix P ∈ Rn×m, where Pij is the performance (e.g., accuracy) of model Mj on graph Gi. Our meta-learning problem for evaluation-free GL model selection is defined as follows.
Problem 1 (Evaluation-Free Graph Learning Model Selection).
Given
• an unseen test graph Gtest ∉ G, and
• a potentially sparse performance matrix P ∈ Rn×m of m heterogeneous graph learning models M = {M1, . . . , Mm} on n graphs G = {G1, . . . , Gn},
Select
• the best model M∗ ∈ M to employ on Gtest without evaluating any model in M on Gtest.
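To fix notation, the following minimal Python sketch illustrates the interface that Problem 1 asks for; all names, shapes, and the sparsity level below are our illustrative assumptions, not part of the formal problem.

```python
# A minimal, illustrative sketch of the Problem 1 interface.
import numpy as np

n, m = 300, 400                          # meta-train graphs and candidate models
P = np.random.rand(n, m)                 # P[i, j]: performance of model j on graph i
P[np.random.rand(n, m) < 0.5] = np.nan   # P may be only partially observed

def select_best_model(score_fn, test_graph):
    """Evaluation-free selection: score all m models for the unseen test graph
    using only meta-knowledge (no training/evaluation on test_graph)."""
    scores = score_fn(test_graph)        # (m,) estimated performances
    return int(np.argmax(scores))        # index of the selected model M*
```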
3 RELATED WORK
A majority of works on GL focus on developing new algorithms for certain graph tasks and applications (Xia et al., 2021; Zhang et al., 2022). In comparison, there exist relatively few recent works that address the GL model selection problem (Zhang et al., 2021). They mainly focus on neural architecture search (NAS) and hyperparameter optimization (HPO) for GL models, especially graph neural networks (GNNs). Toward efficient and effective NAS and HPO in GL, they investigated several approaches, such as Bayesian optimization (AutoNE by Tu et al. (2019)), reinforcement learning (GraphNAS by Gao et al. (2020), AGNN by Zhou et al. (2019), Policy-GNN by Lai et al. (2020)), hypernets (ST-GCN by Zhu et al. (2021)), and evolutionary algorithms (Bu et al. (2021)), as well as techniques like subgraph sampling (AutoNE by Tu et al. (2019)), graph coarsening (JITuNE by Guo et al. (2021)), and hierarchical evaluation (HESGA by Yuan et al. (2021)). However, as summarized in Table 1, these methods cannot perform evaluation-free GL model selection (Problem 1), since they need to evaluate multiple configurations of each GL method on the new graph for model selection. Further, they are limited to finding the best configuration of a single algorithm, and thus cannot select a model from a heterogeneous model set M with various GL models, as Problem 1 requires.

An earlier work on GNN design space (You et al., 2020) is somewhat relevant, as it proposes an approach to quantify graph similarities, which can be used to find an observed graph similar to the test graph and select a model that performed best on it. However, their approach evaluates a set of anchor models on all graphs, and computes similarities between two graphs based on the anchor models' performance on them. As it needs to run anchor models on the new graph, it is inapplicable to Problem 1. For the first time, METAGL enables evaluation-free model selection from a heterogeneous set of GL models.
4 FRAMEWORK
In this section, we present METAGL, our meta-learning based framework that solves Problem 1 by leveraging prior performances of existing methods. METAGL consists of two phases: (1) offline meta-training phase (Section 4.1) that trains a meta-learner using observed graphs G and model performances P, and (2) online model prediction phase (Section 4.2), which selects the best model for the new graph. A summary of notations used in this work is provided in Table 3 in the Appendix.
4.1 OFFLINE META-TRAINING
Meta-learning leverages prior experience from related learning tasks to perform better on a new task. When the new task is similar to some historical learning tasks, knowledge from those similar tasks can be transferred and applied to the new task. Thus, effectively capturing the similarity between an input task and observed ones is fundamentally important for successful meta-learning. In meta-learning, the similarity between learning tasks is modeled using meta-features, i.e., characteristic features of the learning task that can be used to quantify task similarity.
Meta-Graph Features. Given the graph learning model selection problem (where new graphs correspond to new learning tasks), METAGL captures graph similarity by extracting meta-graph features that reflect the structural characteristics of a graph. Notably, since graphs have irregular structure, with different numbers of nodes and edges, METAGL designs meta-graph features to be of the same size for any arbitrary graph, so that any two graphs can be directly compared via their meta-graph features. We use the symbol m ∈ Rd to denote the fixed-size meta-graph feature vector of graph G. We defer the details of how METAGL computes m to Section 4.3.
Model Performance Estimation. To estimate how well a model would perform on a given graph, METAGL represents models and graphs in a latent k-dimensional space, and captures the graph-to-model affinity using the dot product similarity between the two representations hGi and hMj of the i-th graph Gi and the j-th model Mj, respectively, such that pij ≈ 〈hGi, hMj〉, where pij is the performance of model Mj on graph Gi. Then, to obtain the latent representation h, we design a learnable function f(·) that takes in relevant information on models and graphs from the meta-graph features m and the prior knowledge (i.e., model performances P and observed graphs G). Below in this section, we focus on the inputs to the function f(·), and defer the details of f(·) to Section 4.4. We first factorize the performance matrix P into latent graph factors U ∈ Rn×k and model factors V ∈ Rm×k, and take the model factor Vj ∈ Rk (the j-th row of V) as the input representation of model Mj. Then, METAGL obtains the latent embedding hMj of model Mj by hMj = f(Vj). For graphs, more information is available, since we have both the meta-graph features m and the meta-train graph factors U. However, while we have the same number of models during training and inference, we observe new graphs during inference, and thus cannot obtain the graph factor Utest for the test graph as for the train graphs, since matrix factorization (MF) is transductive by construction (i.e., existing models' performance on the test graph is needed to get latent factors for the test graph directly via MF). To handle this issue, we learn an estimator φ : Rd → Rk that maps the meta-graph features m into the latent factors of meta-train graphs obtained via MF above (i.e., for graph Gi with m, φ(m) = Ûi ≈ Ui), and use this estimated graph factor. We then combine both inputs ([m;φ(m)] ∈ Rd+k) and apply a linear transformation so that the input representation of graph Gi has the same size as that of model Mj, obtaining the latent embedding of graph Gi as hGi = f(W[m;φ(m)]), where W ∈ Rk×(d+k) is a weight matrix. Thus, in METAGL, the performance pij of model Mj on graph Gi with meta-graph features m is estimated as
pij ≈ p̂ij = 〈f(W[m;φ(m)]), f(Vj)〉. (2)
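To make Eq. (2) concrete, below is a minimal PyTorch sketch of this estimation step. The MLP stand-ins for f(·) and φ(·), and the dimensions chosen, are illustrative assumptions; in METAGL, f(·) is the attentive GNN over the G-M network described in Section 4.4.

```python
# A hedged sketch of Eq. (2): p_hat[i, j] = <f(W[m; phi(m)]), f(V_j)>.
import torch
import torch.nn as nn

d, k = 300, 32                                                       # illustrative sizes
phi = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))   # m -> estimated U_i
W = nn.Linear(d + k, k, bias=False)                                  # weight matrix W
f = nn.Sequential(nn.Linear(k, k), nn.ReLU(), nn.Linear(k, k))       # stand-in for f(.)

def estimate_performance(m_feat, V):
    """m_feat: (d,) meta-graph features of one graph; V: (num_models, k) model factors."""
    h_G = f(W(torch.cat([m_feat, phi(m_feat)])))   # graph embedding h_G
    return f(V) @ h_G                              # p_hat[j] = <h_G, h_Mj> for every model j
```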
Meta-Learning Objective. For tasks where the goal is to estimate real values, such as accuracy, the mean squared error (MSE) is a typical choice for the loss function. While MSE is easy to optimize and
effective for regression, it does not directly take the ranking quality into account. By contrast, in our problem setup, accurately ranking models for each graph dataset is more important than estimating the performance itself, which makes MSE a suboptimal choice. In particular, Problem 1 focuses on finding the model with the best performance on the given graph. Therefore, we consider rank-based learning objectives, and among them, we adapt the top-1 probability to the proposed Problem 1 as follows. Let $\hat{P}_i \in \mathbb{R}^m$ be the $i$-th row of $\hat{\mathbf{P}}$ (i.e., the estimated performance of all $m$ models on graph $G_i$). Given $\hat{P}_i$, the top-1 probability $p^{\hat{P}_i}_{\mathrm{top1}}(j)$ of the $j$-th model $M_j$ in the model set M is defined to be
$$p^{\hat{P}_i}_{\mathrm{top1}}(j) \;=\; \frac{\pi(\hat{p}_{ij})}{\sum_{k=1}^{m} \pi(\hat{p}_{ik})} \;=\; \frac{\exp(\hat{p}_{ij})}{\sum_{k=1}^{m} \exp(\hat{p}_{ik})} \qquad (3)$$
where π(·) is an increasing, strictly positive function, which we define to be an exponential function.
Theorem 1 (Cao et al. (2007)). Given the performance $\hat{P}_i$ of all models on graph $G_i$, $p^{\hat{P}_i}_{\mathrm{top1}}(j)$ represents the probability of model $M_j$ being ranked at the top of the list (i.e., of all models in M). The top-1 probabilities $p^{\hat{P}_i}_{\mathrm{top1}}(j)$ for all $j = 1, \ldots, m$ form a probability distribution over the $m$ models.
Based on Theorem 1, we obtain two probability distributions by applying the top-1 probability to the true performance Pi and the estimated performance P̂i of the m models, and optimize METAGL such that the distance between the two resulting distributions is minimized. Using the cross entropy as the distance metric, we obtain the following loss over all n meta-train graphs G:
$$\mathcal{L}(\mathbf{P}, \hat{\mathbf{P}}) \;=\; -\sum_{i=1}^{n} \sum_{j=1}^{m} p^{P_i}_{\mathrm{top1}}(j) \, \log\!\Big( p^{\hat{P}_i}_{\mathrm{top1}}(j) \Big) \qquad (4)$$
When P is sparse, meta-training can be performed via slightly modified Eqs. (3) and (4) in App. G.2.
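As a concrete reference, here is a minimal PyTorch sketch of the listwise objective in Eqs. (3)-(4), assuming a fully observed P; note that applying a row-wise softmax is exactly the top-1 probability with π = exp. The function name is ours.

```python
# A minimal sketch of the top-1 probability cross-entropy loss, Eqs. (3)-(4).
import torch
import torch.nn.functional as F

def top1_prob_loss(P, P_hat):
    """P, P_hat: (n, m) true and estimated performances of m models on n graphs."""
    target = F.softmax(P, dim=1)              # top-1 probabilities from true P, Eq. (3)
    log_pred = F.log_softmax(P_hat, dim=1)    # log top-1 probabilities from estimates
    return -(target * log_pred).sum()         # cross entropy over graphs and models, Eq. (4)
```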
4.2 ONLINE MODEL PREDICTION
In the meta-training phase, METAGL learns estimators f(·) and φ(·), as well as weight matrix W and latent model factors V. Given a new graph Gtest, METAGL first computes the meta-graph features mtest ∈ Rd as we discuss in Section 4.3. Then mtest is regressed to obtain the (approximate) latent graph factors Ûtest = φ(mtest)∈Rk. Recall that the model factors V learned in the meta-training stage can be directly used for model prediction. Then model Mj’s performance on test graph Gtest can be estimated by applying Equation (2) with mtest and φ(mtest). Finally, the model that has the highest estimated performance is selected by METAGL as the best model M∗, i.e.,
$$M^{*} \;\leftarrow\; \arg\max_{M_j \in \mathcal{M}} \, \Big\langle f\big(\mathbf{W}[\mathbf{m}_{\mathrm{test}};\, \phi(\mathbf{m}_{\mathrm{test}})]\big),\; f(\mathbf{V}_j) \Big\rangle \qquad (5)$$
Note that model selection using Equation (5) depends only on the meta-graph features mtest of the test graph and the pretrained estimators and latent factors that METAGL learned in the meta-training phase. As no model training or evaluation is involved, model prediction by METAGL is much faster than training and evaluating multiple models, as our experiments show in Section 5.4. Further, the model prediction process is fully automatic, as it does not require users to choose or fine-tune any values at test time. Figure 2 shows an overview of the model prediction process, and Algorithm 1 in the Appendix lists the steps for offline meta-training and online model prediction.

4.3 STRUCTURAL META-GRAPH FEATURES
Figure 3: Meta-graph features in METAGL are derived in two steps. See Section 4.3 for details.
Meta-graph features are a crucial component of our meta-learning approach METAGL, since they capture important structural characteristics of an arbitrary graph. Meta-graph features enable METAGL to quantify graph similarities and utilize prior experience with observed graphs for GL model selection. It is important that a sufficient and representative set of meta-graph features is used to capture the important structural properties of graphs from a wide variety of domains, including biological, technological, information, and social networks.
In this work, we cannot leverage the simple statistical meta-features commonly used by previous work on model selection-based meta-learning, as they cannot be computed directly over irregular and complex graph data. To address this problem, we introduce the notion of meta-graph features and develop a general framework for computing them on any arbitrary graph.
Meta-graph features in METAGL are derived in two steps, as shown in Figure 3. First, we apply a set of structural meta-feature extractors Ψ = {ψ1, . . . , ψq} to the input graph G, obtaining Ψ(G) = {ψ1(G), . . . , ψq(G)}. Applying ψ ∈ Ψ to G yields a vector or a distribution of values for the nodes (or edges) in the graph, such as the degree distribution or PageRank scores. That is, in Figure 3, ψ1 can be the degree distribution, ψ2 can be the PageRank scores of all nodes, and so on. Specifically, we use both local and global structural feature extractors. To capture the local structural properties around a node or an edge, we compute the node degree, the number of wedges (i.e., paths of length 2) and triangles centered at each node, and the frequency of triangles for every edge. To capture the global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. Appendix D summarizes the meta-feature extractors used in this work.
Let ψ denote a local structural extractor for nodes. Given a graph Gi = (Vi, Ei) and ψ, we obtain a |Vi|-dimensional node vector ψ(Gi). Since any two graphs Gi and Gj are likely to have different numbers of nodes and edges, the resulting structural feature matrices ψ(Gi) and ψ(Gj) are also likely to be of different sizes, as the rows of these matrices correspond to the nodes or edges of the corresponding graph. Thus, in general, these structural feature-based representations of the graphs cannot be used directly to derive similarity between graphs.
Now, to address this issue, we apply the set Σ of global statistical meta-graph feature extractors to every ψi(G), ∀i = 1, . . . , q, which summarizes each ψi(G) into a fixed-size vector. Specifically, Σ(ψi(G)) applies each of the statistical functions in Σ (e.g., mean, kurtosis) to the distribution ψi(G), each computing a real number that summarizes the given feature distribution ψi(G) from a different statistical point of view, producing a vector Σ(ψi(G)) ∈ R|Σ|. Then we obtain the meta-graph feature vector m of graph G by concatenating the resulting meta-graph feature vectors:
$$\mathbf{m} \;=\; \big[\, \Sigma(\psi_1(G)) \;\cdots\; \Sigma(\psi_q(G)) \,\big] \;\in\; \mathbb{R}^{d}. \qquad (6)$$

Table 5 in Appendix D lists the global statistical functions Σ used in this work to derive meta-graph features. Further, in addition to the node- and edge-level structural features, we also compute global graph statistics (scalars directly derived from the graph, e.g., density and degree assortativity coefficient), and append them to m, i.e., the node- and edge-level structural features obtained above.
Most importantly, given any arbitrary graph G′, the proposed approach is guaranteed to output a fixed d-dimensional meta-graph feature vector characterizing G′. Hence, the structural similarity of any two graphs G and G′ can be quantified using a similarity function over m and m′, respectively.
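To illustrate the two-step pipeline, below is a minimal Python sketch using NetworkX and SciPy (the libraries listed in Appendix F.1); the extractors and statistics shown are a small illustrative subset of those in Appendix D, and the function name is ours.

```python
# A hedged sketch of the meta-graph feature pipeline (Fig. 3): apply structural
# extractors psi, then summarize each distribution with statistical functions Sigma.
import networkx as nx
import numpy as np
from scipy import stats

def meta_graph_features(G: nx.Graph) -> np.ndarray:
    # Step 1: structural meta-feature extractors, each yielding a node-level distribution.
    psi = [
        np.array([d for _, d in G.degree()], dtype=float),       # degree distribution
        np.array(list(nx.pagerank(G).values())),                 # PageRank scores
        np.array(list(nx.core_number(G).values()), dtype=float), # k-core numbers
    ]
    # Step 2: global statistical functions summarize each distribution to a fixed size.
    sigma = [np.mean, np.std, np.min, np.max, np.median, stats.skew, stats.kurtosis]
    m = np.concatenate([[s(x) for s in sigma] for x in psi])
    # Append graph-level scalar statistics (e.g., density, degree assortativity).
    return np.concatenate([m, [nx.density(G), nx.degree_assortativity_coefficient(G)]])

# e.g., meta_graph_features(nx.karate_club_graph()) -> fixed-length vector for any graph
```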
4.4 EMBEDDING MODELS AND GRAPHS
Given an informative context (i.e., input features) of models and graphs that METAGL learns from model performances P and meta-graph features M (Sections 4.1 and 4.3), how can we use it to effectively learn model and graph embeddings that capture graph-to-model affinity? We note that similar entities can make each other’s context more accurate and informative. For instance, in our problem setup, similar models tend to have similar performance distributions over graphs, and likewise similar graphs are likely to exhibit similar affinity to different models. With this consideration, we model the task as a graph representation learning problem, where we construct a graph called G-M network that connects similar graphs and models, and learn the graph and model embeddings over it.
G-M Network. We define the G-M network to be a multi-relational graph with two types of nodes (i.e., models and graphs), where edges connect similar model nodes and graph nodes. To measure similarity among graphs and models, we utilize the latent graph and model factors (U and V, respectively) obtained by factorizing P, as well as the meta-graph features M. More precisely, we use the estimated graph factor Û instead of U to let the same graph construction process work for new graphs. Note that this gives us two types of features for graph nodes (i.e., Û and M), and one type of features for model nodes (i.e., V). To let different features influence the embedding step differently as needed, we connect graph nodes and model nodes using five types of edges: M-g2g, P-g2g, P-m2m, P-g2m, P-m2g, where g and m denote the type of nodes that an edge connects (graph and model, respectively), and M and P denote that the edge is based on meta-graph features and model performance, respectively. For example, M-g2g and P-g2g edges connect two graph nodes that are similar in terms of M and Û, respectively. Then for each edge type, we construct a k-NN graph by connecting nodes to their top-k similar nodes, where node-to-node similarity is defined as the cosine similarity between the
corresponding node features. For instance, for P-g2m edge type, graph nodes and model nodes are linked based on the similarity between Û and V. Fig. 7 in the Appendix illustrates the G-M network.
Learning Over G-M Network. Given the G-M network Gtrain with meta-train graphs and models, graph neural networks (GNNs) provide an effective framework to embed models and graphs via (weighted) neighborhood aggregation over Gtrain. However, since the structure of the G-M network is induced by simple k-NN search, some of the neighbors may not provide the same amount of information as others, or may even provide noisy information. We found it helpful to perform attentive neighborhood aggregation, so that more informative neighbors can be given more weight. To this end, we choose to use attentive GNNs designed for multi-relational networks, and specifically use HGT (Hu et al., 2020). Then the embedding function f(·) in Section 4.1 is defined to be f(x) = HGT(x, Gtrain) during training, which transforms the input node feature x into an embedding via attentive neighborhood aggregation over Gtrain. Further details of HGT are provided in Appendix G.4.

Inference Over G-M Network. For inference at test time, we extend Gtrain to a larger G-M network Gtest that additionally contains test graph nodes, and edges between test graph nodes and existing graphs and models in Gtrain. The extension is done in the same way as in the training phase, by finding top-k similar nodes. Then the embedding at test time can be done by f(x) = HGT(x, Gtest).
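As an illustration, the following sketch shows how a single edge type of the G-M network could be built via top-k cosine-similarity search; the function and variable names are ours, and k = 30 follows the setting in Appendix F.

```python
# A simplified sketch of one G-M network edge type, e.g., P-g2m edges linking
# graph nodes (features U_hat) to model nodes (features V) by cosine similarity.
import numpy as np

def knn_edges(X, Y, k=30):
    """X: (n, d) source features; Y: (m, d) target features. Returns (src, dst) pairs."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Yn.T                              # (n, m) cosine similarities
    nbrs = np.argsort(-sim, axis=1)[:, :k]       # top-k most similar targets per source
    src = np.repeat(np.arange(X.shape[0]), k)
    return src, nbrs.ravel()

# e.g., g2m_src, g2m_dst = knn_edges(U_hat, V)  # the other four edge types (M-g2g,
# P-g2g, P-m2m, P-m2g) are built the same way with the corresponding features.
```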
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Models and Evaluation. A model in our problem (Eqn. 1) consists of two components. The first component performs graph representation learning (GRL), and the other component leverages the learned embeddings for a downstream task of interest. In this work, we focus on link prediction, which is a key task for graph-structured data as we discuss in Section 1. We evaluate the performance of selecting a link prediction model for new graphs without any model evaluation. For the first component, we use 12 popular GRL methods, and for the second component for link scoring, we use a simple estimator that computes the cosine similarity between two node embeddings. This results in a model set M with 423 models. The full list of models is given in Table 4 in Appendix C. For evaluation, we create a testbed containing benchmark graphs, meta-graph features, and a performance matrix. We construct the performance matrix by evaluating each link prediction model in M on the graphs in the testbed, in terms of mean average precision score. Then we evaluate METAGL and baselines via 5-fold cross validation where the benchmark graphs are split into meta-train Gtrain and meta-test Gtest graphs for each fold, and meta-learners trained over the meta-train graph data are evaluated using the meta-test graph datasets. Thus, model performances over the meta-test graphs Gtest and the meta-graph features of Gtest were unseen during training, but used only for testing.
Table 2: The proposed METAGL outperforms existing meta-learners, given a fully observed performance matrix. Best results are in bold, and second best results are underlined. The "METAGL_Baseline" notation (e.g., METAGL_S2) indicates that the baseline meta-learner uses METAGL's meta-graph features.
Method                 | MRR   | AUC   | NDCG@1
Random Selection       | 0.011 | 0.490 | 0.745
(Simple)
Global Best-AvgPerf    | 0.163 | 0.877 | 0.932
Global Best-AvgRank    | 0.103 | 0.867 | 0.930
METAGL_AS              | 0.222 | 0.905 | 0.947
METAGL_ISAC            | 0.202 | 0.887 | 0.939
(Optimization-based)
METAGL_S2              | 0.170 | 0.910 | 0.945
METAGL_ALORS           | 0.190 | 0.897 | 0.950
METAGL_NCF             | 0.140 | 0.869 | 0.934
METAGL_MetaOD          | 0.075 | 0.599 | 0.889
METAGL                 | 0.259 | 0.941 | 0.962
Since model selection aims to accurately predict the best model for a new graph, we evaluate the top-1 prediction performance in terms of MRR (Mean Reciprocal Rank), AUC, and NDCG (Normalized Discounted Cumulative Gain). To apply MRR and AUC, we label models such that the top-1 model (i.e., the model with the best performance for the given graph) is labeled as 1, while all others are labeled as 0. For NDCG, we report NDCG@1, which evaluates the relevance of the top-1 predicted model. All metrics range from 0 to 1, with larger values indicating better performance.
Baselines. Being the first work for evaluation-free model selection in GL, we do not have immediate baselines for comparison. Instead, we adapt baselines used for OD model selection (Zhao et al., 2021) and collaborative filtering for our problem setting. In Appendix A, we describe baselines in detail. Baselines are grouped into two categories: (a) Simple meta-learners select a model that performs generally well, either globally or locally: Global Best (GB)-AvgPerf, GB-AvgRank, ISAC (Kadioglu et al., 2010), and ARGOSMART (AS) (Nikolić et al., 2013); (b) Optimization-based meta-
learners learn to estimate the model performance by modeling the relation between meta features and model performances: Supervised Surrogates (S2) (Xu et al., 2012), ALORS (Misir & Sebag, 2017), NCF (He et al., 2017), and MetaOD (Zhao et al., 2021). We also include Random Selection (RS) as a baseline to see how these methods compare to random scoring.
Note that, except the simplest meta-learners RS and GB, no baselines can handle graph data, and thus they cannot estimate model performance on the new graph on their own. In that sense, they are not direct competitors of METAGL. We enable them to be used for GL model selection by providing the proposed meta-graph features. MetaOD, which was originally designed for OD model selection, is also given the same meta-graph features to perform GL model selection. We denote baselines either by combining METAGL with their names (e.g., METAGL_S2) to clearly show that they use METAGL’s meta-graph features, or using their name alone (e.g., S2) for simplicity.
5.2 MODEL SELECTION ACCURACY
Fully Observed Performance Matrix. In this setup, meta-learners are trained using a full performance matrix P with no missing entries. The model selection accuracy of all meta-learners in this setup is reported in Table 2, where METAGL achieves the best performance in all metrics, with 17% higher MRR than the best baseline (AS).
• Among simple meta-learners, the Global Best meta-learners, which simply average model performance or rank over all observed graphs, are outperformed by the more sophisticated meta-learners AS and ISAC, which leverage dataset similarities for model selection using meta-graph features.
• For optimization-based meta-learners, it is important to be aware of how models and graphs relate to each other, and to have high flexibility to capture that complex relationship. In methods like ALORS and MetaOD, relations between models and datasets (i.e., the relative positions of models and datasets in the embedding subspace) are learned rather indirectly via reconstructing the performance matrix.
• METAGL, in contrast, directly captures graph-to-model affinity by modeling their relations via flexible GNNs over the G-M network, as well as by reconstructing the performance matrix. As a result, METAGL consistently outperforms other optimization-based meta-learners.
Partially Observed Performance Matrix. In this setup, meta-learners are trained using a sparse performance matrix P, obtained by randomly masking out a full P. Figure 4 reports results obtained with varying sparsity, ranging up to 0.9. In this more challenging setup, METAGL consistently performs the best across all levels of sparsity, achieving up to 47% higher MRR than the best baseline.
• With increased sparsity, nearly all meta-learners perform increasingly worse, as one might expect.
• While AS was the best baseline given a full P, its accuracy decreased rapidly as sparsity increased. Since AS selects a model based on the 1NN meta-train graph, it is highly sensitive to P's sparsity.
• Baselines such as the Global Best baselines are more stable, as they average across multiple graphs.
• Optimization-based methods like METAGL and S2 perform favorably to simple meta-learners in this more challenging sparse setting.
5.3 EFFECTIVENESS OF META-GRAPH FEATURES
In Figure 5, we evaluate how the performance of meta-learners obtained with the proposed meta-graph feature (Section 4.3) compares to that obtained with existing graph-level embedding (GLE) techniques, GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), and GraphLoG (Xu et al., 2021).
Figure 8 in App. I.1 provides results for three other GLE methods, WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019).
• Most points are below the diagonal in Figure 5, i.e., all meta-learners nearly consistently perform better when they use METAGL's meta-graph features than when they use existing GLE methods. This shows the effectiveness of METAGL's features for the proposed task of GL model selection.
• METAGL performs the best, whether METAGL's features or existing GLE methods are used.
5.4 MODEL SELECTION EFFICIENCY
To evaluate how efficient METAGL's model selection is, we measure its runtime (i.e., the time to create meta-graph features for the new graph at test time, plus the time to predict the best model), and compare it with the time to train a GL model. Figure 6 shows the distribution in box plots, where red and green lines denote the median and mean, respectively.
Results show that METAGL is fast and incurs negligible runtime overhead: its runtime is around one second or less in most cases (Figure 6a). Notably, compared to training each GL model for only 5% of its available model configurations, METAGL takes considerably less time, i.e., a median of 5% and a mean of 11% of the time required for model training (Figure 6b). Given large-scale test graphs in practice, the speed-up enabled by METAGL will be greater than that reported in Figure 6b, due to the increased training time on such graphs. Also, METAGL's model selection process can be further streamlined, e.g., by parallelizing the meta-feature generation process. We provide additional results on the runtime of METAGL and naive model selection in Appendix I.2.
5.5 ADDITIONAL RESULTS
We present an ablation study in Appendix I.3, which shows the effectiveness of METAGL's proposed components, e.g., the meta-learning objective, the G-M network, and the graph encoder used by METAGL. We evaluate the sensitivity of model selection approaches to the variance of the performance matrix P in Appendix I.4, and compare the predicted model performance with the actual best performance in Appendix I.5.
6 CONCLUSION
As more and more GL models are developed, selecting which one to use is becoming increasingly hard. Toward near-instantaneous, automatic GL model selection, we make the following contributions.
• Problem Formulation. We present the first problem formulation to select effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph).
• Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection, and meta-graph features to quantify graph similarities.
• Effectiveness. Using METAGL for model selection greatly outperforms existing meta-learning techniques (up to 47% better), while incurring negligible runtime overhead at test time (∼1 sec).
A BASELINES
Being the first work for evaluation-free model selection in graph learning (GL), we do not have immediate baselines for comparison. Instead, we adapt baselines used in MetaOD (Zhao et al., 2021) for outlier detection (OD) model selection as well as collaborative filtering for our problem setting. The baselines used in experiments can be organized into the following two categories.
(a) Simple meta-learners select a model that performs generally well, either globally or locally.
• Global Best (GB)-AvgPerf selects the model that has the largest average performance over all meta-train graphs.
• Global Best (GB)-AvgRank computes the rank of all models (in percentile) for each graph, and selects the model with the largest average ranking over all meta-train graphs.
• ISAC (Kadioglu et al., 2010) first clusters meta-train datasets using meta-graph features, and at test time, finds the cluster closest to the test graph, and selects the model with the largest average performance over all graphs in that cluster.
• ARGOSMART (AS) (Nikolić et al., 2013) finds the meta-train graph closest to the test graph (i.e., 1NN) in terms of meta-graph feature similarity, and selects the model with the best result on the 1NN graph.
(b) Optimization-based meta-learners learn to estimate the model performance by modeling the relation between meta-graph features and model performances.
• Supervised Surrogates (S2) (Xu et al., 2012) learns a surrogate model (a regressor) that maps meta-graph features to model performances.
• ALORS (Misir & Sebag, 2017) factorizes the performance matrix into latent factors on graphs and models, and estimates the performance to be the dot product between the two factors, where a non-linear regressor maps meta-graph features into the latent graph factors.
• NCF (He et al., 2017) replaces the dot product used in ALORS with a more general neural architecture that estimates performance by combining the linearity of matrix factorization and non-linearity of deep neural networks.
• MetaOD (Zhao et al., 2021) pioneered the field of unsupervised OD model selection by designing meta-features specialized to capture the outlying characteristics of datasets, as well as improving upon ALORS with the adoption of an NDCG-based meta-training objective. We enable MetaOD to be applicable to our problem setting, by applying our proposed meta-graph features to MetaOD.
In addition, we also include Random Selection (RS) as a baseline, to see how meta-learners perform in comparison to random scoring. Note that among the above approaches, only GB-AvgPerf and GB-AvgRank do not rely on meta-features for model selection. All other meta-learners make use of the proposed meta-graph features to estimate model performances on an unseen test graph.
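For concreteness, below are minimal sketches of the two simplest strategies described above (Global Best-AvgPerf and the 1NN-based ARGOSMART), assuming a fully observed P; the function names are illustrative.

```python
# Hedged sketches of two simple meta-learners for GL model selection.
import numpy as np

def global_best_avgperf(P):
    """Select the model with the largest average performance over all meta-train graphs."""
    return int(np.argmax(P.mean(axis=0)))

def argosmart_1nn(P, train_feats, test_feat):
    """Select the best model on the meta-train graph whose meta-graph features are
    most similar (cosine similarity) to those of the test graph."""
    sims = train_feats @ test_feat / (
        np.linalg.norm(train_feats, axis=1) * np.linalg.norm(test_feat) + 1e-12)
    return int(np.argmax(P[np.argmax(sims)]))
```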
B NOTATIONS
Table 3 provides a list of notations frequently used in this work.
C MODEL SET
A model in the model set M refers to a graph representation learning (GRL) method along with its hyperparameters settings, and a predictor that makes a downstream task-specific prediction given the node embeddings from the GRL method. In this work, we use a link predictor which scores a given link by computing the cosine similarity between the two nodes’ embeddings. Table 4 shows the
Table 4: Graph representation learning methods and hyperparameter settings in the model set M.

Methods                            | Hyperparameter Settings                                                                                                                                                                                              | Count
SGC (Wu et al., 2019a)             | # (number of) hops k ∈ {1, 2, 3}                                                                                                                                                                                     | 3
GCN (Kipf & Welling, 2017)         | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10}                                                                                                                                                                         | 6
GraphSAGE (Hamilton et al., 2017)  | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10}, aggregation functions f ∈ {mean, gcn, lstm}                                                                                                                            | 18
node2vec (Grover & Leskovec, 2016) | p, q ∈ {1, 2, 4}                                                                                                                                                                                                     | 9
role2vec (Ahmed et al., 2018)      | p, q ∈ {0.25, 1, 4}, α ∈ {0.01, 0.1, 0.5, 0.9, 0.99}, motif combinations H ∈ {{H1}, {H2, H3}, {H2, H3, H4, H6, H8}, {H1, H2, . . . , H8}}                                                                            | 180
GraRep (Cao et al., 2015)          | k ∈ {1, 2}                                                                                                                                                                                                           | 2
DeepWalk (Perozzi et al., 2014)    | p = 1, q = 1                                                                                                                                                                                                         | 1
HONE (Rossi et al., 2018)          | k ∈ {1, 2}, Dlocal ∈ {4, 8, 16}, variant v ∈ {1, 2, 3, 4, 5}                                                                                                                                                         | 30
node2bits (Jin et al., 2019)       | walk num wn ∈ {5, 10, 20}, walk len wl ∈ {5, 10, 20}, log base b ∈ {2, 4, 8, 10}, feats f ∈ {16}                                                                                                                     | 36
DeepGL (Rossi et al., 2020)        | α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, motif size ∈ {4}, eps tolerance t ∈ {0.01, 0.05, 0.1}, relational aggr. ∈ {{m}, {p}, {s}, {v}, {m, p}, {m, v}, {s, m}, {s, p}, {s, v}}, where m, p, s, v denote mean, product, sum, var | 135
LINE (Tang et al., 2015)           | # hops/order k ∈ {1, 2}                                                                                                                                                                                              | 2
Spectral Emb. (Luo et al., 2003)   | tolerance t ∈ {0.001}                                                                                                                                                                                                | 1
Total Count                        |                                                                                                                                                                                                                      | 423
complete list of the 12 popular GRL methods and their specific hyperparameter settings, which compose the 423 unique models in the model set M. Note that the link predictor is omitted from Table 4, since we employ the same cosine similarity-based link predictor for all GRL methods.
D META-GRAPH FEATURES
Structural Meta-Feature Extractors. To capture the local structural properties around a node or an edge, we compute the distribution of node degrees, number of wedges (i.e., a path of length 2), triangles centered at each node, as well as the frequency of triangles for each edge. To capture the global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. We also capture the global graph-level statistics (i.e., different from local
node/edge-level structural properties), such as the density of A and AAᵀ, where A is the adjacency matrix, as well as the degree assortativity coefficient r.
Global Statistical Functions. For each of the structural property distributions (degree, k-core numbers, and so on) derived by the above structural meta-feature extractors, we apply the set Σ of global statistical functions (Table 5) over it to obtain a fixed-length vector representation for the node/edge/graph-level structural feature distribution.
After obtaining a set of meta-graph features, we concatenate all of them together to create the final meta-graph feature vector m for the graph.
E GRAPH DATASETS
The testbed used in this work comprises 301 graphs that have widely varying structural characteristics. Table 6 provides a list of all graph datasets in the testbed. All graph data are from (Rossi & Ahmed, 2015); they are publicly available under the Creative Commons Attribution-ShareAlike License.
Graph # Nodes # Edges Graph # Nodes # Edges
66 CL-1000-1d8-trial3 925 3714 217 nci1_g3711 89 106 67 CL-1000-1d9-trial1 928 3510 218 nci1_g3901 93 102 68 CL-1000-1d9-trial2 912 3053 219 nci1_g3990 90 105 69 CL-1000-1d9-trial3 932 3278 220 nci1_g4094 90 98 70 CL-1000-2d0-trial1 909 2795 221 power-1138-bus 1138 2596 71 CL-1000-2d0-trial2 899 2941 222 power-494-bus 494 1080 72 CL-1000-2d0-trial3 916 3010 223 power-662-bus 662 1568 73 CL-1000-2d1-trial1 903 2430 224 power-685-bus 685 1967 74 CL-1000-2d1-trial2 911 2734 225 power-bcspwr09 1723 4117 75 CL-1000-2d1-trial3 915 2782 226 power-eris1176 1176 9864 76 DD_g1 327 899 227 rec-amazon 91813 125704 77 DD_g10 146 328 228 rec-movielens-tag-movies-10m 16528 71081 78 DD_g100 349 1005 229 road-chesapeake 39 170 79 DD_g1000 183 408 230 road-ChicagoRegional 1467 1298 80 DD_g1001 88 203 231 road-euroroad 1174 1417 81 DD_g1002 104 255 232 road-luxembourg-osm 114599 119666 82 DD_g1003 53 116 233 road-minnesota 2642 3303 83 DD_g1004 94 230 234 road-usroads-48 126146 161950 84 DD_g1005 370 903 235 rt-retweet 96 117 85 DD_g1006 246 568 236 rt-twitter-copen 761 1029 86 DD_g1007 309 732 237 rt_alwefaq 4171 7063 87 DD_g1008 109 304 238 rt_assad 2139 2788 88 DD_g1009 129 272 239 rt_bahrain 4676 7979 89 DD_g101 306 728 240 rt_barackobama 9631 9775 90 DD_g1010 157 363 241 rt_damascus 3052 3869 91 DD_g1011 47 136 242 rt_dash 6288 7436 92 DD_g1012 146 365 243 rt_gmanews 8373 8721 93 DD_g1013 93 211 244 rt_gop 4687 5529 94 DD_g1014 119 273 245 rt_http 8917 10314 95 DD_g1015 102 244 246 rt_islam 4497 4616 96 DD_g1016 113 291 247 rt_israel 3698 4165 97 DD_g1017 162 376 248 rt_lebanon 3961 4436 98 DD_g1018 296 680 249 rt_libya 5067 5541 99 DD_g1019 131 353 250 rt_lolgop 9765 10075 100 DD_g102 561 1422 251 rt_obama 3212 3423 101 DD_g1020 228 541 252 rt_occupy 3225 3944 102 DD_g1021 329 787 253 rt_occupywallstnyc 3609 3833 103 DD_g1022 294 730 254 rt_oman 4904 6230 104 DD_g1023 172 425 255 rt_onedirection 7987 8103 105 DD_g1024 59 160 256 rt_p2 4902 6018 106 DD_g1025 88 205 257 rt_saudi 7252 8061 107 DD_g1026 247 578 258 rt_tcot 4547 5503 108 DD_g1027 108 223 259 rt_tlot 3665 4475 109 DD_g1028 72 137 260 rt_uae 5248 6387 110 DD_g1029 99 215 261 rt_voteonedirection 2280 2464 111 DD_g103 265 647 262 sc-nasasrb 54870 1311227 112 DD_g1030 136 351 263 soc-advogato 6551 43427 113 DD_g104 372 999 264 soc-dolphins 62 159 114 DD_g105 423 1192 265 soc-firm-hi-tech 33 91 115 DD_g106 574 1355 266 soc-gplus 23628 39194 116 DD_g107 130 292 267 soc-hamsterster 2426 16630 117 DD_g108 483 1137 268 soc-highschool-moreno 70 274 118 DD_g109 132 315 269 soc-physicians 241 923 119 DD_g11 312 761 270 soc-sign-bitcoinalpha 3783 14124 120 DD_g110 394 1137 271 soc-student-coop 185 311 121 DD_g111 483 1520 272 soc-wiki-Vote 889 2914 122 DD_g112 266 631 273 socfb-Amherst 2235 90954 123 DD_g113 347 853 274 socfb-Bowdoin47 2252 84387 124 DD_g114 334 761 275 socfb-Caltech 769 16656 125 DD_g115 336 946 276 socfb-Hamilton46 2314 96394 126 eco-everglades 69 885 277 socfb-Haverford76 1446 59589 127 eco-florida 128 2075 278 socfb-nips-ego 2888 2981 128 eco-foodweb-baydry 128 2106 279 socfb-Oberlin44 2920 89912 129 eco-foodweb-baywet 128 2075 280 socfb-Reed98 962 18812 130 eco-mangwet 97 1446 281 socfb-Simmons81 1518 32988 131 eco-stmarks 54 353 282 socfb-Smith60 2970 97133 132 email-dnc-corecipient 906 10429 283 socfb-Swarthmore42 1659 61050 133 email-dnc-leak 1891 4465 284 socfb-Trinity100 2613 111996 134 email-enron-only 143 623 285 socfb-USFCA72 2682 65252 135 email-EU 32430 54397 286 socfb-Vassar85 3068 119161 136 email-radoslaw 167 3251 287 
socfb-Villanova62 7772 314989 137 email-univ 1133 5451 288 socfb-Wellesley22 2970 94899 138 enzymes_g103 59 115 289 socfb-Williams40 2790 112986 139 enzymes_g118 95 121 290 tech-routers-rf 2113 6632
140 enzymes_g123 90 127 291 tech-routers-rf 2113 6632 141 enzymes_g199 62 108 292 web-BerkStan 12305 19500 142 enzymes_g204 57 105 293 web-edu 3031 6474 143 enzymes_g209 57 101 294 web-EPA 4271 8909 144 enzymes_g215 48 104 295 web-google 1299 2773 145 enzymes_g224 54 105 296 web-indochina-2004 11358 47606 146 enzymes_g279 60 107 297 web-polblogs 643 2280 147 enzymes_g291 62 104 298 web-spam 4767 37375 148 enzymes_g292 60 100 299 web-webbase-2001 16062 25593 149 enzymes_g293 96 109 300 web-wiki-chameleon 2277 31421 150 enzymes_g295 123 139 301 web-wiki-crocodile 11631 170918 151 enzymes_g296 125 141
F EXPERIMENTAL DETAILS
F.1 EXPERIMENTAL SETTINGS
Software. We used PyTorch1 for implementing the training and inference pipeline, and used DGL's implementation of HGT2. For MetaOD (Zhao et al., 2021), we used the implementation provided by the authors3. We used the Karate Club library (Rozemberczki et al., 2020) for the implementations of the following graph-level embedding (GLE) methods: Graph2Vec (Narayanan et al., 2017), GL2Vec (Chen & Koga, 2019), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). For GraphLoG (Xu et al., 2021), we used the authors' implementation4. We used open source libraries, such as NetworkX5 and NumPy6, for implementing the meta-graph feature extractors.
Hyperparameters. We set the embedding size k to 32 for METAGL and other meta-learners that learn embeddings of models and graphs. For METAGL, we created the G-M network by connecting nodes to their top-30 similar nodes. As the embedding function f(·) in METAGL, we used HGT (Hu et al., 2020) with 2 layers and 4 heads per layer. HGT is included in the Deep Graph Library (DGL), which is licensed under the Apache License 2.0. For training, we used the Adam optimizer with a learning rate of 0.00075 and a weight decay of 0.0001. For GLE approaches, we used the default hyperparameter settings specified in the corresponding library and GitHub repository.
Link Prediction Model Training. Given a graph G, we first hold out 10% of the edges in graph G to be used for evaluation, and train GL models with the resulting subgraph for link prediction. The training of GL models was performed by sampling 20 negative edges per positive edge, computing the link score by applying a dot product between the two corresponding node embeddings, followed by a sigmoid function, and then optimizing a binary cross entropy loss for the positive and negative edge scores. For evaluation, we randomly sampled the same number of negative edges as the positive edges, and evaluated the predicted link scores in terms of mean average precision.
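The following is a hedged PyTorch sketch of this training objective (dot-product link scores with binary cross entropy); the function name and tensor shapes are illustrative, and the negative edges are assumed to be pre-sampled at the 20:1 ratio stated above.

```python
# A minimal sketch of the link prediction loss used to train GL models.
import torch
import torch.nn.functional as F

def link_pred_loss(Z, pos_edges, neg_edges):
    """Z: (num_nodes, k) node embeddings; *_edges: (num_edges, 2) node-index pairs."""
    pos = (Z[pos_edges[:, 0]] * Z[pos_edges[:, 1]]).sum(dim=1)  # dot-product link scores
    neg = (Z[neg_edges[:, 0]] * Z[neg_edges[:, 1]]).sum(dim=1)
    scores = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(scores, labels)   # sigmoid + BCE
```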
F.2 EVALUATION OF MODEL SELECTION ACCURACY (SECTION 5.2)
In our evaluation involving a partially observed performance matrix, we extended baseline meta-learners as follows, so that they can operate in the presence of missing entries in the performance matrix.
• Global Best-AvgPerf averaged the observed performance entries alone, ignoring missing values. If an average performance cannot be computed for some model (which is the case when a model has no observed performance entries on any graph), we use the mean of the averaged performances of the other models in its place.
1https://pytorch.org/ 2https://www.dgl.ai/ 3https://github.com/yzhao062/MetaOD 4https://github.com/DeepGraphLearning/GraphLoG 5https://networkx.org/ 6https://numpy.org/
• Global Best-AvgRank computed the model rankings for each graph in percentile, as the number of observed model performances may differ across graphs, and averaged the rank percentiles over observed cases only, as in Global Best-AvgPerf.
• ISAC handled the sparse performance matrix in the same way as Global Best-AvgPerf, except that only a subset of graphs similar to the test graph is considered in ISAC.
• ARGOSMART (AS) computed the mean of the observed performance entries of the 1NN graph, and used this quantity in place of missing values.
• ALORS factorized the sparse performance matrix using a missing value-aware non-negative matrix factorization algorithm.
• Supervised Sur. (S2), NCF, and MetaOD performed optimization by only considering observed performance values in the loss function, while skipping over missing entries. Early stopping based on the validation performance was also done with respect to the observed performances alone.
F.3 EVALUATION OF META-GRAPH FEATURES (SECTION 5.3)
Except for WaveletCharacteristic and GraphLoG, we applied graph-level embedding (GLE) approaches to all graphs in our testbed, and meta-learners were trained and evaluated using the representations of all graphs via 5-fold cross validation. Since WaveletCharacteristic and GraphLoG could not scale up to some of the largest graphs in the testbed (e.g., due to out-of-memory errors), we excluded the 9% and 27% largest graphs for GraphLoG and WaveletCharacteristic, respectively, and evaluated meta-learners using the resulting subset of graphs. Note that, in these cases, METAGL was also trained and evaluated using the same subset of graphs.
G ADDITIONAL DETAILS AND ANALYSIS OF METAGL
G.1 METAGL ALGORITHM AND META-GRAPH FEATURES
Algorithm 1 provides the detailed steps of METAGL, for both offline meta-training (top) and online model selection (bottom). In METAGL, we log-transform the meta-graph features and append the log-transformed values to the original features, as this helps with model selection. We use the notation m in Algorithm 1, as well as in the text, to refer to these meta-graph features used by METAGL.
G.2 META-LEARNING OBJECTIVE FOR SPARSE PERFORMANCE MATRIX
Given a sparse performance matrix P, meta-training of METAGL can be performed by modifying the top-1 probability (Equation (3)) and the loss function (Equation (4)), such that the missing entries in P are ignored as follows:
$$p^{\hat{P}_i}_{\mathrm{top1}}(j) \;=\; \frac{\mathbb{I}_{p_{ij}}\big(\pi(\hat{p}_{ij})\big)}{\sum_{k=1}^{m} \mathbb{I}_{p_{ik}}\big(\pi(\hat{p}_{ik})\big)} \;=\; \frac{\mathbb{I}_{p_{ij}}\big(\exp(\hat{p}_{ij})\big)}{\sum_{k=1}^{m} \mathbb{I}_{p_{ik}}\big(\exp(\hat{p}_{ik})\big)}, \qquad (7)$$
$$\mathcal{L}(\mathbf{P}, \hat{\mathbf{P}}) \;=\; -\sum_{i=1}^{n} \sum_{j=1}^{m} \mathbb{I}_{p_{ij}}\Big( p^{P_i}_{\mathrm{top1}}(j) \, \log\big( p^{\hat{P}_i}_{\mathrm{top1}}(j) \big) \Big). \qquad (8)$$
where $\mathbb{I}_{p_{ij}}(\cdot)$ is defined as
$$\mathbb{I}_{p_{ij}}(x) \;=\; \begin{cases} x & \text{if } p_{ij} \text{ exists in the observed performance matrix } \mathbf{P}, \\ 0 & \text{if } p_{ij} \text{ is missing in the observed performance matrix } \mathbf{P}. \end{cases} \qquad (9)$$
Thus the supervision signal for each graph comes only from the model performances observed on it. If an entire row in P is empty, the loss terms for the corresponding graph are dropped from Equation (8).
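For reference, below is a minimal PyTorch sketch of this masked objective, representing missing entries of P as NaN; filling unobserved entries with negative infinity before the softmax is one possible way (our assumption) to implement the indicator function of Eq. (9).

```python
# A minimal sketch of the masked top-1 probability loss in Eqs. (7)-(8).
import torch

def sparse_top1_loss(P, P_hat):
    """P: (n, m) true performances with NaN for unobserved entries; P_hat: estimates."""
    mask = ~torch.isnan(P)
    neg_inf = torch.full_like(P_hat, float('-inf'))
    target = torch.softmax(torch.where(mask, P, neg_inf), dim=1)        # Eq. (7) on P
    log_pred = torch.log_softmax(torch.where(mask, P_hat, neg_inf), dim=1)
    log_pred = torch.where(mask, log_pred, torch.zeros_like(log_pred))  # drop missing terms
    row_has_obs = mask.any(dim=1)          # graphs with no observations contribute nothing
    return -(target * log_pred)[row_has_obs].sum()
```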
G.3 G-M NETWORK
Figure 7 illustrates the G-M network (graph-model network) (Section 4.4), which is a multi-relational bipartite network between graph nodes and model nodes. In the G-M network, model and graph nodes are connected via five types of edges (e.g., P-m2m, P-g2m, M-g2g), which are shown as edges with distinct line styles and colors. Note that while Figure 7 shows only one edge per edge type, in the G-M network, each node is connected to its top-k similar nodes.
G.4 ATTENTIVE GRAPH NEURAL NETWORKS AND HETEROGENEOUS GRAPH TRANSFORMER
The embedding function f(·) in METAGL (Section 4.4) produces embeddings of models and graphs via weighted neighborhood aggregation over the multi-relational G-M network. Specifically, we define f(·) using the Heterogeneous Graph Transformer (HGT) (Hu et al., 2020), which is a relation-aware graph neural network (GNN) that performs attentive neighborhood aggregation over the G-M network. Let $z_t^{\ell}$ denote node t's embedding produced by the ℓ-th HGT layer, which becomes the
Algorithm 1: METAGL: Offline Meta-Training (Top) and Online Model Selection (Bottom)
Input: Meta-train graph database G, model set M, embedding dimension k
Output: Meta-learner for model selection
/* (Offline) Meta-Learner Training (Sec. 4.1) */
1: Train & evaluate models in M on graphs in G to get performance matrix P
2: Extract meta-graph features M for each graph Gi in G (Sec. 4.3)
3: Factorize P to obtain latent graph factors U and model factors V, i.e., P ≈ UVᵀ
4: Learn an estimator φ(·) such that φ(m) = Ûi ≈ Ui
5: Create meta-train graph Gtrain (Sec. 4.4)
6: while not converged do
7:   for i = 1, . . . , n do
8:     Get embedding f(W[m; φ(m)]) of train graph Gi on Gtrain
9:     for j = 1, . . . , m do
10:      Get embedding f(Vj) of each model Mj on Gtrain
11:      Estimate p̂ij = 〈f(W[m; φ(m)]), f(Vj)〉 (Eqn. 2)
12:    end for
13:  end for
14:  Compute meta-training loss L(P, P̂) (Eqn. 4) and optimize parameters
15: end while

Input: new graph Gtest
Output: selected model M∗ for Gtest
/* (Online) Model Selection (Sec. 4.2) */
16: Extract meta-graph features mtest = ψ(Gtest)
17: Estimate latent factor Ûtest = φ(mtest) for test graph Gtest
18: Create the test G-M network Gtest by extending Gtrain with new edges between the test graph node and existing nodes in Gtrain (Sec. 4.4)
19: Get embedding f(W[mtest; Ûtest]) of the test graph on Gtest
20: Get embedding f(Vj) of each model Mj on Gtest
21: Return the best model M∗ ← arg max_{Mj∈M} 〈f(W[mtest; Ûtest]), f(Vj)〉
input of the (ℓ+1)-th layer. Given L total layers, the final embedding $h_t$ of node t is the output of the last layer, i.e., $z_t^{L}$. In general, the node embedding $z_t^{\ell}$ produced by the ℓ-th layer of an attention-based GNN, such as HGT, can be expressed as:
$$z_t^{\ell} \;=\; \underset{\forall s \in N(t),\, \forall e \in E(s,t)}{\mathrm{Aggregate}} \Big( \mathrm{Attention}(s, t) \cdot \mathrm{Message}(s) \Big) \qquad (10)$$
where s and t are source and target nodes, respectively; N(t) denotes all the source nodes of node t; and E(s, t) denotes all edges from node s to t. There are three basic operators: Attention, which assigns different weights to neighbors based on the estimated importance of node s with respect to target node t; Message, which extracts the message vector from the source node s; and Aggregate, which aggregates the neighborhood messages by the attention weight.
HGT effectively processes multi-relational graphs, such as the proposed G-M network, by designing all three of the above operators to be aware of node types and edge types, e.g., by employing a distinct set of projection weights for each type of node and edge, and by utilizing node- and edge-type dependent attention mechanisms. We refer the reader to (Hu et al., 2020) for the details of how HGT defines the above three operators. In summary, METAGL computes the embedding function $f(x_t)$ by providing node t's input features $x_t$ as the initial embedding (i.e., $z_t^{0}$) to HGT, and returning $z_t^{L}$, the output from the last layer, which is computed over the G-M network via relation-aware attentive neighborhood aggregation.
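To make the pattern in Eq. (10) concrete, below is a simplified, single-relation PyTorch sketch of attentive neighborhood aggregation; it deliberately omits HGT's node- and edge-type-specific parameters and multi-head attention, and all names are ours.

```python
# A simplified sketch of the Attention / Message / Aggregate pattern of Eq. (10).
import torch
import torch.nn as nn

class SimpleAttnLayer(nn.Module):
    def __init__(self, k):
        super().__init__()
        self.q, self.k_, self.v = (nn.Linear(k, k) for _ in range(3))

    def forward(self, z, edges):
        """z: (num_nodes, k) embeddings; edges: (num_edges, 2) as (src s, dst t) pairs."""
        s, t = edges[:, 0], edges[:, 1]
        att = (self.q(z[t]) * self.k_(z[s])).sum(dim=1)      # Attention(s, t) logits
        att = torch.exp(att - att.max())                     # unnormalized weights
        msg = self.v(z[s]) * att.unsqueeze(1)                # Attention * Message(s)
        out = torch.zeros_like(z).index_add_(0, t, msg)      # Aggregate over neighbors
        norm = torch.zeros(z.shape[0]).index_add_(0, t, att) # per-node softmax denominator
        return out / norm.clamp(min=1e-12).unsqueeze(1)
```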
G.5 TIME COMPLEXITY ANALYSIS
We now state the time complexity of our approach for inferring the best model given a new unseen graph G′ = (V′, E′). Let G = (V, E) be the G-M network, which comprises model nodes and graph nodes and is induced by c-NN (nearest neighbor) search (Section 4.4 and Appendix G.3). Let k denote the embedding size, and h the number of attention heads in HGT (Appendix G.4). The time complexity of METAGL is
$$O\big(q\,|E'|\,\Delta \;+\; |\mathcal{V}|\,c\,k^{2}/h\big) \qquad (11)$$
where q is the number of meta-graph feature extractors, and |E′| is the number of edges in the new unseen graph G′. Note that both q and ∆ are small and thus negligible. Hence, METAGL is fast and efficient.
Meta-Graph Feature Extraction: The first term of the above time complexity includes the time required to estimate the frequency of all network motifs with {2, 3, 4}-nodes, which is O(|E′|∆) in the worst case where ∆ is a small constant representing the maximum sampled degree which can be set by the user. See Ahmed et al. (2016) for further details. The other structural meta-feature extractors such as PageRank all take at most O(|E′|) time. Furthermore, our approach is flexible and supports any set of meta-graph feature extractors. Thus, it is straightforward to see that we can achieve a slightly better time by restricting the set of such meta-graph feature extractors to those that can be computed in time that is linear in the number of edges of any arbitrary graph. Hence, in this case, the ∆ term is dropped and we have simply O(q|E′|+ |V|ck2/h). Also, note that feature extractors are independent of each other, and thus can be run in parallel.
Embedding Models and Graphs: To augment the G-M network G given a new graph G′, we find a fixed number of nearest neighbors for G′, which takes O(|V|k) time. Then we embed models and graphs by applying HGT over the G-M network G = (V, E). Assuming an HGT with h attention heads, the time to employ HGT over G is O(|V|k2 + |E|k2/h). More specifically, the time taken for Attention(·) and Message(·) functions is O(|V|k2 + |E|k2/h), where O(|V|k2) is for feature transformation by h heads for all nodes, and O(|E|k2/h) is for message transformation/attention computation over each edge. Similarly, Aggregate(·) step takes O(|V|k2 + |E|k) time. Assuming k2/h > k, the time for Aggregate(·) can be absorbed into the time for other steps. Further, given that the G-M network is induced by c-NN search, we have that O(|E|) = O(c|V|), and thus the time complexity for embedding models and graphs is O(|V|ck2/h).
H ADDITIONAL RELATED WORK
H.1 MODEL SELECTION IN MACHINE LEARNING
In this section, we provide a further review of model selection in machine learning, which we group into two categories.
Evaluation-Based Model Selection: A majority of model selection methods belong to this category. Representative techniques used by these methods include grid search (Liashchynskyi & Liashchynskyi, 2019), random search (Bergstra & Bengio, 2012), early stopping-based (Golovin et al., 2017) and bandit-based (Li et al., 2017b) approaches, and Bayesian optimization (BO) (Snoek et al., 2012; Wu et al., 2019b; Falkner et al., 2018). Among them, BO methods are more efficient than grid or random search, requiring fewer evaluations of hyperparameter configurations (HCs), as they determine which HC to try next in a guided manner using prior experience from previous trials. Since these methods perform model training or evaluation multiple times using different HCs, they are much less efficient than the following group of methods.
Evaluation-Free Model Selection: Methods in this category do not require model evaluation for model selection. A simple approach (Abdulrahman et al., 2018) identifies the best model by considering the models’ rankings observed on prior datasets. Instead of finding the globally best model, ISAC (Kadioglu et al., 2010) and AS (Nikolić et al., 2013) select a model that performed well on similar datasets, where the dataset similarity is modeled in the meta-feature space via clustering (Kadioglu et al., 2010) or k-nearest neighbor search (Nikolić et al., 2013). A different group of methods perform optimization-based model selection, where the model performance is estimated by modeling the relation between meta-features and model performances. Supervised Surrogates (Xu et al., 2012) learns a surrogate model that maps meta-features to model performance. Recently, MetaOD (Zhao et al., 2021) outperformed all of these methods in selecting outlier detection algorithms. As a method in this category, the proposed METAGL builds upon MetaOD and extends it for an effective and automatic GL model selection. Most importantly, METAGL selects a graph learning model (e.g., link predictor) for the given graph, while MetaOD selects an outlier detection (OD) model for the given dataset (n-dimensional input features). To this end, METAGL designs meta features to capture the characteristics of graphs, while MetaOD designs meta features specialized for OD tasks. Also, they adopt different meta-training objectives: METAGL adapts the top-1 probability for meta-training, whereas MetaOD uses an NDCG-based objective. Furthermore, METAGL learns the embeddings of models and graphs by applying a heterogeneous GNN over the G-M network, which allows a flexible modeling of the relations between different models and graphs. By contrast, in MetaOD, the embeddings of models and datasets are optimized separately, where the relations between models and datasets are modeled rather indirectly via reconstructing the performance matrix. Note that all of these earlier methods, except the first simple approach, rely on meta-features, and they focus on non-graph datasets. By using the proposed meta-graph features, they could be applied to the graph learning model selection task.
H.2 COMPARISON WITH MODEL-AGNOSTIC META-LEARNING (MAML) (FINN ET AL., 2017)
MAML employs meta-learning to train a model’s initial parameters such that the model can perform well on a new task after the parameters have been updated via a few gradient steps using the data from the new task. In other words, given a model, MAML initializes one specific model’s parameters via meta-learning over multiple observed tasks, such that the meta-trained model can quickly adapt to a new task after learning from a small number of new data (i.e., few-shot learning). On the other hand, METAGL employs meta-learning to carry over the prior knowledge of multiple different models’ performance on different graphs for evaluation-free selection of graph learning algorithms.
Since MAML meta-trains a specific model for fast adaptation to a new dataset, it is not suited for selecting a model from a model set consisting of a wide variety of learning algorithms. Further, MAML fine-tunes a meta-trained model in a few-shot learning setup, whereas in our problem setup, no training or evaluation is performed on the new graph dataset. For these reasons, MAML is not applicable to the proposed evaluation-free GL model selection problem (Problem 1).
I ADDITIONAL RESULTS
I.1 EFFECTIVENESS OF META-GRAPH FEATURES
Figure 8 shows how accurately meta-learners can perform model selection when they use the proposed meta-graph features (Section 4.3) vs. six state-of-the-art graph-level embedding (GLE) techniques, i.e., GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), GraphLoG (Xu et al., 2021), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). As discussed in Section 5.3, all meta-learners nearly consistently achieve higher model selection accuracy by using METAGL's meta-graph features than when they use these GLE techniques, and METAGL outperforms all meta-learners, regardless of which features are used.
I.2 MODEL SELECTION TIME
Figure 9: Distribution of the time for METAGL to make a prediction / the time to create meta-graph features (in percentage). Red and green lines denote the median and mean, respectively.
Table 7 shows results comparing the runtime (in seconds) of naive model selection with the runtime of METAGL. Note that naive model selection requires training and evaluating each method in the model set, while in METAGL, the runtime involves only the time to generate meta-graph features (penultimate row) and to predict the best model via a forward pass.

Prompt:
1. What is the focus and contribution of the paper regarding link prediction tasks?
2. What are the strengths of the proposed approach, particularly in terms of its novel problem-solving perspective?
3. What are the weaknesses of the paper, especially regarding its limitations in scalability and the selection of GRL methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Format: Summary Of The Paper | Strengths And Weaknesses | Clarity, Quality, Novelty And Reproducibility

Review:

Summary Of The Paper
This paper considers the problem of how to select an effective GL model for the link prediction task on a new graph without any training/evaluation process. To deal with this problem, the authors propose a meta-learning based matrix factorization (MF) approach, MetaGL. MetaGL adopts meta-graph features to represent graphs, which makes the transductive MF inductive for a new graph. The proposed method is evaluated on 301 graphs via 5-fold cross validation with 412 candidate models.
Strengths And Weaknesses
Strengths:
This paper is the first to consider how to select an effective GL link prediction model for a new graph without a search process. It is a new and meaningful problem.
The experiments are solid and can support the contributions: effectiveness of the meta-learning framework and global statistical features.
The construction of G-M Network makes sense and further improves the performance.
Weaknesses:
It seems that extracting some meta-features (e.g., triangle counts and PageRank scores) is time-consuming, which limits the application to large-scale graphs.
The GRL methods in the model set are limited, and the model set may not contain some GRL methods proposed in recent years.
The paper seems to be missing the training details and metrics of the link prediction subproblem.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and original. |
ICLR

Title
MetaGL: Evaluation-Free Selection of Graph Learning Models via Meta-Learning
Abstract
Given a graph learning task, such as link prediction, on a new graph, how can we select the best method as well as its hyperparameters (collectively called a model) without having to train or evaluate any model on the new graph? Model selection for graph learning has been largely ad hoc. A typical approach has been to apply popular methods to new datasets, but this is often suboptimal. On the other hand, systematically comparing models on the new graph quickly becomes too costly, or even impractical. In this work, we develop the first meta-learning approach for evaluation-free graph learning model selection, called METAGL, which utilizes the prior performances of existing methods on various benchmark graph datasets to automatically select an effective model for the new graph, without any model training or evaluations. To quantify similarities across a wide variety of graphs, we introduce specialized meta-graph features that capture the structural characteristics of a graph. Then we design the G-M network, which represents the relations among graphs and models, and develop a graph-based meta-learner operating on this G-M network, which estimates the relevance of each model to different graphs. Extensive experiments show that using METAGL to select a model for the new graph greatly outperforms several existing meta-learning techniques tailored for graph learning model selection (up to 47% better), while being extremely fast at test time (∼1 sec).
1 INTRODUCTION
Given a graph learning (GL) task, such as link prediction, for a new graph dataset, how can we select the best method as well as its hyperparameters (HPs) (collectively called a model) without performing any model training or evaluations on the new graph? GL has received increasing attention recently (Zhang et al., 2022), achieving successes across various applications, e.g., recommendation and ranking (Fan et al., 2019; Park et al., 2020), traffic forecasting (Jiang & Luo, 2021), bioinformatics (Su et al., 2020), and question answering (Park et al., 2022). However, as GL methods continue to be developed, it becomes increasingly difficult to determine which model to use for the given graph.
Model selection (i.e., selecting a method and its configuration such as HPs) for graph learning has been largely ad hoc to date. A typical approach, called “no model selection”, is to simply apply popular methods to new graphs, often with the default HP values. However, it is well known that there is no universal learning algorithm that performs best on all problem instances (Wolpert & Macready, 1997), and such consistent model selection is often suboptimal. At the other extreme lies “naive model selection” (Fig. 1b), where all candidate models are trained on the new graph, evaluated on a hold-out validation graph, and then the best performing model for the new graph is selected. This approach is very costly as all candidate models are trained when a new graph arrives. Recent methods on neural architecture search (NAS) and hyperparameter optimization (HPO) of GL methods, which we review in Section 3, adopt smarter and more efficient strategies, such as Bayesian optimization (Snoek et al., 2012; Tu et al., 2019), which carefully choose a relatively small number of HP settings to evaluate. However, they still need to evaluate multiple configurations of each GL method on the new graph.
Evaluation-free model selection is yet another paradigm, which aims to tackle the limitations of the above approaches by attempting to simultaneously achieve the speed of no model selection and the accuracy of exhaustive model selection. Recently, a seminal work by Zhao et al. (2021) proposed a technique for outlier detection (OD) model selection, which carries over the observed performance of
OD methods on benchmark datasets for selecting OD methods. However, it does not address the unique challenges of GL model selection, and cannot be directly used to solve the problem. Inspired by this work, we systematically tackle the model selection problem for graph learning, especially link prediction. We choose link prediction as it is a key task for graph-structured data: It has many applications (e.g., recommendation, knowledge graph reasoning, and entity resolution), and several inference and learning tasks can be cast as a link prediction problem (e.g., Fadaee & Haeri (2019)). In this work, we develop METAGL, the first meta-learning framework for evaluation-free selection of graph learning models, which finds an effective GL model to employ for a new graph without training or evaluating any GL model on the new graph, as Figure 1a illustrates. METAGL satisfies all of the desirable features for GL model selection listed in Table 1, while no existing paradigm satisfies all of them.
The high-level idea of meta-learning based model selection is to estimate a candidate model’s performance on the new graph based on its observed performances on similar graphs. Our meta-learning problem for graph data presents a unique challenge of how to model graph similarities, and what characteristic features (i.e., meta-features) of a graph to consider. Note that this step is often not needed for traditional meta-learning problems on non-graph data, as features for non-graph objects (e.g., location, age of users) are often readily available. Also, the high complexity and irregularity of graphs (e.g., different number of nodes and edges, and widely varying connectivity patterns among different graphs) makes the task even more challenging. To handle these challenges, we design specialized meta-graph features that can characterize major structural properties of real-world graphs.
Then to estimate the performance of a candidate model on a given graph, METAGL learns to embed models and graphs in the shared latent space such that their embeddings reflect the graph-to-model affinity. Specifically, we design a multi-relational graph called the G-M network, which captures multiple types of relations among models and graphs, and develop a meta-learner operating on this G-M network, based on an attentive graph neural network that is optimized to leverage meta-graph features and prior model performance into producing model and graph embeddings that can be effectively used to estimate the best performing model for the new graph. METAGL greatly outperforms existing meta-learners in GL model selection (Fig. 1c). In sum, the key contributions of this work are as follows.
• Problem Formulation. We formulate the problem of selecting effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph). To the best of our knowledge, we are the first to study this important problem.
• Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection. For meta-learning on various graphs, we design meta-graph features that quantify graph similarities by capturing structural characteristics of a graph.
• Effectiveness. Using METAGL to select a model for the new graph greatly outperforms existing meta-learning techniques (up to 47% better, Fig. 1c), with negligible runtime overhead at test time (∼1 sec).
Benchmark Data/Code: To facilitate further research on this important new problem, we release code and data at https://github.com/NamyongPark/MetaGL, including performances of 400+ models on 300+ graphs, and 300+ meta-graph features.
Table 1: The proposed METAGL wins on features in comparison to existing graph learning (GL) model selection (MS) paradigms, all of which fail to satisfy some of the desirable properties for GL MS. The desiderata are: (i) evaluation-free GL model selection; (ii) being capable of MS from among multiple GL algorithms; (iii) capitalizing on graph similarities for GL MS; and (iv) estimating model performance based on past observations. The compared paradigms are no model selection, naive model selection (Fig. 1b), graph HPO/NAS (e.g., AutoNE, AGNN; see Sec. 3 and Fig. 1b), and METAGL (ours, Fig. 1a); only METAGL satisfies all four desiderata.
2 PROBLEM FORMULATION
Given a new unseen graph, our goal is to select the best model from a heterogeneous set of graph learning models, without requiring any model evaluation or user intervention. In comparison to traditional meta-learning problems where a model denotes a single method and its hyperparameters, a model in the graph meta-learning problem is more broadly defined to be
model M = {(graph embedding method, hyperparameters), (predictor, hyperparameters)}, (1)
as graph learning tasks usually involve two steps: (1) embedding the graph using a graph representation learning method, and (2) providing node embeddings to the predictor of a downstream task like link prediction. Both steps require learning a method with specific hyperparameters. Thus, there can be many models with the same embedding method and predictor, which have different hyperparameters. Also, the set M of models may contain many different graph representation learning methods (e.g., node2vec (Grover & Leskovec, 2016), GraphSAGE (Hamilton et al., 2017), DeepGL (Rossi et al., 2020) to name a few), as well as multiple task-specific predictors, making M heterogeneous. Given a training meta-corpus of n graphs G = {G_i}_{i=1}^{n}, and m models M = {M_j}_{j=1}^{m} for GL tasks, we derive a performance matrix P ∈ R^{n×m} where P_{ij} is the performance (e.g., accuracy) of model j on graph i. Our meta-learning problem for evaluation-free GL model selection is defined as follows.
Problem 1 (Evaluation-Free Graph Learning Model Selection).
Given:
• an unseen test graph G_test ∉ G, and
• a potentially sparse performance matrix P ∈ R^{n×m} of m heterogeneous graph learning models M = {M_1, . . . , M_m} on n graphs G = {G_1, . . . , G_n},
Select:
• the best model M* ∈ M to employ on G_test without evaluating any model in M on G_test.
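To make the interface of Problem 1 concrete, the following minimal sketch sets up a toy instance in Python; the dimensions, the random performance matrix, and the `select_model` helper are illustrative assumptions, with the trivial global-average scorer standing in for an actual meta-learner.

```python
import numpy as np

# Toy instance of Problem 1; all names and dimensions here are illustrative.
n, m = 5, 4                         # n meta-train graphs, m candidate models
rng = np.random.default_rng(0)
P = rng.random((n, m))              # P[i, j]: performance of model j on graph i

def select_model(estimated_scores: np.ndarray) -> int:
    """Return the index of the model with the highest estimated performance."""
    return int(np.argmax(estimated_scores))

# A meta-learner must produce `estimated_scores` for an unseen test graph using
# only P and graph characteristics -- never by evaluating models on that graph.
# The trivial 'Global Best' strategy below ignores the test graph entirely:
print(select_model(P.mean(axis=0)))
```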
3 RELATED WORK
A majority of works on GL focus on developing new algorithms for certain graph tasks and applications (Xia et al., 2021; Zhang et al., 2022). In comparison, there exist relatively few recent works that address the GL model selection problem (Zhang et al., 2021). They mainly focus on neural architecture search (NAS) and hyperparameter optimization (HPO) for GL models, especially graph neural networks (GNNs). Toward efficient and effective NAS and HPO in GL, they investigated several approaches, such as Bayesian optimization (AutoNE by Tu et al. (2019)), reinforcement learning (GraphNAS by Gao et al. (2020), AGNN by Zhou et al. (2019), Policy-GNN by Lai et al. (2020)), hypernets (ST-GCN by Zhu et al. (2021)), and evolutionary algorithms (Bu et al. (2021)), as well as techniques like subgraph sampling (AutoNE by Tu et al. (2019)), graph coarsening (JITuNE by Guo et al. (2021)), and hierarchical evaluation (HESGA by Yuan et al. (2021)). However, as summarized in Table 1, these methods cannot perform evaluation-free GL model selection (Problem 1), since they need to evaluate multiple configurations of each GL method on the new graph for model selection. Further, they are limited to finding the best configuration of a single algorithm, and thus cannot select a model from a heterogeneous model set M with various GL models, as Problem 1 requests. An earlier work on GNN design space (You et al., 2020) is somewhat relevant, as it proposes an approach to quantify graph similarities, which can be used to find an observed graph similar to the test graph, and select a model that performed best on it. However, their approach evaluates a set of anchor models on all graphs, and computes similarities between two graphs based on anchor models’ performance on them. As it needs to run anchor models on the new graph, it is inapplicable to Problem 1. For the first time, METAGL enables evaluation-free model selection from a heterogeneous set of GL models.
4 FRAMEWORK
In this section, we present METAGL, our meta-learning based framework that solves Problem 1 by leveraging prior performances of existing methods. METAGL consists of two phases: (1) offline meta-training phase (Section 4.1) that trains a meta-learner using observed graphs G and model performances P, and (2) online model prediction phase (Section 4.2), which selects the best model for the new graph. A summary of notations used in this work is provided in Table 3 in the Appendix.
4.1 OFFLINE META-TRAINING
Meta-learning leverages prior experience from related learning tasks to do a better job on the new task. When the new task is similar to some historical learning tasks, then the knowledge from those similar tasks can be transferred and applied to the new task. Thus effectively capturing the similarity between an input task and observed ones is fundamentally important for successful meta-learning. In meta-learning, the similarity between learning tasks is modeled using meta-features, i.e., characteristic features of the learning task that can be used to quantify the task similarity.
Meta-Graph Features. Given the graph learning model selection problem (where new graphs correspond to new learning tasks), METAGL captures graph similarity by extracting meta-graph features that reflect the structural characteristics of a graph. Notably, since graphs have irregular structure, with different numbers of nodes and edges, METAGL designs meta-graph features to be of the same size for any arbitrary graph, so that any two graphs can be easily compared via their meta-graph features. We use the symbol m ∈ R^d to denote the fixed-size meta-graph feature vector of graph G. We defer the details of how METAGL computes m to Section 4.3.
Model Performance Estimation. To estimate how well a model would perform on a given graph, METAGL represents models and graphs in a latent k-dimensional space, and captures the graph-to-model affinity using the dot-product similarity between the two representations h_{G_i} and h_{M_j} of the i-th graph G_i and the j-th model M_j, respectively, such that p_{ij} ≈ ⟨h_{G_i}, h_{M_j}⟩, where p_{ij} is the performance of model M_j on graph G_i. Then, to obtain the latent representation h, we design a learnable function f(·) that takes in relevant information on models and graphs from the meta-graph features m and the prior knowledge (i.e., model performances P and observed graphs G). Below in this section, we focus on the inputs to the function f(·), and defer the details of f(·) to Section 4.4. We first factorize the performance matrix P into latent graph factors U ∈ R^{n×k} and model factors V ∈ R^{m×k}, and take the model factor V_j ∈ R^k (the j-th row of V) as the input representation of model M_j. Then, METAGL obtains the latent embedding h_{M_j} of model M_j by h_{M_j} = f(V_j). For graphs, more information is available since we have both the meta-graph features m and the meta-train graph factors U. However, while we have the same number of models during training and inference, we observe new graphs during inference, and thus cannot obtain the graph factor U_test for the test graph as we do for the train graphs, since matrix factorization (MF) is transductive by construction (i.e., existing models' performance on the test graph would be needed to get latent factors for the test graph directly via MF). To handle this issue, we learn an estimator φ : R^d → R^k that maps the meta-graph features m into the latent factors of meta-train graphs obtained via MF above (i.e., for graph G_i with m, φ(m) = Û_i ≈ U_i) and use this estimated graph factor. We then combine both inputs ([m; φ(m)] ∈ R^{d+k}), and apply a linear transformation to make the input representation of graph G_i the same size as that of model M_j, obtaining the latent embedding of graph G_i as h_{G_i} = f(W[m; φ(m)]), where W ∈ R^{k×(d+k)} is a weight matrix. Thus, in METAGL, the performance p_{ij} of model M_j on graph G_i with meta-graph features m is estimated as
pij ≈ p̂ij = 〈f(W[m;φ(m)]), f(Vj)〉. (2)
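As an illustration of the inputs to Equation (2), the sketch below factorizes a toy performance matrix and fits a regressor φ from meta-graph features to graph factors; the use of a plain truncated SVD and a ridge-regression fit here is an assumption for illustration, not necessarily the factorization or estimator used in METAGL.

```python
import numpy as np

# Toy setup; plain truncated SVD and a ridge fit are illustrative choices here.
rng = np.random.default_rng(0)
n, m, d, k = 50, 423, 318, 32
P = rng.random((n, m))                  # performance matrix (fully observed here)
M_feat = rng.random((n, d))             # meta-graph features of the n train graphs

# P ~= U V^T via truncated SVD: latent graph factors U and model factors V.
U_svd, s, Vt = np.linalg.svd(P, full_matrices=False)
U = U_svd[:, :k] * s[:k]                # (n, k) graph factors
V = Vt[:k].T                            # (m, k) model factors

# phi: meta-graph features -> graph factors, fit by ridge-regularized least squares.
lam = 1e-2
Phi = np.linalg.solve(M_feat.T @ M_feat + lam * np.eye(d), M_feat.T @ U)  # (d, k)
U_hat = M_feat @ Phi                    # phi(m) approximates U rows; used in Eqn. (2)
```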
Meta-Learning Objective. For tasks where the goal is to estimate real values, such as accuracy, the mean squared error (MSE) is a typical choice for the loss function. While MSE is easy to optimize and
effective for regression, it does not directly take the ranking quality into account. By contrast, in our problem setup, accurately ranking models for each graph dataset is more important than estimating the performance itself, which makes MSE a suboptimal choice. In particular, Problem 1 focuses on finding the model with the best performance on the given graph. Therefore, we consider rank-based learning objectives, and among them, we adapt the top-1 probability to the proposed Problem 1 as follows. Let P̂_i ∈ R^m be the i-th row of P̂ (i.e., the estimated performance of all m models on graph G_i). Given P̂_i, the top-1 probability p^{P̂_i}_{top1}(j) of the j-th model M_j in the model set M is defined to be
p^{\hat{P}_i}_{\mathrm{top1}}(j) = \frac{\pi(\hat{p}_{ij})}{\sum_{k=1}^{m} \pi(\hat{p}_{ik})} = \frac{\exp(\hat{p}_{ij})}{\sum_{k=1}^{m} \exp(\hat{p}_{ik})} \qquad (3)
Theorem 1 (Cao et al. (2007)). Given the performance P̂_i of all models on graph G_i, p^{P̂_i}_{top1}(j) represents the probability of model M_j being ranked at the top of the list (i.e., of all models in M). The top-1 probabilities p^{P̂_i}_{top1}(j) for all j = 1, . . . , m form a probability distribution over the m models.
Based on Theorem 1, we obtain two probability distributions by applying the top-1 probability to the true performance P_i and the estimated performance P̂_i of the m models, and optimize METAGL such that the distance between the two resulting distributions decreases. Using the cross entropy as the distance metric, we obtain the following loss over all n meta-train graphs G:
L(\mathbf{P}, \hat{\mathbf{P}}) = -\sum_{i=1}^{n} \sum_{j=1}^{m} p^{P_i}_{\mathrm{top1}}(j) \log\big(p^{\hat{P}_i}_{\mathrm{top1}}(j)\big) \qquad (4)
When P is sparse, meta-training can be performed via slightly modified Eqs. (3) and (4) in App. G.2.
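Since π(·) is the exponential function, the top-1 probabilities in Equation (3) reduce to a row-wise softmax, and the loss in Equation (4) becomes a softmax cross entropy between rows of P and P̂. The following sketch expresses this in PyTorch; it is a minimal illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def top1_listwise_loss(P_true: torch.Tensor, P_hat: torch.Tensor) -> torch.Tensor:
    """Cross entropy between top-1 probability distributions (Eqns. (3)-(4)).

    With pi(.) = exp(.), each row's top-1 probabilities are exactly a softmax,
    so the loss is a softmax cross entropy between true and estimated rows,
    summed over the n meta-train graphs.
    """
    p_true = F.softmax(P_true, dim=1)        # top-1 probs from true performances
    log_p_hat = F.log_softmax(P_hat, dim=1)  # log top-1 probs from estimates
    return -(p_true * log_p_hat).sum()

# Toy usage: 5 graphs x 4 models.
P_hat = torch.randn(5, 4, requires_grad=True)
loss = top1_listwise_loss(torch.rand(5, 4), P_hat)
loss.backward()
```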
4.2 ONLINE MODEL PREDICTION
In the meta-training phase, METAGL learns estimators f(·) and φ(·), as well as the weight matrix W and latent model factors V. Given a new graph G_test, METAGL first computes the meta-graph features m_test ∈ R^d as we discuss in Section 4.3. Then m_test is regressed to obtain the (approximate) latent graph factors Û_test = φ(m_test) ∈ R^k. Recall that the model factors V learned in the meta-training stage can be directly used for model prediction. Then model M_j's performance on test graph G_test can be estimated by applying Equation (2) with m_test and φ(m_test). Finally, the model that has the highest estimated performance is selected by METAGL as the best model M*, i.e.,
M^{*} \leftarrow \arg\max_{M_j \in \mathcal{M}} \big\langle f(\mathbf{W}[\mathbf{m}_{\mathrm{test}}; \phi(\mathbf{m}_{\mathrm{test}})]),\ f(\mathbf{V}_j) \big\rangle \qquad (5)
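A minimal self-contained sketch of this online prediction step is given below; the toy linear modules standing in for φ(·), W, and the HGT-based f(·), as well as all dimensions, are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the learned components; in METAGL, f(.) is an HGT over the
# G-M network (Sec. 4.4). All dimensions and module choices here are assumptions.
d, k, num_models = 318, 32, 423
phi = nn.Linear(d, k)                      # phi: meta-features -> latent graph factor
W = nn.Linear(d + k, k, bias=False)        # weight matrix W
f = nn.Linear(k, k)                        # placeholder embedding function
V = torch.randn(num_models, k)             # latent model factors from meta-training

@torch.no_grad()
def predict_best_model(m_test: torch.Tensor) -> int:
    """Eqn. (5): argmax_j <f(W[m_test; phi(m_test)]), f(V_j)>."""
    h_graph = f(W(torch.cat([m_test, phi(m_test)])))
    return int((f(V) @ h_graph).argmax())

best = predict_best_model(torch.randn(d))  # no model training or evaluation involved
```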
Note that model selection using Equation (5) depends only on the meta-graph features m_test of the test graph and other pretrained estimators and latent factors that METAGL learned in the meta-training phase. As no model training or evaluation is involved, model prediction by METAGL is much faster than training and evaluating different models multiple times, as our experiments show in Section 5.4. Further, the model prediction process is fully automatic since it does not require users to choose or fine-tune any values at test time. Figure 2 shows an overview of the model prediction process, and Algorithm 1 in the Appendix lists steps for offline meta-training and online model prediction.

4.3 STRUCTURAL META-GRAPH FEATURES
Figure 3: Meta-graph features in METAGL are derived in two steps (structural meta-feature extractors ψ1, . . . , ψq, followed by global statistical functions Σ). See Section 4.3 for details.
Meta-graph features are a crucial component of our meta-learning approach METAGL since they capture important structural characteristics of an arbitrary graph. Meta-graph features enable METAGL to quantify graph similarities, and to utilize prior experience with observed graphs for GL model selection. It is important that a sufficient and representative set of meta-graph features are used to capture the important structural properties of graphs from a wide variety of domains, including biological, technological, information, and social networks to name a few.
In this work, we are not able to leverage the commonly used simple statistical meta-features used by previous work on model selection-based meta-learning, as they cannot be used directly
over irregular and complex graph data. To address this problem, we introduce the notion of meta-graph features and develop a general framework for computing them on any arbitrary graph.
Meta-graph features in METAGL are derived in two steps, which is shown in Figure 3. First, we apply a set of structural meta-feature extractors Ψ = {ψ1, . . . , ψq} to the input graph G, obtaining Ψ(G) = {ψ1(G), . . . , ψq(G)}. Applying ψ ∈ Ψ to G yields a vector or a distribution of values for the nodes (or edges) in the graph, such as degree distribution and PageRank scores. That is, in Figure 3, ψ1 can be a degree distribution, ψ2 can be PageRank scores of all nodes, and so on. Specifically, we use both local and global structural feature extractors. To capture the local structural properties around a node or an edge, we compute node degree, number of wedges (i.e., a path of length 2), triangles centered at each node, and also frequency of triangles for every edge. To capture global structural properties of a node, we derive eccentricity, PageRank score, and k-core number of each node. Appendix D summarizes meta-feature extractors used in this work.
Let ψ denote the local structural extractors for nodes. Given a graph Gi = (Vi, Ei) and ψ, we obtain an |Vi|-dimensional node vector ψ(Gi). Since any two graphs Gi and Gj are likely to have a different number of nodes and edges, the resulting structural feature matrices ψ(Gi) and ψ(Gj) for these graphs are also likely to be of different sizes as the rows of these matrices correspond to nodes or edges of the corresponding graph. Thus, in general, these structural feature-based representations of the graphs cannot be used directly to derive similarity between graphs.
Now, to address this issue, we apply the set Σ of global statistical meta-graph feature extractors to every ψi(G), ∀i = 1, . . . , q, which summarizes each ψi(G) into a fixed-size vector. Specifically, Σ(ψi(G)) applies each of the statistical functions in Σ (e.g., mean, kurtosis) to the distribution ψi(G), each computing a real number that summarizes the given feature distribution ψi(G) from a different statistical point of view, producing a vector Σ(ψi(G)) ∈ R^{|Σ|}. Then we obtain the meta-graph feature vector m of graph G by concatenating the resulting meta-graph feature vectors:
\mathbf{m} = [\Sigma(\psi_1(G)) \ \cdots \ \Sigma(\psi_q(G))] \in \mathbb{R}^{d}. \qquad (6)

Table 5 in Appendix D lists the global statistical functions Σ used in this work to derive meta-graph features. Further, in addition to the node- and edge-level structural features, we also compute global graph statistics (scalars directly derived from the graph, e.g., density and degree assortativity coefficient), and append them to m, i.e., to the node- or edge-level structural features obtained above.
Most importantly, given any arbitrary graph G′, the proposed approach is guaranteed to output a fixed d-dimensional meta-graph feature vector characterizing G′. Hence, the structural similarity of any two graphs G and G′ can be quantified using a similarity function over m and m′, respectively.
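The sketch below illustrates this two-step construction with NetworkX, using a small subset of the extractors and summary statistics of Appendix D; the specific choices of extractors and statistics here are assumptions for illustration.

```python
import networkx as nx
import numpy as np
from scipy import stats

def summarize(x: np.ndarray) -> np.ndarray:
    """Apply a subset of the global statistical functions (Table 5) to a distribution."""
    return np.array([x.mean(), x.std(), x.min(), x.max(),
                     np.median(x), stats.skew(x), stats.kurtosis(x)])

def meta_graph_features(G: nx.Graph) -> np.ndarray:
    """Two-step construction (Fig. 3): structural extractors, then summary statistics."""
    distributions = [
        np.array([d for _, d in G.degree()], dtype=float),        # degree distribution
        np.array(list(nx.pagerank(G).values())),                  # PageRank scores
        np.array(list(nx.core_number(G).values()), dtype=float),  # k-core numbers
    ]
    parts = [summarize(psi) for psi in distributions]
    # Global graph-level statistics appended to m (Sec. 4.3).
    parts.append(np.array([nx.density(G), nx.degree_assortativity_coefficient(G)]))
    return np.concatenate(parts)   # fixed-size m regardless of |V|, |E|

m_vec = meta_graph_features(nx.karate_club_graph())   # shape: (3 * 7 + 2,)
```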
4.4 EMBEDDING MODELS AND GRAPHS
Given an informative context (i.e., input features) of models and graphs that METAGL learns from model performances P and meta-graph features M (Sections 4.1 and 4.3), how can we use it to effectively learn model and graph embeddings that capture graph-to-model affinity? We note that similar entities can make each other’s context more accurate and informative. For instance, in our problem setup, similar models tend to have similar performance distributions over graphs, and likewise similar graphs are likely to exhibit similar affinity to different models. With this consideration, we model the task as a graph representation learning problem, where we construct a graph called G-M network that connects similar graphs and models, and learn the graph and model embeddings over it.
G-M Network. We define the G-M network to be a multi-relational graph with two types of nodes (i.e., models and graphs) where edges connect similar model nodes and graph nodes. To measure similarity among graphs and models, we utilize the latent graph and model factors (U and V, respectively) obtained by factorizing P, as well as the meta-graph features M. More precisely, we use the estimated graph factor Û instead of U to let the same graph construction process work for new graphs. Note that this gives us two types of features for graph nodes (i.e., Û and M), and one type of features for model nodes (i.e., V). To let different features influence the embedding step differently as needed, we connect graph nodes and model nodes using five types of edges: M-g2g, P-g2g, P-m2m, P-g2m, P-m2g, where g and m denote the type of nodes that an edge connects (graph and model, respectively), and M and P denote that the edge is based on meta-graph features and model performance, respectively. For example, M-g2g and P-g2g edges connect two graph nodes that are similar in terms of M and Û, respectively. Then for each edge type, we construct a k-NN graph by connecting nodes to their top-k similar nodes, where node-to-node similarity is defined as the cosine similarity between the
corresponding node features. For instance, for the P-g2m edge type, graph nodes and model nodes are linked based on the similarity between Û and V. Fig. 7 in the Appendix illustrates the G-M network.
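The following sketch shows how one edge type of the G-M network can be built via top-k cosine-similarity search; the `knn_edges` helper is a hypothetical utility, and building all five edge types would amount to calling it with the appropriate feature pairs (e.g., (Û, Û) for P-g2g, (Û, V) for P-g2m).

```python
import numpy as np

def knn_edges(X: np.ndarray, Y: np.ndarray, k: int = 30, same_set: bool = False):
    """Connect each row of X to its top-k most cosine-similar rows of Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = Xn @ Yn.T
    if same_set:                              # e.g., g2g edges: exclude self-matches
        np.fill_diagonal(sim, -np.inf)
    nbrs = np.argsort(-sim, axis=1)[:, :k]    # top-k targets per source node
    return [(i, int(j)) for i in range(X.shape[0]) for j in nbrs[i]]

# Toy usage: P-g2m edges between 50 graph factors and 423 model factors.
edges_p_g2m = knn_edges(np.random.rand(50, 32), np.random.rand(423, 32), k=30)
```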
Learning Over G-M Network. Given the G-M network Gtrain with meta-train graphs and models, graph neural networks (GNNs) provide an effective framework to embed models and graphs via (weighted) neighborhood aggregation over Gtrain. However, since the structure of the G-M network is induced by simple k-NN search, some of the neighbors may not provide the same amount of information as others, or may even provide noisy information. We found it helpful to perform attentive neighborhood aggregation, so that more informative neighbors can be given more weight. To this end, we choose to use attentive GNNs designed for multi-relational networks, and specifically use HGT (Hu et al., 2020). Then the embedding function f(·) in Section 4.1 is defined to be f(x) = HGT(x, Gtrain) during training, which transforms the input node feature x into an embedding via attentive neighborhood aggregation over Gtrain. Further details of HGT are provided in Appendix G.4.

Inference Over G-M Network. For inference at test time, we extend Gtrain to a larger G-M network Gtest that additionally contains the test graph node, as well as edges between the test graph node and the existing graph and model nodes in Gtrain. The extension is done in the same way as in the training phase, by finding top-k similar nodes. Then the embedding at test time can be done by f(x) = HGT(x, Gtest).
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Models and Evaluation. A model in our problem (Eqn. 1) consists of two components. The first component performs graph representation learning (GRL), and the other component leverages the learned embeddings for a downstream task of interest. In this work, we focus on link prediction, which is a key task for graph-structured data as we discuss in Section 1. We evaluate the performance of selecting a link prediction model for new graphs without any model evaluation. For the first component, we use 12 popular GRL methods, and for the second component for link scoring, we use a simple estimator that computes the cosine similarity between two node embeddings. This results in a model set M with 423 models. The full list of models is given in Table 4 in Appendix C. For evaluation, we create a testbed containing benchmark graphs, meta-graph features, and a performance matrix. We construct the performance matrix by evaluating each link prediction model in M on the graphs in the testbed, in terms of mean average precision score. Then we evaluate METAGL and baselines via 5-fold cross validation where the benchmark graphs are split into meta-train Gtrain and meta-test Gtest graphs for each fold, and meta-learners trained over the meta-train graph data are evaluated using the meta-test graph datasets. Thus, model performances over the meta-test graphs Gtest and the meta-graph features of Gtest were unseen during training, but used only for testing.
Table 2: The proposed METAGL outperforms existing meta-learners, given a fully observed performance matrix. Best results are in bold, and second best results are underlined. The "METAGL_Baseline" notation (e.g., METAGL_S2) indicates that the baseline meta-learner uses METAGL's meta-graph features.

Method | MRR | AUC | NDCG@1
Random Selection | 0.011 | 0.490 | 0.745
(Simple) Global Best-AvgPerf | 0.163 | 0.877 | 0.932
(Simple) Global Best-AvgRank | 0.103 | 0.867 | 0.930
(Simple) METAGL_AS | 0.222 | 0.905 | 0.947
(Simple) METAGL_ISAC | 0.202 | 0.887 | 0.939
(Optimization-based) METAGL_S2 | 0.170 | 0.910 | 0.945
(Optimization-based) METAGL_ALORS | 0.190 | 0.897 | 0.950
(Optimization-based) METAGL_NCF | 0.140 | 0.869 | 0.934
(Optimization-based) METAGL_MetaOD | 0.075 | 0.599 | 0.889
METAGL | 0.259 | 0.941 | 0.962
Since model selection aims to accurately predict the best model for a new graph, we evaluate the top-1 prediction performance in terms of MRR (Mean Reciprocal Rank), AUC, and NDCG (Normalized Discounted Cumulative Gain). To apply MRR and AUC, we label models such that the top1 model (i.e., the model with the best performance for the given graph) is labeled as 1, while all others are labeled as 0. For NDCG, we report NDCG@1, which evaluates the relevance of top-1 predicted model. All metrics range from 0 to 1, with larger values indicating better performance.
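For concreteness, the sketch below shows one common way to compute the top-1 labeling together with MRR and NDCG@1 for a single graph; the exact metric implementations used in the paper may differ.

```python
import numpy as np

def mrr_top1(labels: np.ndarray, scores: np.ndarray) -> float:
    """Reciprocal rank of the true best model in the predicted ranking."""
    order = np.argsort(-scores)                       # models, best-predicted first
    return 1.0 / (int(np.where(labels[order] == 1)[0][0]) + 1)

def ndcg_at_1(perf_true: np.ndarray, scores: np.ndarray) -> float:
    """Relevance of the top-1 predicted model relative to the ideal top-1 choice."""
    return float(perf_true[int(np.argmax(scores))] / perf_true.max())

perf = np.array([0.61, 0.74, 0.58, 0.70])   # true performances of 4 models
labels = (perf == perf.max()).astype(int)   # top-1 labeling: best model gets 1
scores = np.array([0.2, 0.9, 0.1, 0.8])     # a meta-learner's predicted scores
print(mrr_top1(labels, scores), ndcg_at_1(perf, scores))
```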
Baselines. Being the first work for evaluation-free model selection in GL, we do not have immediate baselines for comparison. Instead, we adapt baselines used for OD model selection (Zhao et al., 2021) and collaborative filtering for our problem setting. In Appendix A, we describe baselines in detail. Baselines are grouped into two categories: (a) Simple meta-learners select a model that performs generally well, either globally or locally: Global Best (GB)-AvgPerf, GB-AvgRank, ISAC (Kadioglu et al., 2010), and ARGOSMART (AS) (Nikolić et al., 2013); (b) Optimization-based meta-learners learn to estimate the model performance by modeling the relation between meta features and model performances: Supervised Surrogates (S2) (Xu et al., 2012), ALORS (Misir & Sebag, 2017), NCF (He et al., 2017), and MetaOD (Zhao et al., 2021). We also include Random Selection (RS) as a baseline to see how these methods compare to random scoring.
Note that, except the simplest meta-learners RS and GB, no baselines can handle graph data, and thus they cannot estimate model performance on the new graph on their own. In that sense, they are not direct competitors of METAGL. We enable them to be used for GL model selection by providing the proposed meta-graph features. MetaOD, which was originally designed for OD model selection, is also given the same meta-graph features to perform GL model selection. We denote baselines either by combining METAGL with their names (e.g., METAGL_S2) to clearly show that they use METAGL’s meta-graph features, or using their name alone (e.g., S2) for simplicity.
5.2 MODEL SELECTION ACCURACY
Fully Observed Performance Matrix. In this setup, meta-learners are trained using a full performance matrix P with no missing entries. The model selection accuracy of all meta-learners in this setup is reported in Table 2, where METAGL achieves the best performance in all metrics, with 17% higher MRR than the best baseline (AS).
• Among simple meta-learners, Global Best meta-learners, which simply average model performance or rank over all observed graphs, are outperformed by more sophisticated meta-learners AS and ISAC, which leverage dataset similarities for model selection using meta-graph features.
• For optimization-based meta-learners, it is important to be aware of how models and graphs relate to each other, and to have high flexibility to capture that complex relationship. In methods like ALORS and MetaOD, relations between models and datasets (i.e., relative positions of models and datasets in the embedding subspace) are learned rather indirectly via reconstructing the performance matrix.
• METAGL, in contrast, directly captures graph-to-model affinity by modeling their relations via employing flexible GNNs over the G-M network, as well as reconstructing the performance matrix. As a result, METAGL consistently outperforms other optimization-based meta-learners.
Partially Observed Performance Matrix. In this setup, meta-learners are trained using a sparse performance matrix P, obtained by randomly masking out a full P. Figure 4 reports results obtained with varying sparsity, ranging up to 0.9. In this more challenging setup, METAGL consistently performs the best across all levels of sparsity, achieving up to 47% higher MRR than the best baseline.
• With increased sparsity, nearly all meta-learners perform increasingly worse, as one might expect.
• While AS was the best baseline given a full P, its accuracy decreased rapidly as sparsity increased. Since AS selects a model based on the 1NN meta-train graph, it is highly sensitive to P's sparsity.
• Baselines such as the Global Best baselines are more stable as they average across multiple graphs.
• Optimization-based methods like METAGL and S2 perform favorably to simple meta-learners in this more challenging sparse setting.
5.3 EFFECTIVENESS OF META-GRAPH FEATURES
In Figure 5, we evaluate how the performance of meta-learners obtained with the proposed meta-graph feature (Section 4.3) compares to that obtained with existing graph-level embedding (GLE) techniques, GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), and GraphLoG (Xu et al., 2021).
Figure 8 in App. I.1 provides results for three other GLE methods, WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019).
• Most points are below the diagonal in Figure 5, i.e., all meta-learners nearly consistently perform better when they use METAGL's meta-graph features than when they use existing GLE methods. This shows the effectiveness of METAGL's features for the proposed task of GL model selection.
• METAGL performs the best, whether METAGL's features or existing GLE methods are used.
5.4 MODEL SELECTION EFFICIENCY
To evaluate how efficient METAGL’s model selection is, we measure its runtime (i.e., the time to create metagraph features for the new graph at test time, plus the time to predict the best model), and compare it with the time to train a GL model. Figure 6 shows the distribution in box plots, where red and green lines denote the median and mean, respectively.
Results show that METAGL is fast, and incurs negligible runtime overhead: its runtime is just around 1 seconds or less in most cases (Figure 6a). Notably, compared to training each GL model for only 5% of its available model configurations, METAGL takes considerably less
time, i.e., a median of 5% and a mean of 11% of the time required for model training (Figure 6b). Given large-scale test graphs in practice, the speed-up enabled by METAGL will be greater than that reported in Figure 6b, due to the increased training time on such graphs. Also, METAGL’s model selection process can be further streamlined, e.g., by parallelizing meta-feature generation process. We provide additional results on the runtime of METAGL and naive model selection in Appendix I.2.
5.5 ADDITIONAL RESULTS
We present an ablation study in Appendix I.3, which shows the effectiveness of METAGL's proposed components, e.g., the meta-learning objective, the G-M network, and the graph encoder used by METAGL. We evaluate the sensitivity of model selection approaches to the variance of the performance matrix P in Appendix I.4, and compare the predicted model performance with the actual best performance in Appendix I.5.
6 CONCLUSION
As more and more GL models are developed, selecting which one to use is becoming increasingly hard. Toward near-instantaneous, automatic GL model selection, we make the following contributions.
• Problem Formulation. We present the first problem formulation to select effective GL models in an evaluation-free manner (i.e., without ever having to train/evaluate any model on the new graph).
• Meta-Learning Framework and Features. We propose METAGL, the first meta-learning framework for evaluation-free GL model selection, and meta-graph features to quantify graph similarities.
• Effectiveness. Using METAGL for model selection greatly outperforms existing meta-learning techniques (up to 47% better), while incurring negligible runtime overhead at test time (∼1 sec).
A BASELINES
Being the first work for evaluation-free model selection in graph learning (GL), we do not have immediate baselines for comparison. Instead, we adapt baselines used in MetaOD (Zhao et al., 2021) for outlier detection (OD) model selection as well as collaborative filtering for our problem setting. The baselines used in experiments can be organized into the following two categories.
(a) Simple meta-learners select a model that performs generally well, either globally or locally.
• Global Best (GB)-AvgPerf selects the model that has the largest average performance over all meta-train graphs.
• Global Best (GB)-AvgRank computes the rank of all models (in percentile) for each graph, and selects the model with the largest average ranking over all meta-train graphs.
• ISAC (Kadioglu et al., 2010) first clusters meta-train datasets using meta-graph features, and at test time, finds the cluster closest to the test graph, and selects the model with the largest average performance over all graphs in that cluster.
• ARGOSMART (AS) (Nikolić et al., 2013) finds the meta-train graph closest to the test graph (i.e., 1NN) in terms of meta-graph feature similarity, and selects the model with the best result on the 1NN graph.
(b) Optimization-based meta-learners learn to estimate the model performance by modeling the relation between meta-graph features and model performances.
• Supervised Surrogates (S2) (Xu et al., 2012) learns a surrogate model (a regressor) that maps meta-graph features to model performances.
• ALORS (Misir & Sebag, 2017) factorizes the performance matrix into latent factors on graphs and models, and estimates the performance to be the dot product between the two factors, where a non-linear regressor maps meta-graph features into the latent graph factors.
• NCF (He et al., 2017) replaces the dot product used in ALORS with a more general neural architecture that estimates performance by combining the linearity of matrix factorization and non-linearity of deep neural networks.
• MetaOD (Zhao et al., 2021) pioneered the field of unsupervised OD model selection by designing meta-features specialized to capture the outlying characteristics of datasets, as well as improving upon ALORS with the adoption of an NDCG-based meta-training objective. We enable MetaOD to be applicable to our problem setting, by applying our proposed meta-graph features to MetaOD.
In addition, we also include Random Selection (RS) as a baseline, to see how meta-learners perform in comparison to random scoring. Note that among the above approaches, only GB-AvgPerf and GB-AvgRank do not rely on meta-features for model selection. All other meta-learners make use of the proposed meta-graph features to estimate model performances on an unseen test graph.
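For reference, minimal sketches of two of the simple meta-learners are shown below; variable names are assumptions, with P the performance matrix and M_feat the meta-graph features of the meta-train graphs.

```python
import numpy as np

def global_best_avgperf(P: np.ndarray) -> int:
    """GB-AvgPerf: the model with the best average performance over all graphs."""
    return int(P.mean(axis=0).argmax())

def argosmart_1nn(P: np.ndarray, M_feat: np.ndarray, m_test: np.ndarray) -> int:
    """AS: the best model on the meta-train graph most similar to the test graph."""
    sims = (M_feat @ m_test) / (
        np.linalg.norm(M_feat, axis=1) * np.linalg.norm(m_test) + 1e-12)
    return int(P[int(sims.argmax())].argmax())

# Toy usage with random data.
P, M_feat = np.random.rand(50, 423), np.random.rand(50, 318)
print(global_best_avgperf(P), argosmart_1nn(P, M_feat, np.random.rand(318)))
```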
B NOTATIONS
Table 3 provides a list of notations frequently used in this work.
C MODEL SET
A model in the model set M refers to a graph representation learning (GRL) method along with its hyperparameter settings, and a predictor that makes a downstream task-specific prediction given the node embeddings from the GRL method. In this work, we use a link predictor which scores a given link by computing the cosine similarity between the two nodes' embeddings. Table 4 shows the complete list of the 12 popular GRL methods and their specific hyperparameter settings, which together compose 423 unique models in the model set M. Note that the link predictor is omitted from Table 4 since we apply the same cosine similarity-based link predictor to all GRL methods.
Methods | Hyperparameter Settings | Count
SGC (Wu et al., 2019a) | # (number of) hops k ∈ {1, 2, 3} | 3
GCN (Kipf & Welling, 2017) | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10} | 6
GraphSAGE (Hamilton et al., 2017) | # layers L ∈ {1, 2, 3}, # epochs N ∈ {1, 10}, aggregation functions f ∈ {mean, gcn, lstm} | 18
node2vec (Grover & Leskovec, 2016) | p, q ∈ {1, 2, 4} | 9
role2vec (Ahmed et al., 2018) | p, q ∈ {0.25, 1, 4}, α ∈ {0.01, 0.1, 0.5, 0.9, 0.99}, motif combinations H ∈ {{H1}, {H2, H3}, {H2, H3, H4, H6, H8}, {H1, H2, . . . , H8}} | 180
GraRep (Cao et al., 2015) | k ∈ {1, 2} | 2
DeepWalk (Perozzi et al., 2014) | p = 1, q = 1 | 1
HONE (Rossi et al., 2018) | k ∈ {1, 2}, Dlocal ∈ {4, 8, 16}, variant v ∈ {1, 2, 3, 4, 5} | 30
node2bits (Jin et al., 2019) | walk num wn ∈ {5, 10, 20}, walk len wl ∈ {5, 10, 20}, log base b ∈ {2, 4, 8, 10}, feats f ∈ {16} | 36
DeepGL (Rossi et al., 2020) | α ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, motif size ∈ {4}, eps tolerance t ∈ {0.01, 0.05, 0.1}, relational aggr. ∈ {{m}, {p}, {s}, {v}, {m, p}, {m, v}, {s, m}, {s, p}, {s, v}} where m, p, s, v denote mean, product, sum, var | 135
LINE (Tang et al., 2015) | # hops/order k ∈ {1, 2} | 2
Spectral Emb. (Luo et al., 2003) | tolerance t ∈ {0.001} | 1
Total Count | | 423
D META-GRAPH FEATURES
Structural Meta-Feature Extractors. To capture the local structural properties around a node or an edge, we compute the distribution of node degrees, number of wedges (i.e., a path of length 2), triangles centered at each node, as well as the frequency of triangles for each edge. To capture the global structural properties of a node, we derive the eccentricity, PageRank score, and k-core number of each node. We also capture the global graph-level statistics (i.e., different from local
node/edge-level structural properties), such as the density of A and AAᵀ, where A is the adjacency matrix, and also the degree assortativity coefficient r.
Global Statistical Functions. For each of the structural property distributions (degree, k-core numbers, and so on) derived by the above structural meta-feature extractors, we apply the set Σ of global statistical functions (Table 5) over it to obtain a fixed-length vector representation for the node/edge/graph-level structural feature distribution.
After obtaining a set of meta-graph features, we concatenate all of them together to create the final meta-graph feature vector m for the graph.
E GRAPH DATASETS
The testbed used in this work comprises 301 graphs that have widely varying structural characteristics. Table 6 provides a list of all graph datasets in the testbed. All graph data are from (Rossi & Ahmed, 2015); they are publicly available under the Creative Commons Attribution-ShareAlike License.
Table 6 – Continued from the previous page
Graph # Nodes # Edges Graph # Nodes # Edges
66 CL-1000-1d8-trial3 925 3714 217 nci1_g3711 89 106 67 CL-1000-1d9-trial1 928 3510 218 nci1_g3901 93 102 68 CL-1000-1d9-trial2 912 3053 219 nci1_g3990 90 105 69 CL-1000-1d9-trial3 932 3278 220 nci1_g4094 90 98 70 CL-1000-2d0-trial1 909 2795 221 power-1138-bus 1138 2596 71 CL-1000-2d0-trial2 899 2941 222 power-494-bus 494 1080 72 CL-1000-2d0-trial3 916 3010 223 power-662-bus 662 1568 73 CL-1000-2d1-trial1 903 2430 224 power-685-bus 685 1967 74 CL-1000-2d1-trial2 911 2734 225 power-bcspwr09 1723 4117 75 CL-1000-2d1-trial3 915 2782 226 power-eris1176 1176 9864 76 DD_g1 327 899 227 rec-amazon 91813 125704 77 DD_g10 146 328 228 rec-movielens-tag-movies-10m 16528 71081 78 DD_g100 349 1005 229 road-chesapeake 39 170 79 DD_g1000 183 408 230 road-ChicagoRegional 1467 1298 80 DD_g1001 88 203 231 road-euroroad 1174 1417 81 DD_g1002 104 255 232 road-luxembourg-osm 114599 119666 82 DD_g1003 53 116 233 road-minnesota 2642 3303 83 DD_g1004 94 230 234 road-usroads-48 126146 161950 84 DD_g1005 370 903 235 rt-retweet 96 117 85 DD_g1006 246 568 236 rt-twitter-copen 761 1029 86 DD_g1007 309 732 237 rt_alwefaq 4171 7063 87 DD_g1008 109 304 238 rt_assad 2139 2788 88 DD_g1009 129 272 239 rt_bahrain 4676 7979 89 DD_g101 306 728 240 rt_barackobama 9631 9775 90 DD_g1010 157 363 241 rt_damascus 3052 3869 91 DD_g1011 47 136 242 rt_dash 6288 7436 92 DD_g1012 146 365 243 rt_gmanews 8373 8721 93 DD_g1013 93 211 244 rt_gop 4687 5529 94 DD_g1014 119 273 245 rt_http 8917 10314 95 DD_g1015 102 244 246 rt_islam 4497 4616 96 DD_g1016 113 291 247 rt_israel 3698 4165 97 DD_g1017 162 376 248 rt_lebanon 3961 4436 98 DD_g1018 296 680 249 rt_libya 5067 5541 99 DD_g1019 131 353 250 rt_lolgop 9765 10075 100 DD_g102 561 1422 251 rt_obama 3212 3423 101 DD_g1020 228 541 252 rt_occupy 3225 3944 102 DD_g1021 329 787 253 rt_occupywallstnyc 3609 3833 103 DD_g1022 294 730 254 rt_oman 4904 6230 104 DD_g1023 172 425 255 rt_onedirection 7987 8103 105 DD_g1024 59 160 256 rt_p2 4902 6018 106 DD_g1025 88 205 257 rt_saudi 7252 8061 107 DD_g1026 247 578 258 rt_tcot 4547 5503 108 DD_g1027 108 223 259 rt_tlot 3665 4475 109 DD_g1028 72 137 260 rt_uae 5248 6387 110 DD_g1029 99 215 261 rt_voteonedirection 2280 2464 111 DD_g103 265 647 262 sc-nasasrb 54870 1311227 112 DD_g1030 136 351 263 soc-advogato 6551 43427 113 DD_g104 372 999 264 soc-dolphins 62 159 114 DD_g105 423 1192 265 soc-firm-hi-tech 33 91 115 DD_g106 574 1355 266 soc-gplus 23628 39194 116 DD_g107 130 292 267 soc-hamsterster 2426 16630 117 DD_g108 483 1137 268 soc-highschool-moreno 70 274 118 DD_g109 132 315 269 soc-physicians 241 923 119 DD_g11 312 761 270 soc-sign-bitcoinalpha 3783 14124 120 DD_g110 394 1137 271 soc-student-coop 185 311 121 DD_g111 483 1520 272 soc-wiki-Vote 889 2914 122 DD_g112 266 631 273 socfb-Amherst 2235 90954 123 DD_g113 347 853 274 socfb-Bowdoin47 2252 84387 124 DD_g114 334 761 275 socfb-Caltech 769 16656 125 DD_g115 336 946 276 socfb-Hamilton46 2314 96394 126 eco-everglades 69 885 277 socfb-Haverford76 1446 59589 127 eco-florida 128 2075 278 socfb-nips-ego 2888 2981 128 eco-foodweb-baydry 128 2106 279 socfb-Oberlin44 2920 89912 129 eco-foodweb-baywet 128 2075 280 socfb-Reed98 962 18812 130 eco-mangwet 97 1446 281 socfb-Simmons81 1518 32988 131 eco-stmarks 54 353 282 socfb-Smith60 2970 97133 132 email-dnc-corecipient 906 10429 283 socfb-Swarthmore42 1659 61050 133 email-dnc-leak 1891 4465 284 socfb-Trinity100 2613 111996 134 email-enron-only 143 623 285 socfb-USFCA72 2682 65252 135 email-EU 32430 54397 286 socfb-Vassar85 3068 119161 136 email-radoslaw 167 3251 287 
socfb-Villanova62 7772 314989 137 email-univ 1133 5451 288 socfb-Wellesley22 2970 94899 138 enzymes_g103 59 115 289 socfb-Williams40 2790 112986 139 enzymes_g118 95 121 290 tech-routers-rf 2113 6632
Continued on the next page
Table 6 – Continued from the previous page
Graph # Nodes # Edges Graph # Nodes # Edges
140 enzymes_g123 90 127 291 tech-routers-rf 2113 6632 141 enzymes_g199 62 108 292 web-BerkStan 12305 19500 142 enzymes_g204 57 105 293 web-edu 3031 6474 143 enzymes_g209 57 101 294 web-EPA 4271 8909 144 enzymes_g215 48 104 295 web-google 1299 2773 145 enzymes_g224 54 105 296 web-indochina-2004 11358 47606 146 enzymes_g279 60 107 297 web-polblogs 643 2280 147 enzymes_g291 62 104 298 web-spam 4767 37375 148 enzymes_g292 60 100 299 web-webbase-2001 16062 25593 149 enzymes_g293 96 109 300 web-wiki-chameleon 2277 31421 150 enzymes_g295 123 139 301 web-wiki-crocodile 11631 170918 151 enzymes_g296 125 141
F EXPERIMENTAL DETAILS
F.1 EXPERIMENTAL SETTINGS
Software. We used PyTorch [1] for implementing the training and inference pipeline, and used DGL's implementation of HGT [2]. For MetaOD (Zhao et al., 2021), we used the implementation provided by the authors [3]. We used the Karate Club library (Rozemberczki et al., 2020) for the implementations of the following graph-level embedding (GLE) methods: Graph2Vec (Narayanan et al., 2017), GL2Vec (Chen & Koga, 2019), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). For GraphLoG (Xu et al., 2021), we used the authors' implementation [4]. We used open source libraries, such as NetworkX [5] and NumPy [6], for implementing meta-graph feature extractors.
Hyperparameters. We set the embedding size k to 32 for METAGL and other meta-learners that learn embeddings of models and graphs. For METAGL, we created the G-M network by connecting nodes to their top-30 similar nodes. As the embedding function f(·) in METAGL, we used HGT (Hu et al., 2020) with 2 layers and 4 heads per layer. HGT is included in the Deep Graph Library (DGL), which is licensed under the Apache License 2.0. For training, we used the Adam optimizer with a learning rate of 0.00075 and a weight decay of 0.0001. For GLE approaches, we used the default hyperparameter settings specified in the corresponding library and GitHub repository.
Link Prediction Model Training. Given a graph G, we first hold out 10% of the edges in graph G to be used for evaluation, and train GL models with the resulting subgraph for link prediction. The training of GL models was performed by sampling 20 negative edges per positive edge, computing the link score by applying a dot product between the two corresponding node embeddings, followed by a sigmoid function, and then optimizing a binary cross entropy loss for the positive and negative edge scores. For evaluation, we randomly sampled the same number of negative edges as the positive edges, and evaluated the predicted link scores in terms of mean average precision.
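A minimal sketch of this training procedure is given below; the toy edge sets and the free embedding table standing in for a GRL method's output are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy setup: `emb` stands in for node embeddings from one of the GRL methods in
# Table 4; edge index tensors are random placeholders for a real graph's edges.
num_nodes, dim, neg_per_pos = 100, 32, 20
emb = torch.nn.Parameter(0.1 * torch.randn(num_nodes, dim))
opt = torch.optim.Adam([emb], lr=0.01)

pos = torch.randint(0, num_nodes, (256, 2))                 # positive (observed) edges
neg = torch.randint(0, num_nodes, (256 * neg_per_pos, 2))   # 20 negatives per positive

for _ in range(10):
    opt.zero_grad()
    pos_score = (emb[pos[:, 0]] * emb[pos[:, 1]]).sum(-1)   # dot-product link scores
    neg_score = (emb[neg[:, 0]] * emb[neg[:, 1]]).sum(-1)
    # Sigmoid + binary cross entropy over positive/negative edge scores.
    loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos_score, neg_score]),
        torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)]))
    loss.backward()
    opt.step()
```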
F.2 EVALUATION OF MODEL SELECTION ACCURACY (SECTION 5.2)
In our evaluation involving a partially observed performance matrix, we extended baseline metalearners as follows so they can operate in the presence of missing entries in the performance matrix.
• Global Best-AvgPerf averaged observed performance entries alone, ignoring missing values. If an average performance cannot be computed for some model (which is the case when a model has no observed performance entries for any graph), we use the mean of the averaged performances of the other models in its place.
[1] https://pytorch.org/
[2] https://www.dgl.ai/
[3] https://github.com/yzhao062/MetaOD
[4] https://github.com/DeepGraphLearning/GraphLoG
[5] https://networkx.org/
[6] https://numpy.org/
• Global Best-AvgRank computed the model rankings for each graph in percentile, as the number of observed model performances may be different for different graphs, and averaged the rank percentiles for observed cases only, as in Global Best-AvgPerf.
• ISAC handled the sparse performance matrix in the same way as Global Best-AvgPerf, except that only a subset of graphs, which is similar to the test graph, is considered in ISAC.
• ARGOSMART (AS) computed the mean of observed performance entries of the 1NN graph, and used this quantity in place of missing values.
• ALORS factorized the sparse performance matrix using a missing value-aware non-negative matrix factorization algorithm.
• Supervised Sur. (S2), NCF, and MetaOD performed optimization by only considering observed performance values in the loss function, while skipping over missing entries. Early stopping based on the validation performance was also done with respect to the observed performances alone.
F.3 EVALUATION OF META-GRAPH FEATURES (SECTION 5.3)
Except for WaveletCharacteristic and GraphLoG, we applied graph-level embedding (GLE) approaches to all graphs in our testbed, and meta-learners were trained and evaluated using the representations of all graphs via 5-fold cross validation. Since WaveletCharacteristic and GraphLoG could not scale up to some of the largest graphs in the testbed (e.g., due to out-of-memory error), we excluded 9% and 27% largest graphs for GraphLoG and WaveletCharacteristic, respectively, and evaluated meta-learners using the resulting subset of graphs. Note that, in these cases, METAGL was also trained and evaluated using the same subset of graphs.
G ADDITIONAL DETAILS AND ANALYSIS OF METAGL
G.1 METAGL ALGORITHM AND META-GRAPH FEATURES
Algorithm 1 provides detailed steps of METAGL, for both offline meta-training (top) and online model selection (bottom). In METAGL, we extend the meta-graph features with their log-transformed values, as this helps with model selection. We use the notation m in Algorithm 1, as well as in the text, to refer to these meta-graph features used by METAGL.
G.2 META-LEARNING OBJECTIVE FOR SPARSE PERFORMANCE MATRIX
Given a sparse performance matrix P, meta-training of METAGL can be performed by modifying the top-1 probability (Equation (3)) and the loss function (Equation (4)), such that the missing entries in P are ignored as follows:
$$p_{\hat{\mathbf{P}}_i}^{\mathrm{top1}}(j) = \frac{I_{p_{ij}}(\pi(\hat{p}_{ij}))}{\sum_{k=1}^{m} I_{p_{ik}}(\pi(\hat{p}_{ik}))} = \frac{I_{p_{ij}}(\exp(\hat{p}_{ij}))}{\sum_{k=1}^{m} I_{p_{ik}}(\exp(\hat{p}_{ik}))}, \quad (7)$$
$$\mathcal{L}(\mathbf{P}, \hat{\mathbf{P}}) = -\sum_{i=1}^{n} \sum_{j=1}^{m} I_{p_{ij}}\!\left( p_{\mathbf{P}_i}^{\mathrm{top1}}(j) \log\left( p_{\hat{\mathbf{P}}_i}^{\mathrm{top1}}(j) \right) \right), \quad (8)$$
where $I_{p_{ij}}(\cdot)$ is defined as
$$I_{p_{ij}}(x) = \begin{cases} x & \text{if } p_{ij} \text{ exists in the observed performance matrix } \mathbf{P}, \\ 0 & \text{if } p_{ij} \text{ is missing in the observed performance matrix } \mathbf{P}. \end{cases} \quad (9)$$
Thus the supervision signal for each graph comes only from the model performances observed on it. If an entire row in P is empty, the loss terms for the corresponding graph are dropped from Equation (8).
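For concreteness, a small PyTorch sketch of this masked meta-training objective (Equations (7)-(9)) follows; the NaN encoding of missing entries and the function names are our own illustrative choices:

```python
import torch

def masked_top1_loss(P, P_hat):
    """Sketch of Equations (7)-(8): top-1 probability loss over observed entries only.

    P:     (n, m) observed performance matrix, with NaN marking missing entries.
    P_hat: (n, m) predicted performances.
    """
    mask = ~torch.isnan(P)                                 # the indicator I_{p_ij}
    P_obs = torch.where(mask, P, torch.zeros_like(P))

    # Masked softmax: exp(.) zeroed at missing entries, normalized per row.
    def masked_top1(X):
        e = torch.exp(X) * mask
        return e / e.sum(dim=1, keepdim=True).clamp_min(1e-12)

    p_true, p_pred = masked_top1(P_obs), masked_top1(P_hat)
    # Rows with no observed entries contribute nothing (their mask is all zero).
    loss_terms = p_true * torch.log(p_pred.clamp_min(1e-12)) * mask
    return -loss_terms.sum()
```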
G.3 G-M NETWORK
Figure 7 illustrates the G-M network (graph-model network) (Section 4.4), which is a multi-relational bipartite network between graph nodes and model nodes. In the G-M network, model and graph nodes are connected via five types of edges (e.g., P-m2m, P-g2m, M-g2g), which are shown as edges with distinct line styles and colors. Note that while Figure 7 shows only one edge per edge type, in the G-M network, each node is connected to its top-k similar nodes.
G.4 ATTENTIVE GRAPH NEURAL NETWORKS AND HETEROGENEOUS GRAPH TRANSFORMER
The embedding function f(·) in METAGL (Section 4.4) produces embeddings of models and graphs via weighted neighborhood aggregation over the multi-relational G-M network. Specifically, we define f(·) using the Heterogeneous Graph Transformer (HGT) (Hu et al., 2020), which is a relation-aware graph neural network (GNN) that performs attentive neighborhood aggregation over the G-M network.
Algorithm 1: METAGL: Offline Meta-Training (Top) and Online Model Selection (Bottom)
Input: Meta-train graph database G, model set M, embedding dimension k
Output: Meta-learner for model selection
/* (Offline) Meta-Learner Training (Sec. 4.1) */
1:  Train & evaluate models in M on graphs in G to get performance matrix P
2:  Extract meta-graph features m for each graph Gi in G (Sec. 4.3)
3:  Factorize P to obtain latent graph factors U and model factors V, i.e., P ≈ UVᵀ
4:  Learn an estimator φ(·) such that φ(m) = Ûi ≈ Ui
5:  Create meta-train graph G_train (Sec. 4.4)
6:  while not converged do
7:    for i = 1, ..., n do
8:      Get embeddings f(W[m; φ(m)]) of train graph Gi on G_train
9:      for j = 1, ..., m do
10:       Get embeddings f(Vj) of each model Mj on G_train
11:       Estimate p̂ij = ⟨f(W[m; φ(m)]), f(Vj)⟩ (Eqn. 2)
12:     end
13:   end
14:   Compute meta-training loss L(P, P̂) (Eqn. 4) and optimize parameters
15: end

Input: new graph G_test
Output: selected model M* for G_test
/* (Online) Model Selection (Sec. 4.2) */
16: Extract meta-graph features m_test = ψ(G_test)
17: Estimate latent factor Û_test = φ(m_test) for test graph G_test
18: Create the test G-M network G_test by extending G_train with new edges between the test graph node and existing nodes in G_train (Sec. 4.4)
19: Get embeddings f(W[m_test; Û_test]) of the test graph on G_test
20: Get embeddings f(Vj) of each model Mj on G_test
21: Return the best model M* ← argmax_{Mj ∈ M} ⟨f(W[m_test; Û_test]), f(Vj)⟩
Let $z_t^{\ell}$ denote node t's embedding produced by the ℓ-th HGT layer, which becomes the input of the (ℓ+1)-th layer. Given L total layers, the final embedding $h_t$ of node t is the output of the last layer, i.e., $z_t^L$. In general, the node embeddings $z_t^{\ell}$ produced by the ℓ-th layer of an attention-based GNN, such as HGT, can be expressed as:
$$z_t^{\ell} = \underset{\forall s \in N(t),\, \forall e \in E(s,t)}{\mathrm{Aggregate}} \left( \mathrm{Attention}(s, t) \cdot \mathrm{Message}(s) \right) \quad (10)$$
where s and t are source and target nodes, respectively; N(t) denotes all the source nodes of node t; and E(s, t) denotes all edges from node s to t. There are three basic operators: Attention, which assigns different weights to neighbors based on the estimated importance of node s with respect to target node t; Message, which extracts the message vector from the source node s; and Aggregate, which aggregates the neighborhood messages by the attention weight.
HGT effectively processes multi-relational graphs, such as the proposed G-M network, by designing all of the above three operators to be aware of node and edge types, e.g., by employing distinct sets of projection weights for each type of node and edge, and by utilizing node- and edge-type-dependent attention mechanisms. We refer the reader to (Hu et al., 2020) for the details of how HGT defines the above three operators. In summary, METAGL computes the embedding function $f(x_t)$ by providing node t's input features $x_t$ as the initial embedding (i.e., $z_t^0$) to HGT, and returning $z_t^L$, the output from the last layer, which is computed over the G-M network via relation-aware attentive neighborhood aggregation.
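To make Equation (10) concrete, below is a minimal single-head PyTorch sketch of attentive neighborhood aggregation; it deliberately omits HGT's node/edge-type-specific parameters and multi-head design (see Hu et al., 2020), so all names are illustrative:

```python
import torch
import torch.nn as nn

class SimpleAttentiveLayer(nn.Module):
    """Minimal single-head illustration of Equation (10)."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)    # target-side query projection
        self.k = nn.Linear(dim, dim)    # source-side key projection
        self.msg = nn.Linear(dim, dim)  # Message(.) extraction

    def forward(self, z, edge_index):
        # edge_index: (2, E) LongTensor of directed edges s -> t.
        src, dst = edge_index
        # Attention(s, t): scaled dot product between target query and source key.
        att = (self.q(z)[dst] * self.k(z)[src]).sum(-1) / z.size(-1) ** 0.5
        att = torch.exp(att - att.max())            # unnormalized weights
        # Normalize per target node, then Aggregate the weighted Messages.
        denom = torch.zeros(z.size(0)).index_add_(0, dst, att).clamp_min(1e-12)
        weighted = self.msg(z)[src] * (att / denom[dst]).unsqueeze(-1)
        # Nodes without incoming edges keep a zero aggregate.
        return torch.zeros_like(z).index_add_(0, dst, weighted)
```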
G.5 TIME COMPLEXITY ANALYSIS
We now state the time complexity of our approach for inferring the best model given a new unseen graph G′ = (V′, E′). Let G = (V, E) be the G-M network, which is comprised of model nodes and graph nodes, and is induced by c-NN (nearest neighbor) search (Section 4.4 and Appendix G.3). Let k denote the embedding size, and h the number of attention heads in HGT (Appendix G.4). The time complexity of METAGL is
$$O(q|E'|\Delta + |\mathcal{V}|ck^2/h), \quad (11)$$
where q is the number of meta-graph feature extractors, |E′| is the number of edges in the new unseen graph G′, and ∆ is the maximum sampled degree used in motif counting (see below). Note that both q and ∆ are small and thus negligible. Hence, METAGL is fast and efficient.
Meta-Graph Feature Extraction: The first term of the above time complexity includes the time required to estimate the frequency of all network motifs with {2, 3, 4} nodes, which is O(|E′|∆) in the worst case, where ∆ is a small constant representing the maximum sampled degree, which can be set by the user. See Ahmed et al. (2016) for further details. The other structural meta-feature extractors, such as PageRank, all take at most O(|E′|) time. Furthermore, our approach is flexible and supports any set of meta-graph feature extractors. Thus, we can achieve a slightly better time complexity by restricting the set of meta-graph feature extractors to those that can be computed in time linear in the number of edges of an arbitrary graph. In this case, the ∆ term is dropped and we have simply O(q|E′| + |V|ck²/h). Also, note that the feature extractors are independent of each other, and thus can be run in parallel.
Embedding Models and Graphs: To augment the G-M network G given a new graph G′, we find a fixed number of nearest neighbors for G′, which takes O(|V|k) time. Then we embed models and graphs by applying HGT over the G-M network G = (V, E). Assuming an HGT with h attention heads, the time to apply HGT over G is O(|V|k² + |E|k²/h). More specifically, the time taken by the Attention(·) and Message(·) functions is O(|V|k² + |E|k²/h), where O(|V|k²) is for feature transformation by the h heads over all nodes, and O(|E|k²/h) is for message transformation and attention computation over each edge. Similarly, the Aggregate(·) step takes O(|V|k² + |E|k) time. Assuming k²/h > k, the time for Aggregate(·) can be absorbed into the time for the other steps. Further, given that the G-M network is induced by c-NN search, we have O(|E|) = O(c|V|), and thus the time complexity for embedding models and graphs is O(|V|ck²/h).
H ADDITIONAL RELATED WORK
H.1 MODEL SELECTION IN MACHINE LEARNING
In this section, we provide a further review of model selection in machine learning, which we group into two categories.
Evaluation-Based Model Selection: A majority of model selection methods belong to this category. Representative techniques used by these methods include grid search (Liashchynskyi & Liashchynskyi, 2019), random search (Bergstra & Bengio, 2012), early stopping-based (Golovin et al., 2017) and bandit-based (Li et al., 2017b) approaches, and Bayesian optimization (BO) (Snoek et al., 2012; Wu et al., 2019b; Falkner et al., 2018). Among them, BO methods are more efficient than grid or random search, requiring fewer evaluations of hyperparameter configurations (HCs), as they determine which HC to try next in a guided manner using prior experience from previous trials. Since these methods perform model training or evaluation multiple times using different HCs, they are much less efficient than the following group of methods.
Evaluation-Free Model Selection: Methods in this category do not require model evaluation for model selection. A simple approach (Abdulrahman et al., 2018) identifies the best model by considering the models' rankings observed on prior datasets. Instead of finding the globally best model, ISAC (Kadioglu et al., 2010) and AS (Nikolić et al., 2013) select a model that performed well on similar datasets, where the dataset similarity is modeled in the meta-feature space via clustering (Kadioglu et al., 2010) or k-nearest neighbor search (Nikolić et al., 2013). A different group of methods performs optimization-based model selection, where the model performance is estimated by modeling the relation between meta-features and model performances. Supervised Surrogates (Xu et al., 2012) learns a surrogate model that maps meta-features to model performance. Recently, MetaOD (Zhao et al., 2021) outperformed all of these methods in selecting outlier detection algorithms. As a method in this category, the proposed METAGL builds upon MetaOD and extends it for effective and automatic GL model selection. Most importantly, METAGL selects a graph learning model (e.g., a link predictor) for the given graph, while MetaOD selects an outlier detection (OD) model for the given dataset (n-dimensional input features). To this end, METAGL designs meta-features to capture the characteristics of graphs, while MetaOD designs meta-features specialized for OD tasks. Also, they adopt different meta-training objectives: METAGL adapts the top-1 probability for meta-training, whereas MetaOD uses an NDCG-based objective. Furthermore, METAGL learns the embeddings of models and graphs by applying a heterogeneous GNN over the G-M network, which allows flexible modeling of the relations between different models and graphs. By contrast, in MetaOD, the embeddings of models and datasets are optimized separately, and the relations between models and datasets are modeled rather indirectly via reconstructing the performance matrix. Note that all of these earlier methods, except the first simple approach, rely on meta-features, and they focus on non-graph datasets. By using the proposed meta-graph features, they could be applied to the graph learning model selection task.
H.2 COMPARISON WITH MODEL-AGNOSTIC META-LEARNING (MAML) (FINN ET AL., 2017)
MAML employs meta-learning to train a model's initial parameters such that the model can perform well on a new task after the parameters have been updated via a few gradient steps using data from the new task. In other words, given a model, MAML initializes that specific model's parameters via meta-learning over multiple observed tasks, such that the meta-trained model can quickly adapt to a new task after learning from a small amount of new data (i.e., few-shot learning). On the other hand, METAGL employs meta-learning to carry over prior knowledge of multiple different models' performance on different graphs for evaluation-free selection of graph learning algorithms.
Since MAML meta-trains a specific model for fast adaptation to a new dataset, it is not designed for selecting a model from a model set consisting of a wide variety of learning algorithms. Further, MAML fine-tunes a meta-trained model in a few-shot learning setup, whereas in our problem setup, no training or evaluation is performed given a new graph dataset. For these reasons, MAML is not applicable to the proposed evaluation-free GL model selection problem (Problem 1).
I ADDITIONAL RESULTS
I.1 EFFECTIVENESS OF META-GRAPH FEATURES
Figure 8 shows how accurately meta-learners can perform model selection when they use the proposed meta-graph features (Section 4.3) vs. six state-of-the-art graph-level embedding (GLE) techniques, i.e., GL2Vec (Chen & Koga, 2019), Graph2Vec (Narayanan et al., 2017), GraphLoG (Xu et al., 2021), WaveletCharacteristic (Wang et al., 2021), SF (de Lara & Pineau, 2018), and LDP (Cai & Wang, 2019). As discussed in Section 5.3, all meta-learners achieve a higher model selection accuracy nearly consistently by using METAGL's meta-graph features than when they use these GLE techniques, and METAGL outperforms all meta-learners, regardless of which features are used.
I.2 MODEL SELECTION TIME
Figure 9: Distribution of the ratio of the time for METAGL to make a prediction to the time to create meta-graph features (in percentage). Red and green lines denote the median and mean, respectively.
Table 7 shows results comparing the runtime (in seconds) of naive model selection with the runtime of METAGL. Note that naive model selection requires training and evaluating each method in the model set, while in METAGL, the runtime involves only the time to generate meta-graph features (penultimate row) and predict the best model via a forward pass.

1. What is the focus and contribution of the paper regarding graph learning models?
2. What are the strengths and weaknesses of the proposed framework for evaluation-free model selection?
3. Do you have any concerns or questions about the experimental setup and its limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper proposes a novel framework to conduct evaluation-free model selection for graph learning models, i.e., without having to train/evaluate any model on the new graph. The framework learns latent embeddings for observed models and the corresponding performance on observed graphs. Moreover, meta-graph features are computed for each graph based on specific structures so that the graph similarity can be measured. The extensive experiments demonstrate the effectiveness of the proposed framework.
Strengths And Weaknesses
Strengths:
The framework solves the novel problem of evaluation-free model selection in graph learning, which is crucial and challenging in real-world scenarios.
The experimental results are comprehensive and adequate. The authors also provide additional results regarding the effectiveness of each module and the efficiency of the framework.
The paper is well organized and easy to follow.
Weaknesses:
The structural information in each graph seems to be incorporated only via the meta-graph features. However, this can also be achieved by a specific GNN with a readout function. That being said, it is hard to believe that the improvement of the proposed framework results from the incorporation of structural information.
The input to the function f() for computing the latent embedding of a graph is not well justified. If φ(m) is solely dependent on m, why is the input of f() a combination of m and φ(m), instead of only m or φ(m)? In this way, the information of m can also be preserved. On the other hand, combining both seems redundant.
The experimental part only includes the link prediction task. Although this task is important in graph mining, the proposed method should be capable of handling various graph mining tasks (considering that the target scenario is general applications). The lack of evaluation on other graph mining tasks makes the paper less convincing.
The paper lacks sufficient theoretical analysis to support the claims of efficacy and efficiency. The empirical results are plentiful, but further theoretical analysis would be beneficial.
Clarity, Quality, Novelty And Reproducibility
For the Clarity:
There are several typos and grammar errors in this paper. For example, “smarter, more efficient” should be “smarter and more efficient”. “is fully automatic it does not require” lacks the word “since”. “number of wedges” should be “number of edges”.
The paragraph starting with “Benchmark Data/Code:” appears twice in the main paper, which is redundant.
For the Quality and Novelty: The quality seems solid, and the studied problem is novel. The specific modules in the framework are less novel, but the framework is well designed.
For the Reproducibility: Following the above, the authors claim that they release the code and datasets, but these are not found in the paper. No links are provided.
ICLR | Title
Bidirectional Propagation for Cross-Modal 3D Object Detection
Abstract
Recent works have revealed the superiority of feature-level fusion for cross-modal 3D object detection, where fine-grained feature propagation from 2D image pixels to 3D LiDAR points has been widely adopted for performance improvement. Still, the potential of heterogeneous feature propagation between 2D and 3D domains has not been fully explored. In this paper, in contrast to existing pixelto-point feature propagation, we investigate an opposite point-to-pixel direction, allowing point-wise features to flow inversely into the 2D image branch. Thus, when jointly optimizing the 2D and 3D streams, the gradients back-propagated from the 2D image branch can boost the representation ability of the 3D backbone network working on LiDAR point clouds. Then, combining pixel-to-point and point-to-pixel information flow mechanisms, we construct an bidirectional feature propagation framework, dubbed BiProDet. In addition to the architectural design, we also propose normalized local coordinates map estimation, a new 2D auxiliary task for the training of the 2D image branch, which facilitates learning local spatial-aware features from the image modality and implicitly enhances the overall 3D detection performance. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we rank 1 on the highly competitive KITTI benchmark on the cyclist class by the time of submission. The source code is available at https://github.com/Eaphan/BiProDet.
1 INTRODUCTION
In recent years, 3D object detection has received increasing academic and industrial attention driven by the thriving of autonomous driving scenarios, where the two dominating data modalities 3D point clouds and 2D RGB images demonstrate complementary properties in many aspects. 3D point clouds scanned by LiDAR encode accurate structure and depth cues in the geometric domain but typically suffer from severe sparsity, incompleteness, and non-uniformity. The 2D RGB images captured by optical cameras convey rich semantic features in the visual domain and facilitate the application of well-developed image learning architectures (He et al., 2016) but pose challenges in reliably modeling spatial structures. More importantly, due to the essential difficulties in processing irregular and unordered point clouds, the actual learning ability of neural network backbones working on 3D LiDAR data is still relatively insufficient, thus limiting the performance of LiDAR-based detection frameworks. Practically, despite the efforts made by previous studies (Wang et al., 2019; Vora et al., 2020) devoted to cross-modal learning, how to reasonably mitigate the big modality gap between images and point clouds and fuse the corresponding heterogeneous feature representations still remains an open and non-trivial problem.
Depending on specific fusion strategies between image and point cloud modalities, existing crossmodal 3D object detection pipelines can be categorized into: 1) proposal-level; 2) result-level; 3) pseudo-LiDAR-based; and 4) point-level processing paradigms, as illustrated in Fig. 1. Concretely, proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018) perform feature fusion for both modalities over each of the proposals/anchors, while result-level fusion methods (Pang et al., 2020) directly manipulate the output 2D and 3D object candidates by leveraging geometric and semantic consistencies. However, the former two fusion paradigms do not make full use of fine-grained correspondence between 3D LiDAR points and 2D images, and thus show inferior detection performance.
For another alternative fusion paradigm based on pseudo-LiDAR representations (You et al., 2020), auxiliary learning modules specialized for stereo depth estimation are integrated to synthesize dense pseudo-LiDAR point clouds as additional complements to the original real-scanned sparse LiDAR data. Such a strategy indeed brings performance boosts but it becomes more technically cumbersome and demanding, since ground-truth depth maps are required as supervision signals. Moreover, as investigated previously (Qian et al., 2020), it is non-trivial to harmonize the joint end-to-end optimization of the object detector and the depth estimator. To make use of point-pixel correspondences between images and point clouds, there has been another family of point-level fusion approaches (Huang et al., 2020; Vora et al., 2020) characterized by using projection from the 3D space of point cloud to 2D image planes. That is, for each 3D point, its geometric feature extracted from the 3D LiDAR learning branch is fused with the semantic feature (retrieved from the 2D image learning branch) of the corresponding 2D pixel. Benefiting from more fine-grained feature manipulation, point-level fusion typically demonstrates more significant performance improvement and thus is considered as a more promising cross-modal fusion paradigm.
Still, based on our observations, despite the superiority of point-level fusion, previous studies have not fully exploited its great potential in terms of specific architectural design and technical workflow. Fundamentally, the performance boost achieved by previous methods can be mainly credited to the fact that additional discriminative 2D semantic information is incorporated into the 3D detection pipeline. However, there is no evidence that the actual representation capability of the 3D LiDAR backbone network, which is also one of the most critical influencing factors, is strengthened. Besides, it is also noticed that the choice of 2D auxiliary tasks used for training the image learning branch also makes a difference, which essentially determines the consequent 2D feature representations to be propagated. Previous methods directly resort to classic semantic understanding tasks (e.g., image segmentation) for the learning of visual patterns, which may not be the optimal task that harmonizes with the 3D LiDAR learning branch for geometric modeling.
Given the above issues, in this paper we propose a bidirectional feature propagation architecture, namely BiProDet, for cross-modal 3D object detection. Foremost, our core component lies in a novel point-to-pixel feature propagation mechanism allowing 3D geometric features of LiDAR point clouds to flow into the 2D image learning branch. Our BiProDet differs from previous cross-modal 3D detection pipelines that solely pay attention to unidirectional 2D-to-3D (pixel-to-point) feature propagation while neglecting the potential effects of the opposite 3D-to-2D (point-to-pixel) direction. Functionally, different from existing pixel-to-point propagation, the proposed point-to-pixel propagation turns out to be capable of enhancing the representation ability of 3D LiDAR backbone networks. In fact, it is worth emphasizing that this is an interesting phenomenon, and we reason that this is because the gradients back-propagated from the auxiliary 2D image branch, built upon mature 2D convolutional neural networks (CNNs) with impressive learning ability, can effectively strengthen the 3D LiDAR branch. Taking a step forward, we integrate the pixel-to-point and point-to-pixel propagation directions into a unified interactive bidirectional feature propagation framework such that the 2D and 3D learning branches are inter-strengthened, eventually contributing to more significant performance gains. Note that previous work also developed a bidirectional model for joint optical flow and scene flow estimation (Liu et al., 2022). Still, we design a concise yet effective bidirectional propagation strategy without bells and whistles. Unlike Liu et al. (2022), who detach the gradients in the bidirectional fusion modules, we demonstrate that gradients back-propagated from the auxiliary 2D image branch can enhance the 3D backbone. In addition, we also customize a new 2D auxiliary task called normalized local coordinate (NLC) map estimation, which facilitates learning local spatial-aware feature representations from the 2D image. Such a task also
implicitly helps to capture more discriminative geometric information from the main 3D LiDAR point cloud learning branch, thus improving the overall detection performance, especially for hard cases of distant and/or highly-occluded objects.
We conduct extensive experiments on the prevailing KITTI (Geiger et al., 2012) benchmark dataset. Compared with state-of-the-art single-modal and cross-modal 3D object detectors, our framework achieves remarkable performance improvement. Notably, our method ranks 1st on the cyclist class of the KITTI 3D detection benchmark (www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d). Conclusively, our main contributions are three-fold:
• In contrast to existing pixel-to-point cross-modal fusion strategies, we explore an opposite point-to-pixel feature propagation paradigm, which turns out to be capable of enhancing the representation capability of 3D backbone networks working on LiDAR point clouds.
• Combining the existing pixel-to-point and the newly proposed point-to-pixel feature propagation schemes, we further construct a powerful bidirectional feature propagation framework for cross-modal 3D object detection.
• We reveal the importance of the specific 2D auxiliary tasks used for training the 2D image learning branch, which has not received enough attention in previous works, and introduce 2D NLC map estimation to facilitate learning spatial-aware features and implicitly boost the overall 3D detection performance.
2 RELATED WORK
LiDAR-based 3D Object Detection could be roughly divided into two categories. 1) Voxel-based 3D detectors typically voxelize the point clouds into grid structures of a fixed size (Zhou & Tuzel, 2018). Yan et al. (2018) introduced a more efficient sparse convolution to accelerate training and inference. He et al. (2020a) employed auxiliary tasks, including center estimation and foreground segmentation, to guide the network to learn the intra-object relationship. 2) Point-based 3D detectors consume the raw 3D point clouds directly and generate predictions based on (downsampled) points. Shi et al. (2019) applied a point-based feature extractor and generated high-quality proposals on foreground points. 3DSSD (Yang et al., 2020) adopted a new sampling strategy named F-FPS as a supplement to D-FPS to preserve enough interior points of foreground instances. It also built a one-stage anchor-free 3D object detector based on feasible representative points. Zhang et al. (2022c) and Chen et al. (2022) introduced semantic-aware down-sampling strategies to preserve foreground points as much as possible. Shi & Rajkumar (2020b) proposed to encode the point cloud by a fixed-radius near-neighbor graph. Besides, PV-RCNN (Shi et al., 2020a) adopted a voxel-based RPN and leveraged point-wise features extracted from the voxel set abstraction module to refine the proposals.
Camera-based 3D Object Detection. Early works (Mousavian et al., 2017; Manhardt et al., 2019) designed monocular 3D detectors by referring to 2D detectors (Ren et al., 2015; Tian et al., 2019) and utilizing 2D-3D geometric constraints. Another way is to convert images to pseudo-LiDAR representations via monocular depth estimation and then resort to LiDAR-based methods (Wang et al., 2019). Chen et al. (2021) built dense 2D-3D correspondence by predicting an object coordinate map and adopted uncertainty-driven PnP to estimate object location. Recently, multi-view detectors like BEVStereo (Li et al., 2022) have also achieved promising performance, getting closer to LiDAR-based methods.
Cross-Modal 3D Object Detection can be roughly divided into four categories: proposal-level, result-level, point-level, and pseudo-LiDAR-based approaches. Proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018; Liang et al., 2019) adopted a two-stage framework to fuse image features and point cloud features corresponding to the same anchor or proposal. For result-level fusion, Pang et al. (2020) exploited the geometric and semantic consistencies of 2D detection results and 3D detection results to improve single-modal detectors. Qi et al. (2018) generated 2D region proposals using a CNN and lifted them to frustum proposals, where a PointNet-based network is utilized to estimate the 3D box. Obviously, both proposal-level and result-level fusion strategies are coarse and do not make full use of the correspondence between LiDAR points and images. Pseudo-LiDAR-based methods (You et al., 2020; Wu et al., 2022; Liang et al., 2019) employed depth estimation/completion and converted the estimated depth map to pseudo-LiDAR points to complement raw sparse point clouds. However, such methods require extra depth map annotations that are costly to obtain. Regarding point-level fusion methods, Liang et al. (2018) proposed the continuous fusion layer to retrieve the corresponding image features of the nearest 3D points for each grid cell in the BEV feature map. Especially, Vora
et al. (2020) and Huang et al. (2020) retrieved semantic scores or features by projecting points to the image plane. Liu et al. (2021) proposed EPNet++, where an LI-Fusion layer is used to enable more interaction between the two modalities to obtain more comprehensive features. Wang et al. (2021) decorated point clouds with intermediate CNN features extracted from 2D detection models. However, neither semantic segmentation nor 2D detection tasks can enforce the network to learn 3D-spatial-aware image features. By contrast, we introduce NLC map estimation as an auxiliary task to promote the image branch to learn local spatial information to supplement the sparse point cloud. Besides, Bai et al. (2022) proposed TransFusion, a transformer-based detector that performs fine-grained fusion with attention mechanisms at the BEV level. Philion & Fidler (2020) and Liang et al. (2022) estimated the depth of multi-view images and transformed the camera feature maps into the BEV space before fusing them with LiDAR BEV features.
3 PROPOSED METHOD
Overview. Architecturally, the overall processing pipeline of cross-modal 3D object detection consists of an image branch and a point cloud branch, learning feature representations from 2D RGB images and 3D LiDAR point clouds, respectively. Existing methods implement cross-modal learning only by incorporating pixel-wise semantic cues from the 2D image branch into the 3D point cloud branch for feature fusion. Here, motivated by the observation that point-based networks typically show insufficient learning ability due to the essential difficulties in processing the irregular and unordered point cloud data modality (Zhang et al., 2022a), we additionally seek to boost the expressive power of the point cloud backbone network with the assistance of the 2D image branch, whose potential is ignored in previous studies. It is also worth reminding that our ultimate goal is not to design a more powerful point cloud backbone network structure, but to elegantly make full use of the available resources in this cross-modal application scenario.
As shown in Fig. 2, in addition to the pixel-to-point feature propagation explored by mainstream point-level fusion methods (Liang et al., 2018; Huang et al., 2020), we propose point-to-pixel propagation allowing features to flow inversely from the point cloud branch to the image branch, based on which we achieve bidirectional feature propagation implemented in a multi-stage fashion. In this way, not only can the image features propagated via the pixel-to-point module provide additional semantic information, but also the gradients backpropagated from the training objectives of the image branch can boost the representation ability of the point cloud backbone. Besides, we also employ auxiliary tasks to train the pipeline, aiming to enforce the network to learn rich semantic and spatial representations. In particular, we propose NLC map estimation to promote the image branch to learn spatial-aware features, providing a necessary complement to the sparse spatial representation
extracted from point clouds, especially for distant or highly occluded cases. We refer the readers to Appendix A.2 for detailed network structures of the image and point cloud backbones.
3.1 BIDIRECTIONAL FEATURE PROPAGATION
As illustrated in Fig. 3, the proposed bidirectional feature propagation consists of a pixel-to-point module and a point-to-pixel module, which bridge the two learning branches that operate on RGB images and LiDAR point clouds. Functionally, the point-to-pixel module applies grid-level interpolation on point-wise features to produce a 2D feature map, while the pixel-to-point module retrieves the corresponding 2D image features by projecting 3D points onto the 2D image plane.
In general, we partition the overall 2D and 3D branches into the same number of stages, such that we can perform bidirectional feature propagation at each stage between the corresponding layers of the image and point cloud learning networks. Without loss of generality, below we only focus on a certain stage, where we can acquire a 2D feature map $F \in \mathbb{R}^{C_{2D} \times H' \times W'}$ with $C_{2D}$ channels and dimensions $H' \times W'$ from the image branch and a set of $C_{3D}$-dimensional embedding vectors $G = \{ g_i \in \mathbb{R}^{C_{3D}} \}_{i=1}^{N'}$ of the corresponding $N'$ 3D points.
Point-to-Pixel Module. We design this module to propagate geometric information from the 3D point cloud branch into the 2D image branch. Formally, we expect to construct a 2D feature map $F^{P2I} \in \mathbb{R}^{C_{3D} \times H' \times W'}$ by querying from the acquired point-wise embeddings G. More specifically, for each pixel position (r, c) over the targeted 2D grid of $F^{P2I}$ of dimensions $H' \times W'$, we collect the corresponding geometric features of points that are projected inside the range of this pixel and then perform feature aggregation by avg-pooling AVG(·):
$$F^{P2I}_{r,c} = \mathrm{AVG}(\{g_j \mid j = 1, \ldots, n\}), \quad (1)$$
where n counts the number of points that are projected inside the range of the pixel located at the r-th row and the c-th column. Particularly, for empty pixel positions with no projected points inside, we simply define the feature value as 0.
After that, instead of directly feeding the "interpolated" 2D feature map $F^{P2I}$ into the subsequent 2D convolutional stage, we introduce a few additional refinement procedures for feature smoothness as described below:
$$F^{fuse} = \mathrm{CONV}(\mathrm{CAT}[\mathrm{CONV}(F^{P2I}), F]), \quad (2)$$
where CAT[·, ·] and CONV(·) stand for feature channel concatenation and the convolution operation, respectively, and the resulting $F^{fuse}$ is further fed into the subsequent layers of the 2D image branch.
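As an illustration of Equation (1), the following is a minimal PyTorch sketch of the grid-level avg-pooling in the point-to-pixel module; the function signature and the assumption that integer projected pixel indices are precomputed are our own:

```python
import torch

def point_to_pixel(g, uv, H, W):
    """Sketch of Equation (1): avg-pool point features onto a 2D grid.

    g:  (N, C3D) point-wise features from the point cloud branch.
    uv: (N, 2) integer pixel coordinates (col, row) of the projected points.
    """
    C = g.size(1)
    flat = uv[:, 1].long() * W + uv[:, 0].long()   # flat pixel index per point
    feat_sum = torch.zeros(H * W, C).index_add_(0, flat, g)
    count = torch.zeros(H * W).index_add_(0, flat, torch.ones(g.size(0)))
    # Empty pixels keep zero features, as described above.
    f_p2i = feat_sum / count.clamp_min(1.0).unsqueeze(-1)
    return f_p2i.t().reshape(C, H, W)
```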
Pixel-to-Point Module. From an opposite direction, this module propagates semantic information from the 2D image branch into the 3D point cloud branch. We start by projecting the corresponding $N'$ 3D points onto the 2D image plane, obtaining their planar coordinates denoted as $X = \{x_i \in \mathbb{R}^2\}_{i=1}^{N'}$. Considering that these 2D coordinates are continuously and irregularly distributed over the projection plane, we apply bilinear interpolation to compute exact image features at the projected positions, deducing a set of point-wise embeddings $F^{I2P} = \{F(x_i) \in \mathbb{R}^{C_{2D}} \mid i = 1, \ldots, N'\}$. Similar to our practice in the point-to-pixel module, the interpolated 3D point-wise features $F^{I2P}$ need to be refined by the following procedures:
$$G^{fuse} = \mathrm{MLP}\left( \mathrm{CAT}\left[ \mathrm{MLP}\left( F^{I2P}; \theta_1 \right), G \right]; \theta_2 \right), \quad (3)$$
where MLP(·; θ) denotes shared multi-layer perceptrons parameterized by learnable weights θ, and the resulting $G^{fuse}$ further passes through the subsequent layers of the 3D point cloud branch.
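Similarly, a minimal PyTorch sketch of the bilinear sampling underlying the pixel-to-point module is given below; the coordinate convention and function names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def pixel_to_point(feat_2d, uv, H, W):
    """Sketch of the bilinear interpolation that produces F^{I2P}.

    feat_2d: (C2D, H, W) image feature map of the current stage.
    uv:      (N, 2) continuous projected coordinates (col, row) of the N' points.
    """
    uv = uv.float()
    # grid_sample expects coordinates normalized to [-1, 1].
    gx = 2.0 * uv[:, 0] / (W - 1) - 1.0
    gy = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).view(1, 1, -1, 2)   # (1, 1, N, 2)
    sampled = F.grid_sample(feat_2d.unsqueeze(0), grid,
                            mode="bilinear", align_corners=True)
    return sampled.squeeze(0).squeeze(1).t()                 # (N, C2D)
```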
3.2 AUXILIARY TASKS FOR TRAINING
The semantic and spatial-aware information is essential for determining the category and boundary of objects. To promote the network to uncover such information from the input data, we also leverage auxiliary tasks when training the whole pipeline. For the point cloud branch, following SA-SSD (He et al., 2020b), we introduce 3D semantic segmentation and center estimation to learn structure-aware features. For the image branch, we particularly propose NLC map estimation, in addition to 2D semantic segmentation.
Normalized Local Coordinate (NLC) System. We define the NLC system as taking the center of an object as the origin, aligning the x-axis towards the head direction of its ground-truth (GT) bounding box, and then normalizing the local coordinates with respect to the size of the GT bounding box. Fig. 4 (a) shows an example of the NLC system for a typical car. With the geometric relationship between the input RGB image and point cloud, we can project NLCs to the 2D image plane to construct a 2D NLC map with three channels corresponding to the three spatial dimensions, as illustrated in Fig. 4 (b). We refer the reader to Appendix A.1 for the detailed process.
NLC Map Estimation. Intuitively, vision-based semantic information is useful for the network to distinguish the foreground from the background. However, the performance bottleneck of existing detectors lies in the limited localization accuracy in distant or highly occluded cases, where the spatial structure is incomplete due to the sparsity of point clouds. To this end, we further take the NLC map as supervision to learn the relative position of each pixel inside the object from the image. We expect such an auxiliary task to drive the image branch to learn local spatial-aware features, which serve as a complement to the sparse spatial representation extracted from point clouds. Besides, this task can also help augment the representation ability of the point cloud branch.
Remark. The proposed NLC map estimation shares an identical objective with pseudo-LiDAR-based methods, i.e., enhancing the spatial representation limited by the sparsity of point clouds. However, compared with learning a pseudo-LiDAR representation via depth completion, our NLC map estimation has the following advantages: 1) the local NLC is easier to learn than the global depth owing to its scale-invariant property at different distances; 2) our NLC map estimation can be naturally incorporated and trained with the detection pipeline end-to-end, while it is non-trivial to optimize the depth estimator and the LiDAR-based detector simultaneously in pseudo-LiDAR-based methods, though it can be implemented in a somewhat complex way (Qian et al., 2020); and 3) pseudo-LiDAR representations require ground-truth dense depth images as supervision, which may not be available in reality, whereas our method does not require such information.
Semantic Segmentation. Leveraging the semantics of 3D point clouds has demonstrated effectiveness in point-based detectors, owing to the explicit preservation of foreground points during downsampling (Zhang et al., 2022c; Chen et al., 2022). Therefore, we further introduce auxiliary semantic segmentation tasks not only in the point cloud branch but also in the image branch, to exploit richer semantic information. Needless to say, the additional semantic features extracted from images facilitate the network to distinguish true-positive candidate boxes from false-positive ones.
3.3 LOSS FUNCTION
The NLC map estimation task is optimized with the loss function defined as
$$\mathcal{L}_{NLC} = \frac{1}{N_{pos}} \sum_{i}^{N} \left\| p_i^{NLC} - \hat{p}_i^{NLC} \right\|_H \cdot \mathbb{1}_{p_i}, \quad (4)$$
where $p_i$ is the i-th LiDAR point, $N_{pos}$ is the number of foreground LiDAR points, $\|\cdot\|_H$ is the Huber loss, $\mathbb{1}_{p_i}$ indicates that the loss is computed only over foreground points, and $p_i^{NLC}$ and $\hat{p}_i^{NLC}$ are the ground-truth and predicted NLCs at the corresponding pixel for foreground points.
We use the standard cross-entropy loss to optimize both 2D and 3D semantic segmentation, denoted as $\mathcal{L}_{sem}^{2D}$ and $\mathcal{L}_{sem}^{3D}$, respectively. The loss of center estimation is computed as
$$\mathcal{L}_{ctr} = \frac{1}{N_{pos}} \sum_{i}^{N} \left\| \Delta p_i - \Delta \hat{p}_i \right\|_H \cdot \mathbb{1}_{p_i}, \quad (5)$$
where $\Delta p_i$ is the target offset from point $p_i$ to the corresponding object center, and $\Delta \hat{p}_i$ denotes the output of the center estimation head.
Besides the auxiliary tasks, we follow (Chen et al., 2022) to define the loss of the RPN stage as
$$\mathcal{L}_{rpn} = \mathcal{L}_c + \mathcal{L}_r + \mathcal{L}_s + \mathcal{L}_d, \quad (6)$$
where $\mathcal{L}_c$ is the classification loss, $\mathcal{L}_r$ the regression loss, $\mathcal{L}_s$ the shifting loss between the predicted shifts and the residuals from the sampled points to their corresponding object centers in the candidate generation layer (Yang et al., 2020), and $\mathcal{L}_d$ the point segmentation loss in the SA layers that enables semantic-guided sampling. In addition, we also adopt the commonly used proposal refinement loss $\mathcal{L}_{rcnn}$ as defined in (Shi et al., 2020a).
In all, the loss for optimizing the overall pipeline in an end-to-end manner is written as
$$\mathcal{L}_{total} = \mathcal{L}_{rpn} + \mathcal{L}_{rcnn} + \lambda_1 \mathcal{L}_{NLC} + \lambda_2 \mathcal{L}_{sem}^{2D} + \lambda_3 \mathcal{L}_{sem}^{3D} + \lambda_4 \mathcal{L}_{ctr}, \quad (7)$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are hyper-parameters that are empirically set to 1.
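For concreteness, a minimal PyTorch sketch of the NLC loss in Equation (4) follows; it uses smooth-L1 as the Huber loss and assumes per-point NLC predictions and a foreground mask are available:

```python
import torch
import torch.nn.functional as F

def nlc_loss(nlc_pred, nlc_gt, fg_mask):
    """Sketch of Equation (4): Huber loss on NLCs over foreground points only.

    nlc_pred, nlc_gt: (N, 3) predicted / ground-truth NLCs per LiDAR point.
    fg_mask:          (N,) boolean foreground indicator (the 1_{p_i} term).
    """
    n_pos = fg_mask.sum().clamp_min(1)   # N_pos, guarded against empty scenes
    # smooth_l1_loss is the Huber loss; sum the per-coordinate terms per point.
    per_point = F.smooth_l1_loss(nlc_pred, nlc_gt, reduction="none").sum(-1)
    return (per_point * fg_mask.float()).sum() / n_pos
```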
4 EXPERIMENTS
4.1 EXPERIMENT SETTINGS
Datasets and Metrics. We conducted experiments on the prevailing KITTI benchmark dataset, which contains two modalities of 3D point clouds and 2D RGB images. Following previous works (Shi et al., 2020a), we divided all training data into two subsets, i.e., 3712 samples for training and the remaining 3769 for validation. Performance is evaluated by the Average Precision (AP) metric under IoU thresholds of 0.7, 0.5, and 0.5 for the car, pedestrian, and cyclist categories, respectively. We computed APs with 40 sampling recall positions by default, instead of 11. For the 2D auxiliary task of semantic segmentation, we used the instance segmentation annotations as provided in (Qi et al., 2019). Besides, we also conducted experiments on the Waymo Open Dataset (WOD) (Sun et al., 2020), which can be found in Appendix A.4.
Implementation Details. For the image branch, we used ResNet18 (He et al., 2016) as the backbone encoder, followed by a decoder composed of a pyramid pooling module (Zhao et al., 2017) and several upsampling layers that give the outputs of semantic segmentation and NLC map estimation. For the point cloud branch, we deployed a point-based network like (Yang et al., 2020) with extra FP layers and further applied semantic-guided farthest point sampling (S-FPS) (Chen et al., 2022) in the SA layers. Thus, we implemented bidirectional feature propagation between the top of each SA or FP layer and the corresponding locations in the image branch. Please refer to Appendix A.2 for more details.
4.2 COMPARISON WITH STATE-OF-THE-ART METHODS
We submitted our results to the official website of KITTI. As compared in Table 1, our BiProDet outperforms existing state-of-the-art methods on the KITTI test set by a remarkable margin, i.e., an absolute increase of 2.1% mAP over the second-best method EQ-PVRCNN. Notably, by the time of submission, we ranked 1st on the KITTI 3D detection benchmark for the cyclist class. Particularly, BiProDet shows consistent and more pronounced superiority on the "moderate" and "hard" levels, i.e., for distant or highly-occluded objects with sparse points. We ascribe such performance gains to the fact that our proposed bidirectional feature propagation mechanism contributes to more adequate exploitation of complementary information between the modalities, as well as to the effects of the 2D auxiliary tasks (as verified in Sec. 4.3). Besides, we can observe from the actual visual results (as presented in Appendix A.7) that our BiProDet is able to produce higher-quality 3D bounding boxes in varying scenes.
4.3 ABLATION STUDY
We conducted comprehensive ablation studies to validate the effectiveness and explore the impacts of key modules involved in the overall learning framework.
Effectiveness of Bidirectional Propagation. We performed detailed ablation studies on specific multi-modal feature exploitation and interaction strategies. We started by presenting a single-modal baseline (Table 2 (a)) that only preserves the point cloud branch of our BiProDet for both training and inference. Based on our complete bidirectional propagation pipeline (Table 2 (e)), we explored another two variants, as shown in (Table 2 (c)) and (Table 2 (d)), solely preserving the point-to-pixel and pixel-to-point feature propagation in our 3D detectors, respectively. Note that in Table 2 (c) the point-to-pixel feature flow was only enabled during training, and we detached the point-to-pixel module as well as the whole image branch during inference. In addition, in Table 2 (b), we replaced the bidirectional feature propagation of our BiProDet with the input fusion strategy proposed in (Wang et al., 2021) that decorates point clouds with CNN features deduced from the image branch. Empirically, we can draw several conclusions that strongly support our preceding claims. First, the performance of Table 2 (a) turns out to be the worst among all variants, which reveals the superiority of cross-modal learning. Second, combining Table 2 (a) and Table 2 (c), the mAP is largely boosted from 75.88% to 77.01%. Considering that, during the inference stage, these two variants have identical forms of input and network structure, the resulting improvement strongly indicates that the representation ability of the 3D LiDAR branch is indeed strengthened. Third, comparing Table 2 (c) and Table 2 (d) with Table 2 (e), we can verify the superiority of bidirectional propagation (78.64%) over the unidirectional schemes (77.01% and 77.47%). If we particularly pay attention to Table 2 (d) and Table 2 (e), we can conclude that our newly proposed point-to-pixel feature propagation direction further brings a 1.17% mAP increase on top of the previously explored pixel-to-point paradigm. Besides, by comparing Table 2 (b) (77.07%) and Table 2 (d) (77.47%), whose information propagation directions are both from the 2D image domain to the 3D point cloud domain, we can demonstrate the superiority of feature-level propagation over its input-level counterpart (Wang et al., 2021).
Effectiveness of Image Branch. Comparing Table 3 (d) with Tables 3 (e)-(g), the performance is stably boosted from 75.88% to 78.64% with the gradual addition of the 2D image branch and the two
auxiliary tasks. These results indicate that the additional semantic and geometric features learned from the image modality are indeed effective supplements to the point cloud representation, leading to significantly improved 3D object detection performance.
Effectiveness of Auxiliary Tasks. Comparing Table 3 (a) with Tables 3 (b)-(d), the mAP is boosted by 0.70% when incorporating 3D semantic segmentation and 3D center estimation, and can be further improved by 2.76% after introducing 2D semantic segmentation and NLC map estimation (Tables 3 (d)-(g)). However, only integrating image features without 2D auxiliary tasks (comparing the results of Tables 3 (d) and (e)) brings a limited improvement of 0.5%. This observation shows that the 2D auxiliary tasks, especially the proposed 2D NLC map estimation, do enforce the image branch to learn complementary information for the detector.
Robustness against Input Corruption. We also conducted extensive experiments to verify the robustness of our BiProDet to sensor perturbation. Specifically, we added Gaussian noise to the reflectance values of points or to the RGB images. Fig. 5 shows that the mAP value of our cross-modal BiProDet is consistently higher than that of the single-modal baseline and decreases more slowly as the LiDAR noise level increases. Particularly, as listed in Table 4, when the variance of the LiDAR noise is set to 0.15, the perturbation affects our cross-modal BiProDet much less than the single-modal detector. Besides, even when applying the corruption to both the LiDAR input and the RGB images, the mAP value of our BiProDet drops by only 2.49%.
5 CONCLUSION
We have presented a novel cross-modal 3D detector, namely BiProDet, which fully exploits complementary information from the image domain in two aspects. First, we proposed point-to-pixel feature propagation, which enables gradients backpropagated from the training loss of the image branch to augment the expressive power of the 3D point cloud backbone. Second, we proposed NLC map estimation as an auxiliary task to promote the image branch to learn local spatial-aware representations rather than just semantic features. Our BiProDet achieves state-of-the-art results on the KITTI detection benchmark, and extensive ablative experiments demonstrate the robustness of our BiProDet against sensor noise and its generalization to LiDAR signals with fewer beams. The decent results demonstrate the potential of joint training between 3D object detection and more 2D scene understanding tasks. We believe our new perspective will further inspire investigations on multi-task multi-modal learning for scene understanding in autonomous driving.
ACKNOWLEDGEMENT
This work was supported in part by the Hong Kong Research Grants Council under Grants 11219422, 11202320, and 11218121, and in part by Hong Kong Innovation and Technology Fund under Grant MHP/117/21.
REPRODUCIBILITY STATEMENT
We provide the source code and configuration in the supplementary material for the experimental results presented in the main text. We specify the settings of hyper-parameters, the training scheme, and the implementation details of our method in Section 4.1 and Appendix A.2. Besides, we also give a clear explanation of used datasets. We repeat the experiments on the KITTI dataset several times to check the rightness and reproducibility of the implementation.
ETHICS STATEMENT
Proposing effective cross-modal 3D object detectors would benefit a wide range of applications. For example, the incomplete information obtained from a single-modal sensor in autonomous driving vehicles may lead to serious traffic accidents, whereas cross-modal detectors are expected to achieve more accurate detection results and be more robust to environmental perturbations. Besides, this study was conducted using publicly available datasets and without surveying participants for experiments. Thus, we do not consider this study to raise privacy or data security issues. We cited the creators when using existing datasets.
A APPENDIX
In this appendix, we provide the details omitted from the manuscript due to space limitation. We organize the appendix as follows.
• Section A.1: Details of the normalized local coordinate (NLC) system.
• Section A.2: Implementation details.
• Section A.3: More quantitative results.
• Section A.4: Experimental results on the Waymo dataset.
• Section A.5: More ablation studies.
• Section A.6: Efficiency analysis.
• Section A.7: Visual results of 3D object detection.
• Section A.8: Visual results of 2D semantic segmentation.
• Section A.9: Details on the official KITTI test leaderboard.
A.1 DETAILS OF NORMALIZED LOCAL COORDINATE (NLC) SYSTEM
The aim of the cross-modal 3D detector is to estimate the bounding box parameterized by the object dimensions (w, l, h), the center location $(x_c, y_c, z_c)$, and the orientation angle θ. The input contains an RGB image $X \in \mathbb{R}^{3 \times H \times W}$, where H and W represent the height and width of the image, respectively, and a 3D LiDAR point cloud $\{p_i \mid i = 1, \ldots, N\}$, where N is the number of points, and each point $p_i \in \mathbb{R}^4$ is represented by its 3D location $(x_p, y_p, z_p)$ in the LiDAR coordinate system and the reflectance value ρ.
To build the correspondence between the 2D and 3D modalities, we project the observed points from the 3D coordinate system to the 2D coordinate system on the image plane:
$$d_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}, \quad (8)$$
where u, v, and $d_c$ denote the corresponding coordinates and depth on the image plane, $R \in \mathbb{R}^{3\times3}$ and $T \in \mathbb{R}^{3\times1}$ denote the rotation matrix and translation matrix of the LiDAR relative to the camera, and $K \in \mathbb{R}^{3\times3}$ is the camera intrinsic matrix. Given the point cloud and the bounding box of an object, the LiDAR coordinates of foreground points can be transformed into the NLC system by proper translation, rotation, and scaling, i.e.,
$$\begin{bmatrix} x_p^{NLC} \\ y_p^{NLC} \\ z_p^{NLC} \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} x_p^{LCS} \\ y_p^{LCS} \\ z_p^{LCS} \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_p - x_c \\ y_p - y_c \\ z_p - z_c \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix}, \quad (9)$$
where $(x_p^{NLC}, y_p^{NLC}, z_p^{NLC})$ and $(x_p^{LCS}, y_p^{LCS}, z_p^{LCS})$ denote the coordinates of points in the NLC system and the local coordinate system, respectively. Suppose that the bounding box is unknown while the global coordinates and NLCs of points are known; then we can build a set of equations in the box parameters (i.e., $x_c, y_c, z_c, w, l, h, \theta$). In order to solve for all 7 parameters, the global coordinates and NLCs of at least 7 points are required to construct the equations.
Discussion. Notably, it is difficult to infer the NLCs of points from LiDAR data alone for objects that are far away or have low reflectivity, since the point clouds are sparse. Meanwhile, the contours and appearance of these cases are still visible in RGB images, so we propose to estimate the normalized local coordinates from images. Then, we can retrieve the estimated NLCs of observed points based on the 2D-3D correspondence. Finally, we expect the proposed cross-modal detector to achieve higher detection accuracy with the estimated NLCs and known global coordinates.
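As a concrete illustration of Equation (9), the following is a minimal NumPy sketch of the LiDAR-to-NLC transform; the function name and box parameterization order are our own assumptions:

```python
import numpy as np

def lidar_to_nlc(points, box):
    """Sketch of Equation (9): transform LiDAR coordinates into the NLC system.

    points: (N, 3) foreground point coordinates (x_p, y_p, z_p).
    box:    (x_c, y_c, z_c, l, w, h, theta) ground-truth bounding box.
    """
    xc, yc, zc, l, w, h, theta = box
    # Translate to the box center, then rotate by the heading angle.
    shifted = points - np.array([xc, yc, zc])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = shifted @ rot.T
    # Normalize by the box size and shift the origin to the box corner.
    return local / np.array([l, w, h]) + 0.5
```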
A.2 IMPLEMENTATION DETAILS
Network Architecture. Fig. 6 illustrates the architectures of the point cloud and image backbone networks. For the encoder of the point cloud branch, we further show the details of multi-scale
grouping (MSG) network in Table 5. Following 3DSSD (Yang et al., 2020), we take key points sampled by the 3rd SA layer to generate vote points and estimate the 3D box. Then we feed these 3D boxes and the features output by the last FP layer to the refinement stage. Besides, we adopt density-aware RoI grid pooling (Hu et al., 2022) to encode point density as an additional feature. Note that 3D center estimation (He et al., 2020b) aims to learn the relative position of each foreground point to the object center, while the 3D box is estimated based on sub-sampled points. Thus, the auxiliary task of 3D center estimation differs from 3D box estimation and can facilitate learning structure-aware features of objects.
Training Details. Throughout the experiments on the KITTI dataset, we adopted Adam (Kingma & Ba, 2015) (β1 = 0.9, β2 = 0.99) to optimize our BiProDet. We initialized the learning rate as 0.003 and updated it with the one-cycle policy (Smith, 2017). We trained the model for a total of 80 epochs in an end-to-end manner. In our experiments, the batch size was set to 8, equally distributed on 4 NVIDIA 3090 GPUs. We kept the input image at the original resolution and padded it to
the size of 1248 × 376, and down-sampled the input point cloud to 16384 points during training and inference. Following common practice, we set the detection range of the x, y, and z axes to [0m, 70.4m], [-40m, 40m], and [-3m, 1m], respectively.
Data Augmentation. We applied common data augmentation strategies at the global and object levels. The global-level augmentation includes random global flipping, global scaling with a random scaling factor between 0.95 and 1.05, and global rotation around the z-axis with a random angle in the range of [−π/4, π/4]. Each of the three augmentations was performed with a 50% probability for each sample. The object-level augmentation refers to copying objects from other scenes and pasting them into the current scene (Yan et al., 2018). In order to perform sampling synchronously on point clouds and images, we utilized the instance masks provided in (Qi et al., 2019). Specifically, we pasted both the point clouds and pixels of sampled objects onto the point cloud and images of new scenes, respectively.
A.3 MORE QUANTITATIVE RESULTS
Performance on KITTI Val Set. We also reported the performance of our BiProDet on all three classes of the KITTI validation set in Table 6, where it can be seen that our BiProDet also achieves the highest mAP of 77.73%, which is obviously higher than the second best method CAT-Det.
Performance of Single-Class Detector. Quite a few methods (Deng et al., 2021; Zheng et al., 2021) train models only for car detection. Empirically, a single-class detector performs better on the car class compared with multi-class detectors. Therefore, we also provided the performance of BiProDet trained only on the car class, and compared it with several state-of-the-art methods in Table 7.
Performance of NLC Map Estimation. The NLC map estimation task is introduced to guide the image branch to learn local spatial-aware features. We evaluated the predicted NLCs of pixels containing at least one projected point with the mean absolute error (MAE) for each object:
$$\mathrm{MAE}_q = \frac{1}{N_{obj}} \sum_{i}^{N} \left| q_{p_i}^{NLC} - \hat{q}_{p_i}^{NLC} \right| \cdot \mathbb{1}_{p_i}, \quad q \in \{x, y, z\}, \quad (10)$$
where $N_{obj}$ is the number of LiDAR points inside the ground-truth box of the object, $\mathbb{1}_{p_i}$ indicates that the evaluation is only calculated with foreground points inside the box, and $q_{p_i}^{NLC}$ and $\hat{q}_{p_i}^{NLC}$ are the ground-truth and predicted normalized local coordinates at the corresponding pixel for the foreground point. Finally, we obtained the mean value of $\mathrm{MAE}_q$ over all instances, namely $\mathrm{mMAE}_q$. We report this metric over different difficulty levels for the three categories on the KITTI val set in Figure 7. We can observe that, for the car class, the mMAE error is only 0.0619, i.e., ±6.19 cm error per meter. For the challenging pedestrian and cyclist categories, the error becomes larger due to the smaller size and non-rigid shape.
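A minimal NumPy sketch of the per-object metric in Equation (10) is given below; the array layout and the per-point object indices are illustrative assumptions:

```python
import numpy as np

def mmae(nlc_gt, nlc_pred, obj_ids):
    """Sketch of Equation (10): per-object MAE of NLCs, averaged over instances.

    nlc_gt, nlc_pred: (N, 3) NLCs of foreground points (columns: x, y, z).
    obj_ids:          (N,) object index of each foreground point.
    """
    per_obj = [np.abs(nlc_gt[obj_ids == i] - nlc_pred[obj_ids == i]).mean(axis=0)
               for i in np.unique(obj_ids)]
    return np.mean(per_obj, axis=0)   # (mMAE_x, mMAE_y, mMAE_z)
```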
Generalization to Asymmetric Backbones. As shown in Fig. 6, we originally adopted an encoder-decoder network in the LiDAR branch that is architecturally similar to the image backbone. Nevertheless, it is worth clarifying that our approach is not limited to symmetrical structures and can be generalized to different point-based backbones. Here, we replaced the 3D branch of the original framework with an efficient single-stage detector, SASA (Chen et al., 2022), whose LiDAR backbone contains only an encoder and is thus asymmetric with the encoder-decoder structure of the image backbone. Accordingly, the proposed bidirectional propagation is only performed between the 3D backbone and the encoder of the 2D image backbone. The experimental results are shown in Table 8. We can observe that the proposed method works well even when the two backbones are asymmetric, which demonstrates the satisfactory generalization ability of our method across different LiDAR backbones.
A.4 RESULTS ON WAYMO OPEN DATASET
The Waymo Open Dataset (Sun et al., 2020) is a large-scale dataset for 3D object detection. It contains 798 sequences (158,361 frames) for training and 202 sequences (40,077 frames) for validation. According to the number of points inside the object and the difficulty of annotation, the objects are further divided into two difficulty levels: LEVEL 1 and LEVEL 2. Following common practice, we adopted the metrics of mean Average Precision (mAP) and mean Average Precision weighted by
heading accuracy (mAPH), and report the performance on both LEVEL 1 and LEVEL 2. We set the detection range to [-75.2m, 75.2m] for the x and y axes, and [-2m, 4m] for the z axis. Following Wang et al. (2021) and Bai et al. (2022), training on the Waymo dataset consists of two stages to allow flexible augmentations. First, we trained only the LiDAR branch, without image inputs and bidirectional propagation, for 30 epochs, with the copy-and-paste augmentation enabled. Then, we trained the whole pipeline for another 6 epochs, during which copy-and-paste is disabled. Note that the image semantic segmentation head is disabled, since ground-truth segmentation maps are not provided (Sun et al., 2020).
As shown in Table 9, our method achieves substantial improvements over previous state-of-the-art methods. Particularly, unlike existing approaches such as PointAugmenting (Wang et al., 2021) and TransFusion (Bai et al., 2022), where the camera backbone is pre-trained on other datasets and then frozen, we trained the entire pipeline in an end-to-end manner. Even without the 2D segmentation auxiliary task, our method still achieves higher accuracy in all scenarios except "Ped L2", demonstrating its advantage.
A.5 MORE ABLATION STUDIES
Effectiveness of Multi-stage Interaction. As mentioned before, both the 2D and 3D backbones adopt an encoder-decoder structure, and we perform bidirectional feature propagation at both the downsampling and upsampling stages. Here, we conducted experiments to verify the superiority of multi-stage interaction over single-stage interaction. As shown in Table 10, performing the bidirectional feature propagation only in the encoder (Table 10 (b)) or only in the decoder (Table 10 (c)) leads to worse performance than performing it in both stages (Table 10 (d)).
Effect of Semantic-guided Point Sampling. When performing downsampling in the SA layers of the point cloud branch, we adopted S-FPS (Chen et al., 2022) to explicitly preserve as many foreground points as possible. We report the percentage of sampled foreground points and the instance recall (i.e., the ratio of instances that retain at least one point) in Table 11, where it can be seen that exploiting supplementary semantic features from images leads to a substantially higher ratio of sampled foreground points and better instance recall during S-FPS.
Influence on 2D Semantic Segmentation. We also aim to demonstrate that the 2D-3D joint learning paradigm benefits not only the 3D object detection task but also the 2D semantic segmentation task. As shown in Table 12, the deep interaction between the two modalities yields an improvement of 4.42% mIoU. The point features naturally complement RGB image features by providing 3D geometry and semantics, which are robust to illumination changes and help distinguish different classes of objects. The results suggest the potential of joint training between 3D object detection and more 2D scene understanding tasks in autonomous driving.
Conditional Analysis. To better understand where the improvement comes from when using additional image features, we compared BiProDet with the single-modal detector across different occlusion levels and distance ranges. The results in Table 13 and Table 14 include separate APs for objects at different occlusion levels and APs for the moderate class in different distance ranges. For car detection, our BiProDet achieves larger accuracy gains for long-distance and highly occluded objects, which suffer from the sparsity of observed LiDAR points. The cyclist and pedestrian categories are much more difficult on account of their small sizes, non-rigid structures, and fewer training samples. For these two categories, BiProDet still brings consistent and significant improvements across different levels, even in extremely difficult cases.
Generalization to Sparse LiDAR Signals. We also compared our BiProDet with the single-modal baseline on LiDAR point clouds of various sparsity. In practice, following Pseudo-LiDAR++ (You et al., 2020), we simulated 32-beam, 16-beam, and 8-beam LiDAR signals by selecting LiDAR points whose elevation angles fall within specific intervals. As shown in Table 15, the proposed BiProDet outperforms the single-modal baseline under all settings. The consistent improvements suggest that our method generalizes to sparser signals. Moreover, BiProDet performs significantly better than the baseline on LiDAR signals with fewer beams, demonstrating the effectiveness of our method in exploiting the supplementary information in the image domain.
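The beam-subsampling procedure can be sketched as follows, in the spirit of Pseudo-LiDAR++; the assumed vertical field of view (about -24.9 to 2.0 degrees for the KITTI HDL-64E sensor) and the uniform angle binning are simplifying assumptions.

```python
# A sketch of elevation-angle binning to simulate sparser LiDAR. The vertical
# field of view and uniform binning are assumptions, not the exact protocol.
import numpy as np

def subsample_beams(points: np.ndarray, keep_every: int = 2,
                    num_beams: int = 64, fov=(-24.9, 2.0)) -> np.ndarray:
    xyz = points[:, :3]
    elev = np.degrees(np.arctan2(xyz[:, 2], np.linalg.norm(xyz[:, :2], axis=1)))
    bins = np.linspace(fov[0], fov[1], num_beams + 1)
    beam_id = np.clip(np.digitize(elev, bins) - 1, 0, num_beams - 1)
    return points[beam_id % keep_every == 0]  # keep_every=2/4/8 -> 32/16/8 beams
```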
A.6 EFFICIENCY ANALYSIS
We also compared the inference speed and the number of parameters of the proposed BiProDet with state-of-the-art cross-modal approaches in Table 16. Our BiProDet has about the same number of parameters as CAT-Det (Zhang et al., 2022b) but a much higher inference speed of 9.52 frames per second on a single GeForce RTX 2080 Ti GPU. In general, our BiProDet is inevitably slower than some single-modal detectors, but it achieves a good trade-off between speed and accuracy among cross-modal approaches.
A.7 VISUAL RESULTS OF 3D OBJECT DETECTION
In Figure 8, we present the qualitative comparison of detection results between the single-modal baseline and our BiProDet. We can observe that the proposed BiProDet shows better localization capability than the single-modal baseline in challenging cases. Besides, we also show qualitative
results of BiProDet on the KITTI test split in Figure 9. We can clearly observe that our BiProDet performs well in challenging cases, such as pedestrians and cyclists (with small sizes) and highly-occluded cars.
A.8 VISUAL RESULTS OF 2D SEMANTIC SEGMENTATION
Several examples are shown in Figure 10. For the distant cars in the first and second rows, as well as the pedestrian in the sixth row, the objects are small and PSPNet tends to treat them as background, while our BiProDet is able to correct such errors. In the third row, our BiProDet finds the dim cyclist missed by PSPNet. Our BiProDet also performs better on the highly occluded objects shown in the fourth and fifth rows. These observations show that the 3D feature representations extracted from point clouds can boost 2D semantic segmentation, since the image-based method is sensitive to illumination and can hardly handle corner cases with only single-modal inputs.
A.9 DETAILS ON OFFICIAL KITTI TEST LEADERBOARD
We submitted the results of our BiProDet to the official KITTI website, and it ranks 1st on the 3D object detection benchmark for the cyclist class. Figure 11 shows a screenshot of the leaderboard. Figure 12 illustrates the precision-recall curves along with AP scores for different categories of the KITTI test set. The samples of the KITTI test set differ considerably from those of the training/validation set in terms of scenes and camera parameters, so the strong performance of our BiProDet on the test set also demonstrates its good generalization.

1. What is the main contribution of the paper regarding 3D object detection?
2. What are the strengths and weaknesses of the proposed method, particularly in its design choices and feature fusion approach?
3. Do you have any concerns or questions regarding the auxiliary tasks used in the paper, such as NLC map estimation and 3D semantic segmentation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor corrections or clarifications that could improve the paper's presentation?
Summary Of The Paper
This paper proposes the fusion of image and point cloud features for 3D object detection. In contrast to previous work (like PointAugmenting; Wang et al. 2021), features are not just borrowed from the image plane to enrich the point features, but also vice versa. Additional auxiliary tasks, NLC map estimation for the image backbone and 3D semantic segmentation for the LiDAR backbone, improve the performance of object detection. Detailed ablation studies show the effect of each element in the network design.
Strengths And Weaknesses
Strengths: The paper is well written and presents a simple idea that works well in practice. Detailed ablation studies show the effectiveness of each design choice. The authors obtain top-ranking results on the KITTI leaderboard for the cyclist class.
Weaknesses:
- The symmetry in the two backbones, which is required to fuse features from one into the other, might be a bit artificial and a bit of a drawback, as efficient and state-of-the-art LiDAR pipelines look quite different from image pipelines.
- How does one account for / deal with the fact that the same image feature (or the same neighboring features) might get attached to multiple LiDAR points (features), as points at different distances along a ray might all be projected to the same image point? There is nothing to deal with this, and it might actually be reducing the expressiveness of the LiDAR features. Maybe a BEV representation (used in Wang et al. 2021) is a better choice of representation?
- LiDAR head auxiliary task #2, 3D center estimation: how is this an auxiliary task? Doesn't the 3D bounding box estimation already do this?
- Table 1 should include a comparison with Wang et al. 2021, since that is the closest work to this.
Minor corrections/clarifications:
- Presumably the NLC map estimation aux task is only done with cars and bikes and not pedestrians, but this is not mentioned anywhere.
- In the abstract: "..we further construct and interactive bidirectional…" What do you mean by interactive?
- More details about the 3D semantic segmentation task would be nice.
- Figure 6 caption: "…multi layer perception." should be corrected to "multi layer perceptron".
- Table 3: Please include labels a-g in the caption.
- Equation 10: q_pi and qcap_pi in equation 10 are referred to as pi and pi_cap in the explanation. Make this consistent.
- Table 11 is a bit hard to read. Are you combining experiments with changing distance and changing occlusion? How do you define levels of occlusion?
Clarity, Quality, Novelty And Reproducibility
The paper is easy to read and has enough experimental results to suggest that the approach, though very incremental, works well. Code has been provided, but I have not tried to reproduce it. |
ICLR | Title
Bidirectional Propagation for Cross-Modal 3D Object Detection
Abstract
Recent works have revealed the superiority of feature-level fusion for cross-modal 3D object detection, where fine-grained feature propagation from 2D image pixels to 3D LiDAR points has been widely adopted for performance improvement. Still, the potential of heterogeneous feature propagation between the 2D and 3D domains has not been fully explored. In this paper, in contrast to existing pixel-to-point feature propagation, we investigate the opposite point-to-pixel direction, allowing point-wise features to flow inversely into the 2D image branch. Thus, when jointly optimizing the 2D and 3D streams, the gradients back-propagated from the 2D image branch can boost the representation ability of the 3D backbone network working on LiDAR point clouds. Then, combining the pixel-to-point and point-to-pixel information flow mechanisms, we construct a bidirectional feature propagation framework, dubbed BiProDet. In addition to the architectural design, we also propose normalized local coordinate (NLC) map estimation, a new 2D auxiliary task for training the 2D image branch, which facilitates learning local spatial-aware features from the image modality and implicitly enhances the overall 3D detection performance. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we rank 1st on the highly competitive KITTI benchmark for the cyclist class at the time of submission. The source code is available at https://github.com/Eaphan/BiProDet.
1 INTRODUCTION
In recent years, 3D object detection has received increasing academic and industrial attention, driven by the rapid development of autonomous driving, where the two dominant data modalities, 3D point clouds and 2D RGB images, demonstrate complementary properties in many aspects. 3D point clouds scanned by LiDAR encode accurate structure and depth cues in the geometric domain but typically suffer from severe sparsity, incompleteness, and non-uniformity. The 2D RGB images captured by optical cameras convey rich semantic features in the visual domain and facilitate the application of well-developed image learning architectures (He et al., 2016), but pose challenges in reliably modeling spatial structures. More importantly, due to the essential difficulties in processing irregular and unordered point clouds, the actual learning ability of neural network backbones working on 3D LiDAR data is still relatively insufficient, thus limiting the performance of LiDAR-based detection frameworks. Practically, despite the efforts made by previous studies (Wang et al., 2019; Vora et al., 2020) devoted to cross-modal learning, how to reasonably mitigate the large modality gap between images and point clouds and fuse the corresponding heterogeneous feature representations still remains an open and non-trivial problem.
Depending on the specific fusion strategy between the image and point cloud modalities, existing cross-modal 3D object detection pipelines can be categorized into: 1) proposal-level; 2) result-level; 3) pseudo-LiDAR-based; and 4) point-level processing paradigms, as illustrated in Fig. 1. Concretely, proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018) perform feature fusion for both modalities over each of the proposals/anchors, while result-level fusion methods (Pang et al., 2020) directly manipulate the output 2D and 3D object candidates by leveraging geometric and semantic consistencies. However, these two fusion paradigms do not make full use of the fine-grained correspondence between 3D LiDAR points and 2D images, and thus show inferior detection performance.
An alternative fusion paradigm is based on pseudo-LiDAR representations (You et al., 2020), where auxiliary learning modules specialized for stereo depth estimation are integrated to synthesize dense pseudo-LiDAR point clouds as additional complements to the original real-scanned sparse LiDAR data. Such a strategy indeed brings performance boosts but becomes technically cumbersome and demanding, since ground-truth depth maps are required as supervision signals. Moreover, as investigated previously (Qian et al., 2020), it is non-trivial to harmonize the joint end-to-end optimization of the object detector and the depth estimator. To make use of point-pixel correspondences between images and point clouds, there has been another family of point-level fusion approaches (Huang et al., 2020; Vora et al., 2020) characterized by projection from the 3D point cloud space to the 2D image plane. That is, for each 3D point, its geometric feature extracted from the 3D LiDAR learning branch is fused with the semantic feature (retrieved from the 2D image learning branch) of the corresponding 2D pixel. Benefiting from more fine-grained feature manipulation, point-level fusion typically demonstrates more significant performance improvement and is thus considered a more promising cross-modal fusion paradigm.
Still, based on our observations, despite the superiority of point-level fusion, previous studies have not fully exploited its great potential in terms of specific architectural design and technical workflow. Fundamentally, the performance boost achieved by previous methods can be mainly credited to the fact that additional discriminative 2D semantic information is incorporated into the 3D detection pipeline. However, there is no evidence that the actual representation capability of the 3D LiDAR backbone network, which is also one of the most critical influencing factors, is strengthened. Besides, it is also noticed that the choice of 2D auxiliary tasks used for training the image learning branch also makes a difference, which essentially determines the consequent 2D feature representations to be propagated. Previous methods directly resort to classic semantic understanding tasks (e.g., image segmentation) for the learning of visual patterns, which may not be the optimal task that harmonizes with the 3D LiDAR learning branch for geometric modeling.
Given the above issues, in this paper we propose a bidirectional feature propagation architecture, namely BiProDet, for cross-modal 3D object detection. Foremost, our core component lies in a novel point-to-pixel feature propagation mechanism allowing 3D geometric features of LiDAR point clouds to flow into the 2D image learning branch. Our BiProDet differs from previous cross-modal 3D detection pipelines that solely pay attention to unidirectional 2D-to-3D (pixel-to-point) feature propagation while neglecting the potential effects of the opposite 3D-to-2D (point-to-pixel) direction. Functionally, different from existing pixel-to-point propagation, the proposed point-to-pixel propagation turns out to be capable of enhancing the representation ability of the 3D LiDAR backbone network. This is an interesting phenomenon, and we reason that the gradients back-propagated from the auxiliary 2D image branch, built upon mature 2D convolutional neural networks (CNNs) with impressive learning ability, can effectively strengthen the 3D LiDAR branch. Taking a step forward, we integrate the pixel-to-point and point-to-pixel propagation directions into a unified interactive bidirectional feature propagation framework such that the 2D and 3D learning branches are mutually strengthened, eventually contributing to more significant performance gains. Note that previous work has also developed a bidirectional model for joint optical and scene flow estimation (Liu et al., 2022); still, we design a concise yet effective bidirectional propagation strategy without bells and whistles. Unlike Liu et al. (2022), who detach the gradients in the bidirectional fusion modules, we demonstrate that the gradients back-propagated from the auxiliary 2D image branch can enhance the 3D backbone. In addition, we also customize a new 2D auxiliary task called normalized local coordinate (NLC) map estimation, which facilitates learning local spatial-aware feature representations from the 2D image. Such a task also
implicitly helps to capture more discriminative geometric information from the main 3D LiDAR point cloud learning branch, thus improving the overall detection performance, especially for hard cases of distant and/or highly-occluded objects.
We conduct extensive experiments on the prevailing KITTI (Geiger et al., 2012) benchmark dataset. Compared with state-of-the-art single-modal and cross-modal 3D object detectors, our framework achieves remarkable performance improvement. Notably, our method ranks 1st on the cyclist class of the KITTI 3D detection benchmark¹. In summary, our main contributions are threefold:
• In contrast to existing pixel-to-point cross-modal fusion strategies, we explore an opposite point-to-pixel feature propagation paradigm, which turns out to be capable of enhancing the representation capability of 3D backbone networks working on LiDAR point clouds.
• Combining the existing pixel-to-point and the newly proposed point-to-pixel feature propagation schemes, we further construct a powerful bidirectional feature propagation framework for cross-modal 3D object detection.
• We reveal the importance of the specific 2D auxiliary tasks used for training the 2D image learning branch, which has not received enough attention in previous works, and introduce 2D NLC map estimation to facilitate learning spatial-aware features and implicitly boost the overall 3D detection performance.
2 RELATED WORK
LiDAR-based 3D Object Detection can be roughly divided into two categories. 1) Voxel-based 3D detectors typically voxelize the point clouds into grid structures of a fixed size (Zhou & Tuzel, 2018). Yan et al. (2018) introduced a more efficient sparse convolution to accelerate training and inference. He et al. (2020a) employed auxiliary tasks, including center estimation and foreground segmentation, to guide the network to learn the intra-object relationship. 2) Point-based 3D detectors consume the raw 3D point clouds directly and generate predictions based on (downsampled) points. Shi et al. (2019) applied a point-based feature extractor and generated high-quality proposals on foreground points. 3DSSD (Yang et al., 2020) adopted a new sampling strategy, F-FPS, as a supplement to D-FPS to preserve enough interior points of foreground instances, and built a one-stage anchor-free 3D object detector based on feasible representative points. Zhang et al. (2022c) and Chen et al. (2022) introduced semantic-aware downsampling strategies to preserve as many foreground points as possible. Shi & Rajkumar (2020b) proposed to encode the point cloud with a fixed-radius near-neighbor graph. Besides, PV-RCNN (Shi et al., 2020a) adopted a voxel-based RPN and leveraged point-wise features extracted from the voxel set abstraction module to refine the proposals.
Camera-based 3D Object Detection. Early works (Mousavian et al., 2017; Manhardt et al., 2019) designed monocular 3D detectors by referring to 2D detectors (Ren et al., 2015; Tian et al., 2019) and utilizing 2D-3D geometric constraints. Another line of work converts images to pseudo-LiDAR representations via monocular depth estimation and then resorts to LiDAR-based methods (Wang et al., 2019). Chen et al. (2021) built dense 2D-3D correspondences by predicting object coordinate maps and adopted uncertainty-driven PnP to estimate object locations. Recently, multi-view detectors like BEVStereo (Li et al., 2022) have also achieved promising performance, getting closer to LiDAR-based methods.
Cross-Modal 3D Object Detection can be roughly divided into four categories: proposal-level, result-level, point-level, and pseudo-LiDAR-based approaches. Proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018; Liang et al., 2019) adopted a two-stage framework to fuse image features and point cloud features corresponding to the same anchor or proposal. For result-level fusion, Pang et al. (2020) exploited the geometric and semantic consistencies of 2D and 3D detection results to improve single-modal detectors. Qi et al. (2018) generated 2D region proposals using a CNN and lifted them to frustum proposals, where a PointNet-based network is utilized to estimate the 3D box. Obviously, both proposal-level and result-level fusion strategies are coarse and do not make full use of the correspondence between LiDAR points and images. Pseudo-LiDAR-based methods (You et al., 2020; Wu et al., 2022; Liang et al., 2019) employed depth estimation/completion and converted the estimated depth maps to pseudo-LiDAR points to complement the raw sparse point clouds. However, such methods require extra depth map annotations that are costly to obtain. Regarding point-level fusion methods, Liang et al. (2018) proposed the continuous fusion layer to retrieve the image features of the nearest 3D points for each grid in the BEV feature map. In particular, Vora et al. (2020) and Huang et al. (2020) retrieved semantic scores or features by projecting points onto the image plane. Liu et al. (2021) proposed EPNet++, where an LI-Fusion layer enables more interaction between the two modalities to obtain more comprehensive features. Wang et al. (2021) decorated point clouds with intermediate CNN features extracted from 2D detection models. However, neither semantic segmentation nor 2D detection tasks can enforce the network to learn 3D-spatial-aware image features. By contrast, we introduce NLC map estimation as an auxiliary task to promote the image branch to learn local spatial information to supplement the sparse point cloud. Besides, Bai et al. (2022) proposed TransFusion, a transformer-based detector that performs fine-grained fusion with attention mechanisms at the BEV level. Philion & Fidler (2020) and Liang et al. (2022) estimated the depth of multi-view images and transformed the camera feature maps into the BEV space before fusing them with LiDAR BEV features.

¹ www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
3 PROPOSED METHOD
Overview. Architecturally, the overall processing pipeline of cross-modal 3D object detection consists of an image branch and a point cloud branch, learning feature representations from 2D RGB images and 3D LiDAR point clouds, respectively. Existing methods implement cross-modal learning only by incorporating pixel-wise semantic cues from the 2D image branch into the 3D point cloud branch for feature fusion. Here, motivated by the observation that point-based networks typically show insufficient learning ability due to the essential difficulties in processing irregular and unordered point cloud data (Zhang et al., 2022a), we additionally seek to boost the expressive power of the point cloud backbone network with the assistance of the 2D image branch, whose potential is ignored in previous studies. It is worth emphasizing that our ultimate goal is not to design a more powerful point cloud backbone network structure, but to make full use of the available resources in this cross-modal application scenario.
As shown in Fig. 2, in addition to the pixel-to-point feature propagation explored by mainstream point-level fusion methods (Liang et al., 2018; Huang et al., 2020), we propose point-to-pixel propagation allowing features to flow inversely from the point cloud branch to the image branch, based on which we achieve bidirectional feature propagation implemented in a multi-stage fashion. In this way, not only can the image features propagated via the pixel-to-point module provide additional semantic information, but the gradients back-propagated from the training objectives of the image branch can also boost the representation ability of the point cloud backbone. Besides, we employ auxiliary tasks to train the pipeline, aiming to enforce the network to learn rich semantic and spatial representations. In particular, we propose NLC map estimation to promote the image branch to learn spatial-aware features, providing a necessary complement to the sparse spatial representation
extracted from point clouds, especially for distant or highly occluded cases. We refer the readers to Appendix A.2 for detailed network structures of the image and point cloud backbones.
3.1 BIDIRECTIONAL FEATURE PROPAGATION
As illustrated in Fig. 3, the proposed bidirectional feature propagation consists of a pixel-to-point module and a point-to-pixel module, which bridge the two learning branches operating on RGB images and LiDAR point clouds. Functionally, the point-to-pixel module applies grid-level interpolation on point-wise features to produce a 2D feature map, while the pixel-to-point module retrieves the corresponding 2D image features by projecting 3D points onto the 2D image plane.
In general, we partition the overall 2D and 3D branches into the same number of stages, such that we can perform bidirectional feature propagation at each stage between the corresponding layers of the image and point cloud learning networks. Without loss of generality, below we only focus on a certain stage, where we acquire a 2D feature map F \in R^{C_{2D} \times H' \times W'} with C_{2D} channels and spatial dimensions H' \times W' from the image branch, and a set of C_{3D}-dimensional embedding vectors G = \{g_i \in R^{C_{3D}}\}_{i=1}^{N'} of the corresponding N' 3D points.
Point-to-Pixel Module. We design this module to propagate geometric information from the 3D point cloud branch into the 2D image branch. Formally, we expect to construct a 2D feature map F^{P2I} \in R^{C_{3D} \times H' \times W'} by querying from the acquired point-wise embeddings G. More specifically, for each pixel position (r, c) over the targeted 2D grid of F^{P2I} of dimensions H' \times W', we collect the corresponding geometric features of points that are projected inside the range of this pixel and then perform feature aggregation by avg-pooling AVG(·):

F^{P2I}_{r,c} = AVG(\{g_j \,|\, j = 1, ..., n\}),    (1)

where n counts the number of points projected inside the range of the pixel located at the r-th row and the c-th column. For empty pixel positions with no projected points, we simply define the feature value as 0.
After that, instead of directly feeding the "interpolated" 2D feature map F^{P2I} into the subsequent 2D convolutional stage, we introduce a few additional refinement procedures for feature smoothness as described below:

F^{fuse} = CONV(CAT[CONV(F^{P2I}), F]),    (2)

where CAT[·, ·] and CONV(·) stand for feature channel concatenation and the convolution operation, respectively, and the resulting F^{fuse} is further fed into the subsequent layers of the 2D image branch.
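To make the module concrete, we give a minimal PyTorch sketch of Eqs. (1)-(2) below. The layer widths, the single-image batch, the use of index_add_ for per-pixel average pooling, and the assumption that projected coordinates already lie inside the image bounds are ours; this is an illustrative sketch, not the released implementation.

```python
# Minimal sketch of the point-to-pixel module (Eqs. 1-2); shapes and layer
# widths are illustrative assumptions.
import torch
import torch.nn as nn

class PointToPixel(nn.Module):
    def __init__(self, c3d: int, c2d: int):
        super().__init__()
        self.refine = nn.Conv2d(c3d, c2d, kernel_size=1)                # CONV(F^P2I)
        self.fuse = nn.Conv2d(2 * c2d, c2d, kernel_size=3, padding=1)   # CONV(CAT[.,.])

    def forward(self, g, uv, feat_2d):
        # g: (N, C3D) point features; uv: (N, 2) integer (col, row) projections
        # assumed to lie inside the image; feat_2d: (1, C2D, H, W).
        _, _, h, w = feat_2d.shape
        c3d = g.shape[1]
        flat = uv[:, 1].long() * w + uv[:, 0].long()                    # row * W + col
        summed = g.new_zeros(h * w, c3d).index_add_(0, flat, g)
        count = g.new_zeros(h * w).index_add_(0, flat, g.new_ones(len(g)))
        f_p2i = summed / count.clamp(min=1).unsqueeze(1)                # Eq. (1); empty pixels stay 0
        f_p2i = f_p2i.t().reshape(1, c3d, h, w)
        return self.fuse(torch.cat([self.refine(f_p2i), feat_2d], dim=1))  # Eq. (2)
```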
Pixel-to-Point Module. From the opposite direction, this module propagates semantic information from the 2D image branch into the 3D point cloud branch. We start by projecting the corresponding N' 3D points onto the 2D image plane, obtaining their planar coordinates denoted as X = \{x_i \in R^2\}_{i=1}^{N'}. Considering that these 2D coordinates are continuously and irregularly distributed over the projection plane, we apply bilinear interpolation to compute exact image features at the projected positions, deducing a set of point-wise embeddings F^{I2P} = \{F(x_i) \in R^{C_{2D}} \,|\, i = 1, ..., N'\}. Similar to our practice in the point-to-pixel module, the interpolated 3D point-wise features F^{I2P} are refined by the following procedures:
G^{fuse} = MLP(CAT[MLP(F^{I2P}; θ_1), G]; θ_2),    (3)

where MLP(·; θ) denotes shared multi-layer perceptrons parameterized by learnable weights θ, and the resulting G^{fuse} further passes through the subsequent layers of the 3D point cloud branch.
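Analogously, a minimal sketch of Eq. (3) follows; the use of grid_sample with align_corners=True for the bilinear interpolation and the MLP widths are our assumptions.

```python
# Minimal sketch of the pixel-to-point module (Eq. 3); the grid_sample-based
# bilinear interpolation and MLP widths are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelToPoint(nn.Module):
    def __init__(self, c2d: int, c3d: int):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(c2d, c3d), nn.ReLU())       # MLP(.; theta_1)
        self.mlp2 = nn.Sequential(nn.Linear(2 * c3d, c3d), nn.ReLU())   # MLP(.; theta_2)

    def forward(self, feat_2d, xy, g):
        # feat_2d: (1, C2D, H, W); xy: (N, 2) continuous (x, y) pixel
        # coordinates of projected points; g: (N, C3D) point features.
        _, _, h, w = feat_2d.shape
        grid = torch.stack([2 * xy[:, 0] / (w - 1) - 1,                 # normalize to [-1, 1]
                            2 * xy[:, 1] / (h - 1) - 1], dim=-1).view(1, 1, -1, 2)
        f_i2p = F.grid_sample(feat_2d, grid, mode='bilinear',
                              align_corners=True)                       # (1, C2D, 1, N)
        f_i2p = f_i2p.squeeze(0).squeeze(1).t()                         # (N, C2D)
        return self.mlp2(torch.cat([self.mlp1(f_i2p), g], dim=1))       # Eq. (3)
```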
3.2 AUXILIARY TASKS FOR TRAINING
Semantic and spatial-aware information is essential for determining the category and boundary of objects. To encourage the network to uncover such information from the input data, we also leverage auxiliary tasks when training the whole pipeline. For the point cloud branch, following SA-SSD (He et al., 2020b), we introduce 3D semantic segmentation and center estimation to learn structure-aware features. For the image branch, we particularly propose NLC map estimation, in addition to 2D semantic segmentation.
Normalized Local Coordinate (NLC) System. We define the NLC system as taking the center of an object as the origin, aligning the x-axis with the head direction of its ground-truth (GT) bounding box, and then normalizing the local coordinates with respect to the size of the GT bounding box. Fig. 4 (a) shows an example of the NLC system for a typical car. With the geometric relationship between the input RGB image and point
cloud, we can project NLCs to the 2D image plane to construct a 2D NLC map with three channels corresponding to three spatial dimensions, as illustrated in Fig. 4 (b). We refer the reader to Appendix A.1 for the detailed process.
NLC Map Estimation. Intuitively, vision-based semantic information is useful for the network to distinguish the foreground from the background. However, the performance bottleneck of existing detectors lies in their limited localization accuracy in distant or highly occluded cases, where the spatial structure is incomplete due to the sparsity of point clouds. To this end, we further take the NLC map as supervision to learn the relative position of each pixel inside the object from the image. We expect such an auxiliary task to drive the image branch to learn local spatial-aware features, which serve as a complement to the sparse spatial representation extracted from point clouds. Besides, this task also helps augment the representation ability of the point cloud branch.
Remark. The proposed NLC map estimation shares the same objective as pseudo-LiDAR-based methods, i.e., enhancing the spatial representation limited by the sparsity of point clouds. However, compared with learning a pseudo-LiDAR representation via depth completion, our NLC map estimation has the following advantages: 1) the local NLC is easier to learn than the global depth owing to its scale-invariant property at different distances; 2) our NLC map estimation can be naturally incorporated and trained with the detection pipeline end-to-end, whereas it is non-trivial to optimize the depth estimator and the LiDAR-based detector simultaneously in pseudo-LiDAR-based methods, though it can be implemented in a somewhat complex way (Qian et al., 2020); and 3) pseudo-LiDAR representations require ground-truth dense depth images as supervision, which may not be available in practice, whereas our method does not require such information.
Semantic Segmentation. Leveraging the semantics of 3D point clouds has demonstrated effectiveness in point-based detectors, owing to the explicit preservation of foreground points during downsampling (Zhang et al., 2022c; Chen et al., 2022). Therefore, we further introduce auxiliary semantic segmentation tasks not only in the point cloud branch but also in the image branch, to exploit richer semantic information. Needless to say, the additional semantic features extracted from images help the network distinguish true-positive candidate boxes from false positives.
3.3 LOSS FUNCTION
The NLC map estimation task is optimized with the loss function defined as
L_{NLC} = \frac{1}{N_{pos}} \sum_{i}^{N} \left\| p_i^{NLC} - \hat{p}_i^{NLC} \right\|_H \cdot 1_{p_i},    (4)

where p_i is the i-th LiDAR point, N_{pos} is the number of foreground LiDAR points, \|\cdot\|_H is the Huber loss, 1_{p_i} indicates that the loss is calculated only over foreground points, and p_i^{NLC} and \hat{p}_i^{NLC} are the ground-truth and predicted NLCs at the corresponding pixel of each foreground point.
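For illustration, Eq. (4) can be sketched as follows; using PyTorch's smooth-L1 (Huber with unit delta) as the Huber loss and the function names are our assumptions.

```python
# A minimal sketch of the NLC regression loss (Eq. 4), assuming foreground
# masks and per-point NLC targets are precomputed; smooth-L1 stands in for
# the Huber loss.
import torch
import torch.nn.functional as F

def nlc_loss(pred_nlc: torch.Tensor, gt_nlc: torch.Tensor,
             fg_mask: torch.Tensor) -> torch.Tensor:
    # pred_nlc, gt_nlc: (N, 3) per-point NLCs; fg_mask: (N,) bool foreground mask.
    n_pos = fg_mask.sum().clamp(min=1)
    per_point = F.smooth_l1_loss(pred_nlc, gt_nlc, reduction='none').sum(dim=1)
    return (per_point * fg_mask.float()).sum() / n_pos
```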
We use the standard cross-entropy loss to optimize both 2D and 3D semantic segmentation, denoted as L_{sem}^{2D} and L_{sem}^{3D}, respectively. The loss of center estimation is computed as
L_{ctr} = \frac{1}{N_{pos}} \sum_{i}^{N} \left\| \Delta p_i - \Delta\hat{p}_i \right\|_H \cdot 1_{p_i},    (5)
where \Delta p_i is the target offset from point p_i to the corresponding object center, and \Delta\hat{p}_i denotes the output of the center estimation head.
Besides the auxiliary tasks, we follow Chen et al. (2022) to define the loss of the RPN stage as

L_{rpn} = L_c + L_r + L_s + L_d,    (6)
where L_c is the classification loss, L_r the regression loss, L_s the shifting loss between the predicted shifts and the residuals from the sampled points to their corresponding object centers in the candidate generation layer (Yang et al., 2020), and L_d the point segmentation loss in the SA layers that enables semantic-guided sampling. In addition, we also adopt the commonly used proposal refinement loss L_{rcnn} as defined in Shi et al. (2020a).
In all, the loss for optimizing the overall pipeline in an end-to-end manner is written as

L_{total} = L_{rpn} + L_{rcnn} + λ_1 L_{NLC} + λ_2 L_{sem}^{2D} + λ_3 L_{sem}^{3D} + λ_4 L_{ctr},    (7)

where λ_1, λ_2, λ_3, and λ_4 are hyper-parameters that are empirically set to 1.
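Eq. (7) amounts to a simple weighted sum; a minimal sketch follows, with each loss term assumed to be a scalar tensor computed by its respective head.

```python
# Sketch of the total objective in Eq. (7); each term is a scalar tensor
# produced by the corresponding head, and all weights default to 1.
def total_loss(l_rpn, l_rcnn, l_nlc, l_sem2d, l_sem3d, l_ctr,
               lambdas=(1.0, 1.0, 1.0, 1.0)):
    l1, l2, l3, l4 = lambdas
    return l_rpn + l_rcnn + l1 * l_nlc + l2 * l_sem2d + l3 * l_sem3d + l4 * l_ctr
```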
4 EXPERIMENTS
4.1 EXPERIMENT SETTINGS
Datasets and Metrics. We conducted experiments on the prevailing KITTI benchmark dataset, which contains two modalities: 3D point clouds and 2D RGB images. Following previous works (Shi et al., 2020a), we divided all training data into two subsets, i.e., 3712 samples for training and the remaining 3769 for validation. Performance is evaluated by the Average Precision (AP) metric under IoU thresholds of 0.7, 0.5, and 0.5 for the car, pedestrian, and cyclist categories, respectively. We computed APs with 40 sampling recall positions by default, instead of 11. For the 2D auxiliary task of semantic segmentation, we used the instance segmentation annotations provided in Qi et al. (2019). Besides, we also conducted experiments on the Waymo Open Dataset (WOD) (Sun et al., 2020), which can be found in Appendix A.4.
Implementation Details. For the image branch, we used ResNet18 (He et al., 2016) as the backbone encoder, followed by a decoder composed of a pyramid pooling module (Zhao et al., 2017) and several upsampling layers that produce the outputs of semantic segmentation and NLC map estimation. For the point cloud branch, we deployed a point-based network similar to Yang et al. (2020) with extra FP layers, and further applied semantic-guided farthest point sampling (S-FPS) (Chen et al., 2022) in the SA layers. We implemented bidirectional feature propagation between the top of each SA or FP layer and the corresponding locations in the image branch. Please refer to Appendix A.2 for more details.
4.2 COMPARISON WITH STATE-OF-THE-ART METHODS
We submitted our results to the official website of KITTI. As compared in Table 1, our BiProDet outperforms existing state-of-the-art methods on the KITTI test set by a remarkable margin, i.e., an absolute increase of 2.1% mAP over the second-best method, EQ-PVRCNN. Notably, by the time of submission, we ranked 1st on the KITTI 3D detection benchmark for the cyclist class. In particular, BiProDet shows consistent and more pronounced superiority on the "moderate" and "hard" levels, i.e., distant or highly-occluded objects with sparse points. We ascribe such performance gains to the more adequate exploitation of complementary information between the two modalities enabled by the proposed bidirectional feature propagation mechanism, as well as to the effects of the 2D auxiliary tasks (as verified in Sec. 4.3). Besides, the visual results (presented in Appendix A.7) show that our BiProDet produces higher-quality 3D bounding boxes in varying scenes.
4.3 ABLATION STUDY
We conducted comprehensive ablation studies to validate the effectiveness and explore the impacts of key modules involved in the overall learning framework.
Effectiveness of Bidirectional Propagation. We performed detailed ablation studies on specific multi-modal feature exploitation and interaction strategies. We started with a single-modal baseline (Table 2 (a)) that preserves only the point cloud branch of our BiProDet for both training and inference. Based on our complete bidirectional propagation pipeline (Table 2 (e)), we explored two further variants, shown in Table 2 (c) and Table 2 (d), that solely preserve the point-to-pixel and pixel-to-point feature propagation, respectively. Note that in Table 2 (c) the point-to-pixel feature flow was only enabled during training, and we detached the point-to-pixel module as well as the whole image branch during inference. In addition, in Table 2 (b), we replaced the bidirectional feature propagation of our BiProDet with the input fusion strategy proposed in Wang et al. (2021), which decorates point clouds with CNN features deduced from the image branch. Empirically, we can draw several conclusions that strongly support our preceding claims. First, the performance of Table 2 (a) turns out to be the worst among all variants, which reveals the superiority of cross-modal learning. Second, combining Table 2 (a) and Table 2 (c), the mAP is largely boosted from 75.88% to 77.01%. Considering that these two variants have identical inputs and network structures during inference, the resulting improvement strongly indicates that the representation ability of the 3D LiDAR branch is indeed strengthened. Third, comparing Table 2 (c) and Table 2 (d) with Table 2 (e), we can verify the superiority of bidirectional propagation (78.64%) over the unidirectional schemes (77.01% and 77.47%). Focusing on Table 2 (d) and Table 2 (e), our newly proposed point-to-pixel feature propagation direction brings a further 1.17% mAP increase on top of the previously explored pixel-to-point paradigm. Besides, comparing Table 2 (b) (77.07%) and Table 2 (d) (77.47%), whose information propagation directions are both from the 2D image domain to the 3D point cloud domain, we demonstrate the superiority of feature-level propagation over its input-level counterpart (Wang et al., 2021).
Effectiveness of Image Branch. Comparing Table 3 (d) with Tables 3 (e)-(g), the performance is steadily boosted from 75.88% to 78.64% with the gradual addition of the 2D image branch and the two
auxiliary tasks. These results indicate that the additional semantic and geometric features learned from the image modality are indeed effective supplements to the point cloud representation, leading to significantly improved 3D object detection performance.
Effectiveness of Auxiliary Tasks. Comparing Table 3 (a) with Tables 3 (b)-(d), the mAP is boosted by 0.70% when incorporating 3D semantic segmentation and 3D center estimation, and can be further improved by 2.76% after introducing 2D semantic segmentation and NLC map estimation (Tables 3 (d)-(g)). However, only integrating image features without 2D auxiliary tasks (comparing results of Tables 3 (d) and (e)) brings limited improvement of 0.5%. This observation shows that the 2D auxiliary tasks, especially the proposed 2D NLC map estimation, do enforce the image branch to learn complementary information for the detector.
Robustness against Input Corruption. We also conducted extensive experiments to verify the robustness of our BiProDet to sensor perturbation. Specifically, we added Gaussian noise to the reflectance values of points or to the RGB images. Fig. 5 shows that the mAP of our cross-modal BiProDet is consistently higher than that of the single-modal baseline and decreases more slowly as the LiDAR noise level increases. Particularly, as listed in Table 4, when the variance of the LiDAR noise is set to 0.15, the perturbation affects our cross-modal BiProDet much less than the single-modal detector. Besides, even when applying the corruption to both the LiDAR input and the RGB images, the mAP of our BiProDet drops by only 2.49%.
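The reflectance perturbation can be sketched as below; clipping to [0, 1] as the valid reflectance range is our assumption.

```python
# A sketch of the Gaussian reflectance perturbation used in the robustness
# study; clipping to [0, 1] is an assumption about the valid range.
import numpy as np

def perturb_reflectance(points: np.ndarray, sigma: float,
                        rng: np.random.Generator) -> np.ndarray:
    pts = points.copy()  # (N, 4) with reflectance in the last column
    pts[:, 3] = np.clip(pts[:, 3] + rng.normal(0.0, sigma, len(pts)), 0.0, 1.0)
    return pts
```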
5 CONCLUSION
We have presented a novel cross-modal 3D detector, BiProDet, which fully exploits complementary information from the image domain in two aspects. First, we proposed point-to-pixel feature propagation, which enables the gradients back-propagated from the training loss of the image branch to augment the expressive power of the 3D point cloud backbone. Second, we proposed NLC map estimation as an auxiliary task to promote the image branch to learn local spatial-aware representations rather than just semantic features. Our BiProDet achieves state-of-the-art results on the KITTI detection benchmark, and extensive ablation experiments demonstrate its robustness against sensor noise and its generalization to LiDAR signals with fewer beams. These results demonstrate the potential of joint training between 3D object detection and more 2D scene understanding tasks. We believe our new perspective will further inspire investigations on multi-task multi-modal learning for scene understanding in autonomous driving.
ACKNOWLEDGEMENT
This work was supported in part by the Hong Kong Research Grants Council under Grants 11219422, 11202320, and 11218121, and in part by Hong Kong Innovation and Technology Fund under Grant MHP/117/21.
REPRODUCIBILITY STATEMENT
We provide the source code and configuration in the supplementary material for the experimental results presented in the main text. We specify the settings of hyper-parameters, the training scheme, and the implementation details of our method in Section 4.1 and Appendix A.2. Besides, we also give a clear explanation of the datasets used. We repeated the experiments on the KITTI dataset several times to check the correctness and reproducibility of the implementation.
ETHICS STATEMENT
Proposing effective cross-modal 3D object detectors would benefit a wide range of applications. For example, the incomplete information obtained from a single-modal sensor in autonomous driving vehicles may lead to serious traffic accidents. Cross-modal detectors are expected to achieve more accurate detection results and to be more robust to environmental perturbations. Besides, this study was conducted using publicly available datasets and without surveying participants for experiments. Thus, we do not consider this study to raise privacy or data security issues. We cited the creators when using existing datasets.
A APPENDIX
In this appendix, we provide the details omitted from the manuscript due to space limitation. We organize the appendix as follows.
• Section A.1: Details of the normalized local coordinate (NLC) system.
• Section A.2: Implementation details.
• Section A.3: More quantitative results.
• Section A.4: Experimental results on the Waymo dataset.
• Section A.5: More ablation studies.
• Section A.6: Efficiency analysis.
• Section A.7: Visual results of 3D object detection.
• Section A.8: Visual results of 2D semantic segmentation.
• Section A.9: Details on the official KITTI test leaderboard.
A.1 DETAILS OF NORMALIZED LOCAL COORDINATE (NLC) SYSTEM
The aim of the cross-modal 3D detector is to estimate the bounding box parameterized by the object dimensions (w, l, h), the center location (x_c, y_c, z_c), and the orientation angle θ. The input contains an RGB image X \in R^{3 \times H \times W}, where H and W represent the height and width of the image, respectively, and a 3D LiDAR point cloud \{p_i \,|\, i = 1, ..., N\}, where N is the number of points and each point p_i \in R^4 is represented by its 3D location (x_p, y_p, z_p) in the LiDAR coordinate system and the reflectance value ρ.
To build the correspondence between the 2D and 3D modalities, we project the observed points from the 3D coordinate system to the 2D coordinate system on the image plane:
d_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \;\; T] \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix},    (8)

where u, v, and d_c denote the corresponding coordinates and depth on the image plane, R \in R^{3 \times 3} and T \in R^{3 \times 1} denote the rotation and translation matrices of the LiDAR relative to the camera, and K \in R^{3 \times 3} is the camera intrinsic matrix. Given the point cloud and the bounding box of an object, the LiDAR coordinates of foreground points can be transformed into the NLC system by proper translation, rotation, and scaling, i.e.,

\begin{bmatrix} x_p^{NLC} \\ y_p^{NLC} \\ z_p^{NLC} \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} x_p^{LCS} \\ y_p^{LCS} \\ z_p^{LCS} \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_p - x_c \\ y_p - y_c \\ z_p - z_c \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix},    (9)

where (x_p^{NLC}, y_p^{NLC}, z_p^{NLC}) and (x_p^{LCS}, y_p^{LCS}, z_p^{LCS}) denote the coordinates of points in the NLC system and the local coordinate system, respectively. Conversely, suppose the bounding box is unknown while the global coordinates and NLCs of points are known; we can then build a set of equations in the box parameters (x_c, y_c, z_c, w, l, h, θ). Since there are 7 unknowns in total, the global coordinates and NLCs of at least 7 points are required to construct the equations.
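For illustration, Eqs. (8)-(9) can be sketched in a few lines of numpy; variable names are illustrative, and the rotation follows Eq. (9) as printed.

```python
# A minimal numpy sketch of Eqs. (8) and (9); variable names are illustrative.
import numpy as np

def project_to_image(points: np.ndarray, K: np.ndarray, Rt: np.ndarray):
    # Eq. (8): points (N, 3) in the LiDAR frame; K (3, 3) intrinsics;
    # Rt (3, 4) extrinsics [R | T]. Returns (N, 2) pixel coords, (N,) depths.
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (K @ Rt @ homo.T).T               # rows are (u * d_c, v * d_c, d_c)
    return cam[:, :2] / cam[:, 2:3], cam[:, 2]

def to_nlc(points: np.ndarray, center: np.ndarray,
           size: np.ndarray, theta: float) -> np.ndarray:
    # Eq. (9): points (N, 3) global xyz; center (x_c, y_c, z_c);
    # size (l, w, h); theta is the box heading around the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (points - center) @ rot.T       # rotate into the box frame
    return local / size + 0.5               # normalize each axis to [0, 1]
```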
Discussion. Notably, it is difficult to infer NLCs of points with only LiDAR data for objects far away or with low reflectivity, since the point clouds are sparse. Meanwhile, the contours and appearance of these cases are still visible in RGB images, so we propose to estimate the normalized local coordinates from images. Then, we can retrieve the estimated NLCs of observed points based on the 2D-3D correspondence. Finally, we expect the proposed cross-modal detector to achieve higher detection accuracy with estimated NLCs and known global coordinates.
A.2 IMPLEMENTATION DETAILS
Network Architecture. Fig. 6 illustrates the architectures of the point cloud and image backbone networks. For the encoder of the point cloud branch, we further show the details of the multi-scale grouping (MSG) network in Table 5.
grouping (MSG) network in Table 5. Following 3DSSD (Yang et al., 2020), we take key points sampled by the 3rd SA layer to generate vote points and estimate the 3D box. Then we feed these 3D boxes and features output by the last FP layer to the refinement stage. Besides, we adopt densityaware RoI grid pooling (Hu et al., 2022) to encode point density as an additional feature. Note that 3D center estimation (He et al., 2020b) aims to learn the relative position of each foreground point to the object center, while the 3D box is estimated based on sub-sampled points. Thus, the auxiliary task of 3D center estimation differs from 3D box estimation and can facilitate learning structureaware features of objects.
Training Details. Through the experiments on KITTI dataset, we adopted Adam (Kingma & Ba, 2015) (β1=0.9, β2=0.99) to optimize our BiProDet. We initialized the learning rate as 0.003 and updated it with the one-cycle policy (Smith, 2017). And we trained the model for a total of 80 epochs in an end-to-end manner. In our experiments, the batch size was set to 8, equally distributed on 4 NVIDIA 3090 GPUs. We kept the input image with the original resolution and padded it to
the size of 1248× 376, and down-sampled the input point cloud to 16384 points during training and inference. Following the common practice, we set the detection range of the x, y, and z axis to [0m, 70.4m], [-40m, 40m] and [-3m, 1m], respectively.
Data Augmentation. We applied common data augmentation strategies at global and object levels. The global-level augmentation includes random global flipping, global scaling with a random scaling factor between 0.95 and 1.05, and global rotation around the z-axis with a random angle in the range of [−π/4, π/4]. Each of the three augmentations was performed with a 50% probability for each sample. The object-level augmentation refers to copying objects from other scenes and pasting them to current scene (Yan et al., 2018). In order to perform sampling synchronously on point clouds and images, we utilized the instance masks provided in (Qi et al., 2019). Specifically, we pasted both the point clouds and pixels of sampled objects to the point cloud and images of new scenes, respectively.
A.3 MORE QUANTITATIVE RESULTS
Performance on KITTI Val Set. We also reported the performance of our BiProDet on all three classes of the KITTI validation set in Table 6, where it can be seen that our BiProDet also achieves the highest mAP of 77.73%, which is obviously higher than the second best method CAT-Det.
Performance of Single-Class Detector. Quite a few methods (Deng et al., 2021; Zheng et al., 2021) train models only for car detection. Empirically, the single-class detector performs better in the car class compared with multi-class detectors. Therefore, we also provided performance of BiProDet trained only for the car class, and compared it with several state-of-the-art methods in Table 7.
Performance of NLC Map Estimation. The NLC map estimation task is introduced to guide the image branch to learn local spatial-aware features. We evaluated the predicted NLC of pixels containing at least one projected point with mean absolute error (MAE) for each object:
MAEq = 1
Nobj N∑ i ∣∣qNLCpi − q̂NLCpi ∣∣ · 1pi , q ∈ {x, y, z} , (10)
where Nobj is the number of LiDAR points inside the ground-truth box of the object, 1pi indicates that the evaluation is only calculated with foreground points inside the box, qNLCpi and q̂ NLC pi are the normalized local coordinates of the ground-truth and the prediction at the corresponding pixel for the foreground point. Finally, we obtained the mean value of MAEq for all instances, namely mMAEq . We report the metric over different difficulty levels for three categories on the KITTI val set at Figure 7. We can observe that, for the car class, the mMAE error is only 0.0619, i.e., ±6.19 cm error per meter. For the challenging pedestrian and cyclist categories, the error becomes larger due to the smaller size and the non-rigid shape.
Generalization to Asymmetric Backbones. As shown in Fig. 6, we originally adopted an encoderdecoder network in the LiDAR branch that is architecturally similar to the image backbone. Nevertheless, it is worth clarifying that our approach is not limited to symmetrical structures and can be generalized to different point-based backbones. Here, we replaced the 3D branch of the original framework with an efficient single-stage detector—SASA (Chen et al., 2022), using a backbone only with the encoder in the LiDAR branch, which is asymmetric with the encoder-decoder structure of the image backbone. Accordingly, the proposed bidirectional propagation is only performed between the 3D backbone and the encoder of the 2D image backbone. The experimental results are shown in Table 8. We can observe that the proposed method works well even when the two backbones are asymmetric, which demonstrates the satisfactory generalization ability of our method for different LiDAR backbones.
A.4 RESULTS ON WAYMO OPEN DATASET
The Waymo Open Dataset (Sun et al., 2020) is a large-scale dataset for 3D object detection. It contains 798 sequences (15836 frames) for training, and 202 sequences (40077 frames) for validation. According to the number of points inside the object and the difficulty of annotation, the objects are further divided into two difficulty levels: LEVEL 1 and LEVEL 2. Following common practice, we adopted the metrics of mean Average Precision (mAP) and mean Average Precision weighted by
heading accuracy (mAPH), and reported the performance on both LEVEL 1 and LEVEL 2. We set the detection range to [-75.2m, 75.2m] for x and y axis, and [-2m, 4m] for z axis. Following Wang et al. (2021) and Bai et al. (2022), the training on Waymo dataset consists of two stages to allow flexible augmentations. First, we only trained the LiDAR branch without image inputs and bidirectional propagation for 30 epochs. We enabled the copy-and-paste augmentation in this stage. Then, we trained the whole pipeline for another 6 epochs, during which the copy-and-paste is disabled. Note that the image semantic segmentation head is disabled, since ground-truth segmentation maps are not provided (Sun et al., 2020).
As shown in Table 9, our method achieves substantial improvement compared with previous stateof-the-arts. Particularly, unlike existing approaches including PointAugmenting (Wang et al., 2021) and TransFusion (Bai et al., 2022) where the camera backbone is pre-trained on other datasets and then frozen, we trained the entire pipeline in an end-to-end manner. It can be seen that even without the 2D segmentation auxiliary task, our method still achieves higher accuracy under all scenarios except “Ped L2”, demonstrating its advantage.
A.5 MORE ABLATION STUDIES
Effectiveness of Multi-stage Interaction. As mentioned before, both 2D and 3D backbones adopt an encoder-decoder structure, and we perform bidirectional feature propagation at both downsampling and upsampling stages. Here, we conducted experiments to verify the superiority of the multistage interaction over single-stage interaction. As shown in Table 10, only performing the bidirectional feature propagation in the encoder (i.e., Table 10 (b)) or the decoder (i.e., Table 10 (c)) leads to worse performance than that of performing the module in both stages (i.e., Table 10 (d)).
Effect of Semantic-guided Point Sampling. When performing downsampling in the SA layers of the point cloud branch, we adopted S-FPS (Chen et al., 2022) to explicitly preserve as many foreground points as possible. We report the percentage of sampled foreground points and instance recall (i.e., the ratio of instances that have at least one point) in Table 11, where it can be seen that exploiting supplementary semantic features from images leads to substantial improvement of the ratio of sampled foreground points and better instance recall during S-FPS.
Influence on 2D Semantic Segmentation. We also aimed to demonstrate that the 2D-3D joint learning paradigm benefits not only the 3D object detection task but also the 2D semantic segmentation task. As shown in Table 12, the deep interaction between the different modalities yields an improvement of 4.42% mIoU. Point features naturally complement RGB image features by contributing 3D geometry and semantics that are robust to illumination changes and help distinguish different classes of objects. The results suggest the potential of joint training between 3D object detection and more 2D scene understanding tasks in autonomous driving.
Conditional Analysis. To better understand where the improvement comes from when using additional image features, we compared BiProDet with the single-modal detector across different occlusion levels and distance ranges. The results shown in Table 13 and Table 14 include separate APs for objects at different occlusion levels and APs for the moderate difficulty level in different distance ranges. For car detection, our BiProDet achieves larger accuracy gains for long-distance and highly occluded objects, which suffer from the sparsity of observed LiDAR points. The cyclist and pedestrian classes are much more difficult on account of their small sizes, non-rigid structures, and fewer training samples. For these two categories, BiProDet still brings consistent and significant improvements across different levels, even in extremely difficult cases.
Generalization to Sparse LiDAR Signals. We also compared our BiProDet with the single-modal baseline on LiDAR point clouds of varying sparsity. In practice, following Pseudo-LiDAR++ (You et al., 2020), we simulated 32-beam, 16-beam, and 8-beam LiDAR signals by selecting LiDAR points whose elevation angles fall within specific intervals. As shown in Table 15, the proposed BiProDet outperforms the single-modal baseline under all settings. The consistent improvements suggest that our method can generalize to sparser signals. Besides, the proposed BiProDet performs significantly better than the baseline in the setting of LiDAR signals with fewer beams, demonstrating the effectiveness of our method in exploiting the supplementary information in the image domain.
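The beam simulation can be reproduced with a few lines of NumPy; the sketch below buckets points by elevation angle and keeps every k-th beam. The KITTI-like field-of-view bounds are assumptions and would need to be adapted to the actual sensor:

```python
import numpy as np

def simulate_sparser_lidar(points, keep_every=2, n_beams=64,
                           fov_down=-24.8, fov_up=2.0):
    """Approximate a sparser LiDAR by bucketing points into elevation-angle
    intervals ('beams') and keeping every k-th beam, in the spirit of
    Pseudo-LiDAR++. points: (N, 4) array of x, y, z, reflectance."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elev = np.degrees(np.arctan2(z, np.sqrt(x ** 2 + y ** 2)))
    # Map each point to a beam index in [0, n_beams).
    beam = ((elev - fov_down) / (fov_up - fov_down) * n_beams).astype(np.int64)
    beam = np.clip(beam, 0, n_beams - 1)
    mask = (beam % keep_every) == 0   # keep_every=2 -> 32 beams, 4 -> 16, 8 -> 8
    return points[mask]
```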
A.6 EFFICIENCY ANALYSIS
We also compared the inference speed and number of parameters of the proposed BiProDet with state-of-the-art cross-modal approaches in Table 16. Our BiProDet has about the same number of parameters as CAT-Det (Zhang et al., 2022b), but a much higher inference speed at 9.52 frames per second on a single GeForce RTX 2080 Ti GPU. In general, our BiProDet is inevitably slower than some single-modal detectors, but it achieves a good trade-off between speed and accuracy among cross-modal approaches.
A.7 VISUAL RESULTS OF 3D OBJECT DETECTION
In Figure 8, we present the qualitative comparison of detection results between the single-modal baseline and our BiProDet. We can observe that the proposed BiProDet shows better localization capability than the single-modal baseline in challenging cases. Besides, we also show qualitative
results of BiProDet on the KITTI test split in Figure 9. We can clearly observe that our BiProDet performs well in challenging cases, such as pedestrians and cyclists (with small sizes) and highly-occluded cars.
A.8 VISUAL RESULTS OF 2D SEMANTIC SEGMENTATION
Several examples are shown in Figure 10. For the distant cars in the first and second rows, as well as the pedestrian in the sixth row, the objects are small and PSPNet tends to treat them as background, while our BiProDet is able to correct such errors. In the third row, our BiProDet finds the dim cyclist missed by PSPNet. Our BiProDet also performs better for the highly occluded objects shown in the fourth and fifth rows. This observation shows that the 3D feature representations extracted from point clouds can boost 2D semantic segmentation, since the image-based method is sensitive to illumination and can hardly handle corner cases with only single-modal inputs.
A.9 DETAILS ON OFFICIAL KITTI TEST LEADERBOARD
We submitted the results of our BiProDet to the official KITTI website, and it ranks 1st on the 3D object detection benchmark for the cyclist class. Figure 11 shows a screenshot of the leaderboard. Figure 12 illustrates the precision-recall curves along with AP scores on different categories of the KITTI test set. The samples of the KITTI test set are quite different from those of the training/validation set in terms of scenes and camera parameters, so the impressive performance of our BiProDet on the test set demonstrates that it also achieves good generalization. | 1. What is the focus and contribution of the paper on multimodal fusion?
2. What are the strengths of the proposed approach, particularly in terms of bidirectional fusion and dual backbone representation?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposed a novel fusion layer, which allows feature-map fusion across image and lidar backbones. Fusion is bidirectional, which allows joint training with multiple auxiliary tasks as regularization. This work also introduces a novel normalized local coordinate (NLC) map estimation task, which makes per-pixel predictions on the feature maps encoding the rotation and geometry information of each object. Combined with other auxiliary tasks like 2D & 3D segmentation, the proposed approach is able to achieve superior performance.
Strengths And Weaknesses
To my knowledge, the bidirectional fusion module is novel. It enables joint optimization for both image and lidar backbones using multiple training tasks.
The overall dual backbone representation is also novel and provides a natural way of feature fusion.
The experimental results are strong.
It would be great if the authors could also benchmark the proposed approach on other datasets, e.g., NuScenes, to show the generalization ability of the proposed approach.
Clarity, Quality, Novelty And Reproducibility
The paper is well written and has included enough details to reproduce the implementation. The authors also attached the training code. |
ICLR | Title
Bidirectional Propagation for Cross-Modal 3D Object Detection
Abstract
Recent works have revealed the superiority of feature-level fusion for cross-modal 3D object detection, where fine-grained feature propagation from 2D image pixels to 3D LiDAR points has been widely adopted for performance improvement. Still, the potential of heterogeneous feature propagation between 2D and 3D domains has not been fully explored. In this paper, in contrast to existing pixel-to-point feature propagation, we investigate the opposite point-to-pixel direction, allowing point-wise features to flow inversely into the 2D image branch. Thus, when jointly optimizing the 2D and 3D streams, the gradients back-propagated from the 2D image branch can boost the representation ability of the 3D backbone network working on LiDAR point clouds. Then, combining the pixel-to-point and point-to-pixel information flow mechanisms, we construct a bidirectional feature propagation framework, dubbed BiProDet. In addition to the architectural design, we also propose normalized local coordinates map estimation, a new 2D auxiliary task for the training of the 2D image branch, which facilitates learning local spatial-aware features from the image modality and implicitly enhances the overall 3D detection performance. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we rank 1st on the highly competitive KITTI benchmark on the cyclist class at the time of submission. The source code is available at https://github.com/Eaphan/BiProDet.
1 INTRODUCTION
In recent years, 3D object detection has received increasing academic and industrial attention, driven by the rapid development of autonomous driving, where the two dominant data modalities, 3D point clouds and 2D RGB images, demonstrate complementary properties in many aspects. 3D point clouds scanned by LiDAR encode accurate structure and depth cues in the geometric domain but typically suffer from severe sparsity, incompleteness, and non-uniformity. The 2D RGB images captured by optical cameras convey rich semantic features in the visual domain and facilitate the application of well-developed image learning architectures (He et al., 2016), but pose challenges in reliably modeling spatial structures. More importantly, due to the essential difficulties in processing irregular and unordered point clouds, the actual learning ability of neural network backbones working on 3D LiDAR data is still relatively insufficient, thus limiting the performance of LiDAR-based detection frameworks. Practically, despite the efforts made by previous studies (Wang et al., 2019; Vora et al., 2020) devoted to cross-modal learning, how to reasonably mitigate the large modality gap between images and point clouds and fuse the corresponding heterogeneous feature representations still remains an open and non-trivial problem.
Depending on the specific fusion strategy between the image and point cloud modalities, existing cross-modal 3D object detection pipelines can be categorized into: 1) proposal-level; 2) result-level; 3) pseudo-LiDAR-based; and 4) point-level processing paradigms, as illustrated in Fig. 1. Concretely, proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018) perform feature fusion for both modalities over each of the proposals/anchors, while result-level fusion methods (Pang et al., 2020) directly manipulate the output 2D and 3D object candidates by leveraging geometric and semantic consistencies. However, the former two fusion paradigms do not make full use of the fine-grained correspondence between 3D LiDAR points and 2D images, and thus show inferior detection performance.
For another alternative fusion paradigm based on pseudo-LiDAR representations (You et al., 2020), auxiliary learning modules specialized for stereo depth estimation are integrated to synthesize dense pseudo-LiDAR point clouds as additional complements to the original real-scanned sparse LiDAR data. Such a strategy indeed brings performance boosts, but it is technically cumbersome and demanding, since ground-truth depth maps are required as supervision signals. Moreover, as investigated previously (Qian et al., 2020), it is non-trivial to harmonize the joint end-to-end optimization of the object detector and the depth estimator. To make use of point-pixel correspondences between images and point clouds, there has been another family of point-level fusion approaches (Huang et al., 2020; Vora et al., 2020) characterized by projection from the 3D point cloud space to the 2D image plane. That is, for each 3D point, its geometric feature extracted from the 3D LiDAR learning branch is fused with the semantic feature (retrieved from the 2D image learning branch) of the corresponding 2D pixel. Benefiting from more fine-grained feature manipulation, point-level fusion typically demonstrates more significant performance improvement and is thus considered a more promising cross-modal fusion paradigm.
Still, based on our observations, despite the superiority of point-level fusion, previous studies have not fully exploited its great potential in terms of specific architectural design and technical workflow. Fundamentally, the performance boost achieved by previous methods can be mainly credited to the fact that additional discriminative 2D semantic information is incorporated into the 3D detection pipeline. However, there is no evidence that the actual representation capability of the 3D LiDAR backbone network, which is also one of the most critical influencing factors, is strengthened. Besides, it is noticed that the choice of 2D auxiliary tasks used for training the image learning branch also makes a difference, as it essentially determines the 2D feature representations to be propagated. Previous methods directly resort to classic semantic understanding tasks (e.g., image segmentation) for the learning of visual patterns, which may not be optimal for harmonizing with the 3D LiDAR learning branch for geometric modeling.
Given the above issues, in this paper we propose a bidirectional feature propagation architecture, namely BiProDet, for cross-modal 3D object detection. Foremost, our core component lies in a novel point-to-pixel feature propagation mechanism allowing 3D geometric features of LiDAR point clouds to flow into the 2D image learning branch. Our BiProDet differs from previous cross-modal 3D detection pipelines that solely pay attention to unidirectional 2D-to-3D (pixel-to-point) feature propagation while neglecting the potential effects of the opposite 3D-to-2D (point-to-pixel) direction. Functionally, different from existing pixel-to-point propagation, the proposed point-to-pixel propagation turns out to be capable of enhancing the representation ability of 3D LiDAR backbone networks. In fact, it is worth emphasizing that this is an interesting phenomenon, and we reason that it occurs because the gradients back-propagated from the auxiliary 2D image branch, built upon mature 2D convolutional neural networks (CNNs) with their impressive learning ability, can effectively strengthen the 3D LiDAR branch. Taking a step forward, we integrate the pixel-to-point and point-to-pixel propagation directions into a unified interactive bidirectional feature propagation framework such that the 2D and 3D learning branches are inter-strengthened, eventually contributing to more significant performance gains. Note that a previous work also developed a bidirectional model for joint optical and scene flow estimation (Liu et al., 2022); still, we design a concise yet effective bidirectional propagation strategy without bells and whistles. Unlike Liu et al. (2022), who detach the gradients in their bidirectional fusion modules, we demonstrate that gradients back-propagated from the auxiliary 2D image branch can enhance the 3D backbone. In addition, we also particularly customize a new 2D auxiliary task called normalized local coordinate (NLC) map estimation, which facilitates learning local spatial-aware feature representations from the 2D image. Such a task also
implicitly helps to capture more discriminative geometric information from the main 3D LiDAR point cloud learning branch, thus improving the overall detection performance, especially for hard cases of distant and/or highly-occluded objects.
We conduct extensive experiments on the prevailing KITTI (Geiger et al., 2012) benchmark dataset. Compared with state-of-the-art single-modal and cross-modal 3D object detectors, our framework achieves remarkable performance improvement. Notably, our method ranks 1st on the cyclist class of the KITTI 3D detection benchmark1. In summary, our main contributions are threefold:
• In contrast to existing pixel-to-point cross-modal fusion strategies, we explore an opposite point-to-pixel feature propagation paradigm, which turns out to be capable of enhancing the representation capability of 3D backbone networks working on LiDAR point clouds.
• Combining the existing pixel-to-point and the newly proposed point-to-pixel feature propagation schemes, we further construct a powerful bidirectional feature propagation framework for cross-modal 3D object detection.
• We reveal the importance of the specific 2D auxiliary tasks used for training the 2D image learning branch, which have not received enough attention in previous works, and introduce 2D NLC map estimation to facilitate learning spatial-aware features and implicitly boost the overall 3D detection performance.
2 RELATED WORK
LiDAR-based 3D Object Detection can be roughly divided into two categories. 1) Voxel-based 3D detectors typically voxelize the point clouds into grid-structured forms of a fixed size (Zhou & Tuzel, 2018). Yan et al. (2018) introduced a more efficient sparse convolution to accelerate training and inference. He et al. (2020a) employed auxiliary tasks including center estimation and foreground segmentation to guide the network to learn the intra-object relationship. 2) Point-based 3D detectors consume the raw 3D point clouds directly and generate predictions based on (downsampled) points. Shi et al. (2019) applied a point-based feature extractor and generated high-quality proposals on foreground points. 3DSSD (Yang et al., 2020) adopted a new sampling strategy named F-FPS as a supplement to D-FPS to preserve enough interior points of foreground instances. It also built a one-stage anchor-free 3D object detector based on feasible representative points. Zhang et al. (2022c) and Chen et al. (2022) introduced semantic-aware down-sampling strategies to preserve foreground points as much as possible. Shi & Rajkumar (2020b) proposed to encode the point cloud by a fixed-radius near-neighbor graph. Besides, PV-RCNN (Shi et al., 2020a) adopted a voxel-based RPN and leveraged point-wise features extracted from the voxel set abstraction module to refine the proposals.
Camera-based 3D Object Detection. Early works (Mousavian et al., 2017; Manhardt et al., 2019) designed monocular 3D detectors by referring to 2D detectors (Ren et al., 2015; Tian et al., 2019) and utilizing 2D-3D geometric constraints. Another line of work converts images to pseudo-LiDAR representations via monocular depth estimation and then resorts to LiDAR-based methods (Wang et al., 2019). Chen et al. (2021) built dense 2D-3D correspondences by predicting an object coordinate map and adopted uncertainty-driven PnP to estimate the object location. Recently, multi-view detectors like BEVStereo (Li et al., 2022) have also achieved promising performance, getting closer to LiDAR-based methods.
Cross-Modal 3D Object Detection can be roughly divided into four categories: proposal-level, result-level, point-level, and pseudo-LiDAR-based approaches. Proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018; Liang et al., 2019) adopted a two-stage framework to fuse image features and point cloud features corresponding to the same anchor or proposal. For result-level fusion, Pang et al. (2020) exploited the geometric and semantic consistencies of 2D and 3D detection results to improve single-modal detectors. Qi et al. (2018) generated 2D region proposals using a CNN and lifted them to frustum proposals, where a PointNet-based network is utilized to estimate the 3D box. Obviously, both proposal-level and result-level fusion strategies are coarse and do not make full use of the correspondence between LiDAR points and images. Pseudo-LiDAR-based methods (You et al., 2020; Wu et al., 2022; Liang et al., 2019) employed depth estimation/completion and converted the estimated depth map to pseudo-LiDAR points to complement raw sparse point clouds. However, such methods require extra depth map annotations that are costly to obtain. Regarding point-level fusion methods, Liang et al. (2018) proposed the continuous fusion layer to retrieve the corresponding image features of the nearest 3D points for each grid cell in the BEV feature map. In particular, Vora
1www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
et al. (2020) and Huang et al. (2020) retrieved the semantic scores or features by projecting points to the image plane. Liu et al. (2021) proposed EPNet++, where an LI-Fusion layer is used to enable more interaction between the two modalities to obtain more comprehensive features. Wang et al. (2021) decorated point clouds with intermediate CNN features extracted from 2D detection models. However, neither semantic segmentation nor 2D detection tasks can enforce the network to learn 3D-spatial-aware image features. By contrast, we introduce NLC map estimation as an auxiliary task to promote the image branch to learn local spatial information to supplement the sparse point cloud. Besides, Bai et al. (2022) proposed TransFusion, a transformer-based detector that performs fine-grained fusion with attention mechanisms at the BEV level. Philion & Fidler (2020) and Liang et al. (2022) estimated the depth of multi-view images and transformed the camera feature maps into the BEV space before fusing them with LiDAR BEV features.
3 PROPOSED METHOD
Overview. Architecturally, the overall processing pipeline of cross-modal 3D object detection consists of an image branch and a point cloud branch, learning feature representations from 2D RGB images and 3D LiDAR point clouds, respectively. Existing methods implement cross-modal learning only by incorporating pixel-wise semantic cues from the 2D image branch into the 3D point cloud branch for feature fusion. Here, motivated by the observation that point-based networks typically show insufficient learning ability due to the essential difficulties in processing the irregular and unordered point cloud data modality (Zhang et al., 2022a), we additionally seek to boost the expressive power of the point cloud backbone network with the assistance of the 2D image branch, whose potential is ignored in previous studies. It is also worth noting that our ultimate goal is not to design a more powerful point cloud backbone network structure but to elegantly make full use of the available resources in this cross-modal application scenario.
As shown in Fig. 2, in addition to the pixel-to-point feature propagation explored by mainstream point-level fusion methods (Liang et al., 2018; Huang et al., 2020), we propose point-to-pixel propagation allowing features to flow inversely from the point cloud branch to the image branch, based on which we can achieve bidirectional feature propagation implemented in a multi-stage fashion. In this way, not only can the image features propagated via the pixel-to-point module provide additional semantic information, but the gradients backpropagated from the training objectives of the image branch can also boost the representation ability of the point cloud backbone. Besides, we employ auxiliary tasks to train the pipeline, aiming to enforce the network to learn rich semantic and spatial representations. In particular, we propose NLC map estimation to promote the image branch to learn spatial-aware features, providing a necessary complement to the sparse spatial representation
extracted from point clouds, especially for distant or highly occluded cases. We refer the readers to Appendix A.2 for detailed network structures of the image and point cloud backbones.
3.1 BIDIRECTIONAL FEATURE PROPAGATION
As illustrated in Fig. 3, the proposed bidirectional feature propagation consists of a pixel-to-point module and a point-to-pixel module, which bridge the two learning branches operating on RGB images and LiDAR point clouds. Functionally, the point-to-pixel module applies grid-level interpolation on point-wise features to produce a 2D feature map, while the pixel-to-point module retrieves the corresponding 2D image features by projecting 3D points to the 2D image plane.
In general, we partition the overall 2D and 3D branches into the same number of stages, such that we can perform bidirectional feature propagation at each stage between the corresponding layers of the image and point cloud learning networks. Without loss of generality, below we only focus on a certain stage, where we can acquire a 2D feature map $F \in \mathbb{R}^{C_{2D} \times H' \times W'}$ with $C_{2D}$ channels and spatial dimensions $H' \times W'$ from the image branch, and a set of $C_{3D}$-dimensional embedding vectors $G = \{ g_i \in \mathbb{R}^{C_{3D}} \}_{i=1}^{N'}$ of the corresponding $N'$ 3D points.
Point-to-Pixel Module. We design this module to propagate geometric information from the 3D point cloud branch into the 2D image branch. Formally, we expect to construct a 2D feature map $F^{P2I} \in \mathbb{R}^{C_{3D} \times H' \times W'}$ by querying from the acquired point-wise embeddings $G$. More specifically, for each pixel position $(r, c)$ over the targeted 2D grid of $F^{P2I}$ of dimensions $H' \times W'$, we collect the corresponding geometric features of points that are projected inside the range of this pixel and then perform feature aggregation by avg-pooling $\mathrm{AVG}(\cdot)$:
$$F^{P2I}_{r,c} = \mathrm{AVG}(\{ g_j \mid j = 1, \dots, n \}), \qquad (1)$$
where $n$ counts the number of points that are projected inside the range of the pixel located at the $r$-th row and the $c$-th column. Particularly, for empty pixel positions where no point is projected inside, we simply define the feature value as 0.
After that, instead of directly feeding the “interpolated” 2D feature map $F^{P2I}$ into the subsequent 2D convolutional stage, we introduce a few additional refinement procedures for feature smoothness as described below:
$$F^{fuse} = \mathrm{CONV}(\mathrm{CAT}[\mathrm{CONV}(F^{P2I}), F]), \qquad (2)$$
where $\mathrm{CAT}[\cdot, \cdot]$ and $\mathrm{CONV}(\cdot)$ stand for feature channel concatenation and the convolution operation, respectively, and the resulting $F^{fuse}$ is further fed into the subsequent layers of the 2D image branch.
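A minimal PyTorch sketch of Eqs. (1)-(2) is given below; the grid-level average pooling is implemented with `index_add_`, and the 1x1/3x3 kernel sizes are our assumptions rather than the exact layers used in this work:

```python
import torch
import torch.nn as nn

class PointToPixel(nn.Module):
    """Sketch of Eqs. (1)-(2): average point features falling into each pixel
    cell, then fuse with the image feature map of the same stage."""
    def __init__(self, c3d, c2d):
        super().__init__()
        self.pre = nn.Conv2d(c3d, c2d, kernel_size=1)
        self.fuse = nn.Conv2d(2 * c2d, c2d, kernel_size=3, padding=1)

    def forward(self, g, uv, img_feat):
        # g: (N', C3D) point features; uv: (N', 2) integer pixel coords (col, row),
        # assumed to lie within the feature map; img_feat: (1, C2D, H', W').
        _, _, h, w = img_feat.shape
        flat = uv[:, 1].long() * w + uv[:, 0].long()   # row-major pixel index
        grid = torch.zeros(g.shape[1], h * w, device=g.device)
        cnt = torch.zeros(h * w, device=g.device)
        grid.index_add_(1, flat, g.t())                # sum features per pixel
        cnt.index_add_(0, flat, torch.ones_like(flat, dtype=torch.float))
        grid = grid / cnt.clamp(min=1)                 # avg-pool; empty pixels stay 0
        f_p2i = grid.view(1, -1, h, w)                 # Eq. (1)
        return self.fuse(torch.cat([self.pre(f_p2i), img_feat], dim=1))  # Eq. (2)
```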
Pixel-to-Point Module. From the opposite direction, this module propagates semantic information from the 2D image branch into the 3D point cloud branch. We start by projecting the corresponding $N'$ 3D points onto the 2D image plane, obtaining their planar coordinates denoted as $X = \{ x_i \in \mathbb{R}^2 \}_{i=1}^{N'}$. Considering that these 2D coordinates are continuously and irregularly distributed over the projection plane, we apply bilinear interpolation to compute exact image features at the projected positions, deducing a set of point-wise embeddings $F^{I2P} = \{ F(x_i) \in \mathbb{R}^{C_{2D}} \mid i = 1, \dots, N' \}$. Similar to our practice in the point-to-pixel module, the interpolated 3D point-wise features $F^{I2P}$ need to be refined by the following procedure:
$$G^{fuse} = \mathrm{MLP}(\mathrm{CAT}[\mathrm{MLP}(F^{I2P}; \theta_1), G]; \theta_2), \qquad (3)$$
where $\mathrm{MLP}(\cdot; \theta)$ denotes shared multi-layer perceptrons parameterized by learnable weights $\theta$, and the resulting $G^{fuse}$ further passes through the subsequent layers of the 3D point cloud branch.
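Analogously, Eq. (3) can be sketched with `F.grid_sample` performing the bilinear interpolation at projected point locations; the hidden sizes of the shared MLPs below are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelToPoint(nn.Module):
    """Sketch of Eq. (3): bilinearly sample image features at projected
    point positions, then fuse with point features via shared MLPs."""
    def __init__(self, c2d, c3d):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(c2d, c3d), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Linear(2 * c3d, c3d), nn.ReLU())

    def forward(self, img_feat, xy, g):
        # img_feat: (1, C2D, H', W'); xy: (N', 2) projected pixel coords; g: (N', C3D)
        _, _, h, w = img_feat.shape
        # Normalize pixel coords to [-1, 1] as required by grid_sample.
        norm = torch.stack([xy[:, 0] / (w - 1), xy[:, 1] / (h - 1)], dim=-1) * 2 - 1
        sampled = F.grid_sample(img_feat, norm.view(1, 1, -1, 2),
                                mode="bilinear", align_corners=True)
        f_i2p = sampled.view(img_feat.shape[1], -1).t()   # (N', C2D)
        return self.mlp2(torch.cat([self.mlp1(f_i2p), g], dim=-1))
```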
3.2 AUXILIARY TASKS FOR TRAINING
Semantic and spatial-aware information is essential for determining the category and boundary of objects. To promote the network to uncover such information from the input data, we also leverage auxiliary tasks when training the whole pipeline. For the point cloud branch, following SA-SSD (He et al., 2020b), we introduce 3D semantic segmentation and center estimation to learn structure-aware features. For the image branch, we particularly propose NLC map estimation, in addition to 2D semantic segmentation.
Normalized Local Coordinate (NLC) System. We define the NLC system as taking the center of an object as the origin, aligning the x-axis towards the head direction of its ground-truth (GT) bounding box, and then normalizing the local coordinates with respect to the size of the GT bounding box. Fig. 4 (a) shows an example of the NLC system for a typical car. With the geometric relationship between the input RGB image and point
cloud, we can project NLCs to the 2D image plane to construct a 2D NLC map with three channels corresponding to three spatial dimensions, as illustrated in Fig. 4 (b). We refer the reader to Appendix A.1 for the detailed process.
NLC Map Estimation. Intuitively, vision-based semantic information is useful for the network to distinguish the foreground from the background. However, the performance bottleneck of existing detectors lies in the limited localization accuracy in distant or highly occluded cases, where the spatial structure is incomplete due to the sparsity of point clouds. To this end, we further take the NLC map as supervision to learn the relative position of each pixel inside the object from the image. We expect such an auxiliary task to drive the image branch to learn local spatial-aware features, which serve as a complement to the sparse spatial representation extracted from point clouds. Besides, this task could also help augment the representation ability of the point cloud branch.
Remark. The proposed NLC map estimation shares the same objective as pseudo-LiDAR-based methods, i.e., enhancing the spatial representation limited by the sparsity of point clouds. However, compared with learning a pseudo-LiDAR representation via depth completion, our NLC map estimation has the following advantages: 1) the local NLC is easier to learn than the global depth owing to its scale-invariant property at different distances; 2) our NLC map estimation can be naturally incorporated and trained with the detection pipeline end-to-end, while it is non-trivial to optimize the depth estimator and the LiDAR-based detector simultaneously in pseudo-LiDAR-based methods, though it can be implemented in a somewhat complex way (Qian et al., 2020); and 3) pseudo-LiDAR representations require ground-truth dense depth images as supervision, which may not be available in reality, whereas our method does not require such information.
Semantic Segmentation. Leveraging the semantics of 3D point clouds has demonstrated effectiveness in point-based detectors, owing to the explicit preservation of foreground points during downsampling (Zhang et al., 2022c; Chen et al., 2022). Therefore, we further introduce auxiliary semantic segmentation tasks not only in the point cloud branch but also in the image branch, to exploit richer semantic information. Needless to say, the additional semantic features extracted from images help the network distinguish true-positive candidate boxes from false positives.
3.3 LOSS FUNCTION
The NLC map estimation task is optimized with the loss function defined as
$$L_{NLC} = \frac{1}{N_{pos}} \sum_{i}^{N} \left\| p_i^{NLC} - \hat{p}_i^{NLC} \right\|_{H} \cdot \mathbb{1}_{p_i}, \qquad (4)$$
where $p_i$ is the $i$-th LiDAR point, $N_{pos}$ is the number of foreground LiDAR points, $\|\cdot\|_H$ is the Huber loss, $\mathbb{1}_{p_i}$ indicates that the loss is calculated only over foreground points, and $p_i^{NLC}$ and $\hat{p}_i^{NLC}$ are the ground-truth and predicted NLCs at the corresponding pixel for foreground points.
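In code, Eq. (4) amounts to a foreground-masked Huber loss. The sketch below is an illustration under our own assumptions (in particular the Huber delta); the center-estimation loss of Eq. (5) below takes the same form with center offsets in place of NLCs:

```python
import torch
import torch.nn.functional as F

def nlc_loss(pred_nlc, gt_nlc, fg_mask, delta=1.0):
    """Foreground-masked Huber loss of Eq. (4).
    pred_nlc, gt_nlc: (N, 3) per-point NLC predictions and targets;
    fg_mask: (N,) boolean foreground indicator."""
    n_pos = fg_mask.sum().clamp(min=1)
    per_point = F.huber_loss(pred_nlc, gt_nlc,
                             reduction="none", delta=delta).sum(dim=-1)
    return (per_point * fg_mask.float()).sum() / n_pos
```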
We use the standard cross-entropy loss to optimize both 2D and 3D semantic segmentation, denoted as $L_{sem}^{2D}$ and $L_{sem}^{3D}$, respectively. The loss of center estimation is computed as
$$L_{ctr} = \frac{1}{N_{pos}} \sum_{i}^{N} \left\| \Delta p_i - \Delta \hat{p}_i \right\|_{H} \cdot \mathbb{1}_{p_i}, \qquad (5)$$
where $\Delta p_i$ is the target offset from point $p_i$ to the corresponding object center, and $\Delta \hat{p}_i$ denotes the output of the center estimation head.
Besides the auxiliary tasks, we follow Chen et al. (2022) to define the loss of the RPN stage as
$$L_{rpn} = L_c + L_r + L_s + L_d, \qquad (6)$$
where $L_c$ is the classification loss, $L_r$ the regression loss, $L_s$ the shifting loss between the predicted shifts and the residuals from the sampled points to their corresponding object centers in the candidate generation layer (Yang et al., 2020), and $L_d$ the point segmentation loss in the SA layers that enables semantic-guided sampling. In addition, we also adopt the commonly used proposal refinement loss $L_{rcnn}$ as defined in Shi et al. (2020a).
In all, the loss for optimizing the overall pipeline in an end-to-end manner is written as
$$L_{total} = L_{rpn} + L_{rcnn} + \lambda_1 L_{NLC} + \lambda_2 L_{sem}^{2D} + \lambda_3 L_{sem}^{3D} + \lambda_4 L_{ctr}, \qquad (7)$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are hyper-parameters that are empirically set to 1.
4 EXPERIMENTS
4.1 EXPERIMENT SETTINGS
Datasets and Metrics. We conducted experiments on the prevailing KITTI benchmark dataset, which contains the two modalities of 3D point clouds and 2D RGB images. Following previous works (Shi et al., 2020a), we divided all training data into two subsets, i.e., 3712 samples for training and the remaining 3769 for validation. Performance is evaluated by the Average Precision (AP) metric under IoU thresholds of 0.7, 0.5, and 0.5 for the car, pedestrian, and cyclist categories, respectively. We computed APs with 40 sampling recall positions by default, instead of 11. For the 2D auxiliary task of semantic segmentation, we used the instance segmentation annotations provided in (Qi et al., 2019). Besides, we also conducted experiments on the Waymo Open Dataset (WOD) (Sun et al., 2020), which can be found in Appendix A.4.
Implementation Details. For the image branch, we used ResNet18 (He et al., 2016) as the backbone encoder, followed by a decoder composed of a pyramid pooling module (Zhao et al., 2017) and several upsampling layers to produce the outputs of semantic segmentation and NLC map estimation. For the point cloud branch, we deployed a point-based network similar to that of Yang et al. (2020), with extra FP layers, and further applied semantic-guided farthest point sampling (S-FPS) (Chen et al., 2022) in the SA layers. Thus, we implemented bidirectional feature propagation between the top of each SA or FP layer and the corresponding location in the image branch. Please refer to Appendix A.2 for more details.
4.2 COMPARISON WITH STATE-OF-THE-ART METHODS
We submitted our results to the official website of KITTI. As compared in Table 1, our BiProDet outperforms existing state-of-the-art methods on the KITTI test set by a remarkable margin, i.e., an absolute increase of 2.1% mAP over the second-best method, EQ-PVRCNN. Notably, by the time of submission, we ranked 1st on the KITTI 3D detection benchmark for the cyclist class. In particular, BiProDet shows consistent and more obvious superiority on the “moderate” and “hard” levels, i.e., distant or highly-occluded objects with sparse points. We ascribe such performance gains to the fact that our proposed bidirectional feature propagation mechanism contributes to more adequate exploitation of complementary information between the two modalities, as well as to the effects of the 2D auxiliary tasks (as verified in Sec. 4.3). Besides, we can observe from the actual visual results (presented in Appendix A.7) that our BiProDet is able to produce higher-quality 3D bounding boxes in varying scenes.
4.3 ABLATION STUDY
We conducted comprehensive ablation studies to validate the effectiveness and explore the impacts of key modules involved in the overall learning framework.
Effectiveness of Bidirectional Propagation. We performed detailed ablation studies on specific multi-modal feature exploitation and interaction strategies. We started by presenting a single-modal baseline (Table 2 (a)) that only preserves the point cloud branch of our BiProDet for both training and inference. Based on our complete bidirectional propagation pipeline (Table 2 (e)), we explored another two variants, shown in Table 2 (c) and Table 2 (d), solely preserving the point-to-pixel and pixel-to-point feature propagation in our 3D detectors, respectively. Note that in Table 2 (c) the point-to-pixel feature flow was only enabled during training, and we detached the point-to-pixel module as well as the whole image branch during inference. In addition, in Table 2 (b), we replaced the bidirectional feature propagation of our BiProDet with the input fusion strategy proposed in (Wang et al., 2021), which decorates point clouds with CNN features deduced from the image branch. Empirically, we can draw several conclusions that strongly support our preceding claims. First, the performance of Table 2 (a) turns out to be the worst among all variants, which reveals the superiority of cross-modal learning. Second, combining Table 2 (a) and Table 2 (c), the mAP is largely boosted from 75.88% to 77.01%. Considering that, during the inference stage, these two variants have identical forms of input and network structure, the resulting improvement strongly indicates that the representation ability of the 3D LiDAR branch is indeed strengthened. Third, comparing Table 2 (c) and Table 2 (d) with Table 2 (e), we can verify the superiority of bidirectional propagation (78.64%) over the unidirectional schemes (77.01% and 77.47%). If we particularly pay attention to Table 2 (d) and Table 2 (e), we can conclude that our newly proposed point-to-pixel feature propagation direction further brings a 1.17% mAP increase on top of the previously explored pixel-to-point paradigm. Besides, by comparing Table 2 (b) (77.07%) and Table 2 (d) (77.47%), whose information propagation directions are both from the 2D image domain to the 3D point cloud domain, we can demonstrate the superiority of feature-level propagation over its input-level counterpart (Wang et al., 2021).
Effectiveness of Image Branch. Comparing Table 3 (d) with Tables 3 (e)-(g), the performance is steadily boosted from 75.88% to 78.64% with the gradual addition of the 2D image branch and the two
auxiliary tasks. These results indicate that the additional semantic and geometric features learned from the image modality are indeed effective supplements to the point cloud representation, leading to significantly improved 3D object detection performance.
Effectiveness of Auxiliary Tasks. Comparing Table 3 (a) with Tables 3 (b)-(d), the mAP is boosted by 0.70% when incorporating 3D semantic segmentation and 3D center estimation, and can be further improved by 2.76% after introducing 2D semantic segmentation and NLC map estimation (Tables 3 (d)-(g)). However, only integrating image features without the 2D auxiliary tasks (comparing Tables 3 (d) and (e)) brings a limited improvement of 0.5%. This observation shows that the 2D auxiliary tasks, especially the proposed 2D NLC map estimation, do enforce the image branch to learn complementary information for the detector.
Robustness against Input Corruption. We also conducted extensive experiments to verify the robustness of our BiProDet to sensor perturbation. Specifically, we added Gaussian noise to the reflectance values of points or to the RGB images. Fig. 5 shows that the mAP value of our cross-modal BiProDet is consistently higher than that of the single-modal baseline and decreases more slowly as the LiDAR noise level increases. Particularly, as listed in Table 4, when the variance of the LiDAR noise is set to 0.15, the perturbation affects our cross-modal BiProDet much less than the single-modal detector. Besides, even when applying the corruption to both the LiDAR input and the RGB images, the mAP value of our BiProDet only drops by 2.49%.
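The corruption protocol can be sketched as follows; the clipping ranges and the assumption that reflectance and RGB values are normalized to [0, 1] are ours:

```python
import numpy as np

def corrupt_inputs(points, image, lidar_sigma=0.15, rgb_sigma=0.0, rng=None):
    """Add Gaussian noise to LiDAR reflectance and, optionally, to RGB values.
    points: (N, 4) with reflectance in column 3; image: (H, W, 3) float array."""
    rng = rng or np.random.default_rng()
    pts = points.copy()
    pts[:, 3] = np.clip(pts[:, 3] + rng.normal(0, lidar_sigma, pts.shape[0]), 0, 1)
    img = np.clip(image + rng.normal(0, rgb_sigma, image.shape), 0, 1)
    return pts, img
```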
5 CONCLUSION
We have presented a novel cross-modal 3D detector, namely BiProDet, that fully exploits complementary information from the image domain in two aspects. First, we proposed point-to-pixel feature propagation, which enables gradients backpropagated from the training loss of the image branch to augment the expressive power of the 3D point cloud backbone. Second, we proposed NLC map estimation as an auxiliary task to promote the image branch to learn local spatial-aware representations rather than just semantic features. Our BiProDet achieves state-of-the-art results on the KITTI detection benchmark, and extensive ablative experiments demonstrate the robustness of our BiProDet against sensor noise and its generalization to LiDAR signals with fewer beams. These results demonstrate the potential of joint training between 3D object detection and more 2D scene understanding tasks. We believe our new perspective will further inspire investigations on multi-task multi-modal learning for scene understanding in autonomous driving.
ACKNOWLEDGEMENT
This work was supported in part by the Hong Kong Research Grants Council under Grants 11219422, 11202320, and 11218121, and in part by Hong Kong Innovation and Technology Fund under Grant MHP/117/21.
REPRODUCIBILITY STATEMENT
We provide the source code and configuration in the supplementary material for the experimental results presented in the main text. We specify the settings of hyper-parameters, the training scheme, and the implementation details of our method in Section 4.1 and Appendix A.2. Besides, we also give a clear explanation of the datasets used. We repeated the experiments on the KITTI dataset several times to check the correctness and reproducibility of the implementation.
ETHICS STATEMENT
Proposing effective cross-modal 3D object detectors would benefit a wide range of applications. For example, the incomplete information obtained from the single-modal sensor in autonomous driving vehicles may lead to serious traffic accidents. And the cross-modal detectors are expected to achieve more accurate detection results and be more robust to environment perturbation. Besides, this study was conducted using publicly available datasets and without surveying participants for experiments. Thus we do not consider this study to raise privacy or data security issues. We cited the creators when using existing datasets.
A APPENDIX
In this appendix, we provide the details omitted from the manuscript due to space limitation. We organize the appendix as follows.
• Section A.1: Details of the normalized local coordinate (NLC) System. • Section A.2: Implementation details. • Section A.3: More quantitative results. • Section A.4: Experimental results on Waymo dataset. • Section A.5: More ablation studies. • Section A.6: Efficiency analysis. • Section A.7: Visual results of 3D object detection. • Section A.8: Visual results of 2D semantic segmentation. • Section A.9: Details on the official KITTI test leaderboard.
A.1 DETAILS OF NORMALIZED LOCAL COORDINATE (NLC) SYSTEM
The aim of the cross-modal 3D detector is to estimate the bounding box parameterized by the object dimensions $(w, l, h)$, the center location $(x_c, y_c, z_c)$, and the orientation angle $\theta$. The input contains an RGB image $X \in \mathbb{R}^{3 \times H \times W}$, where $H$ and $W$ represent the height and width of the image respectively, and a 3D LiDAR point cloud $\{ p_i \mid i = 1, \dots, N \}$, where $N$ is the number of points and each point $p_i \in \mathbb{R}^4$ is represented by its 3D location $(x_p, y_p, z_p)$ in the LiDAR coordinate system and the reflectance value $\rho$.
To build the correspondence between the 2D and 3D modalities, we project the observed points from the 3D coordinate system to the 2D coordinate system on the image plane:
$$d_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}, \qquad (8)$$
where $u$, $v$, $d_c$ denote the corresponding coordinates and depth on the image plane, $R \in \mathbb{R}^{3 \times 3}$ and $T \in \mathbb{R}^{3 \times 1}$ denote the rotation matrix and translation matrix of the LiDAR relative to the camera, and $K \in \mathbb{R}^{3 \times 3}$ is the camera intrinsic matrix. Given the point cloud and the bounding box of an object, LiDAR coordinates of foreground points can be transformed into the NLC system by proper translation, rotation, and scaling, i.e.,
$$\begin{bmatrix} x_p^{NLC} \\ y_p^{NLC} \\ z_p^{NLC} \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} x_p^{LCS} \\ y_p^{LCS} \\ z_p^{LCS} \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_p - x_c \\ y_p - y_c \\ z_p - z_c \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix}, \qquad (9)$$
where $(x_p^{NLC}, y_p^{NLC}, z_p^{NLC})$ and $(x_p^{LCS}, y_p^{LCS}, z_p^{LCS})$ denote the coordinates of the point in the NLC system and the local coordinate system, respectively. Conversely, if the bounding box is unknown but the global coordinates and NLCs of points are known, we can build a system of equations in the box parameters (i.e., $x_c$, $y_c$, $z_c$, $w$, $l$, $h$, $\theta$). To solve for all 7 parameters, the global coordinates and NLCs of at least 7 points are required to construct the equations.
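For illustration, Eq. (9) can be implemented in a few lines of NumPy; the sketch below assumes the heading $\theta$ is a rotation around the z-axis and that points are given as rows:

```python
import numpy as np

def points_to_nlc(points, box):
    """Transform LiDAR points into the NLC system of one box, following Eq. (9).
    box = (xc, yc, zc, l, w, h, theta); points: (N, 3).
    Points inside the box map to coordinates in [0, 1]^3."""
    xc, yc, zc, l, w, h, theta = box
    shifted = points - np.array([xc, yc, zc])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],          # rotation matrix of Eq. (9)
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = shifted @ rot.T                # rotate into the box frame
    return local / np.array([l, w, h]) + 0.5
```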
Discussion. Notably, it is difficult to infer NLCs of points with only LiDAR data for objects far away or with low reflectivity, since the point clouds are sparse. Meanwhile, the contours and appearance of these cases are still visible in RGB images, so we propose to estimate the normalized local coordinates from images. Then, we can retrieve the estimated NLCs of observed points based on the 2D-3D correspondence. Finally, we expect the proposed cross-modal detector to achieve higher detection accuracy with estimated NLCs and known global coordinates.
A.2 IMPLEMENTATION DETAILS
Network Architecture. Fig. 6 illustrates the architectures of the point cloud and image backbone networks. For the encoder of the point cloud branch, we further show the details of multi-scale
grouping (MSG) network in Table 5. Following 3DSSD (Yang et al., 2020), we take key points sampled by the 3rd SA layer to generate vote points and estimate the 3D box. Then we feed these 3D boxes and features output by the last FP layer to the refinement stage. Besides, we adopt density-aware RoI grid pooling (Hu et al., 2022) to encode point density as an additional feature. Note that 3D center estimation (He et al., 2020b) aims to learn the relative position of each foreground point to the object center, while the 3D box is estimated based on sub-sampled points. Thus, the auxiliary task of 3D center estimation differs from 3D box estimation and can facilitate learning structure-aware features of objects.
Training Details. For the experiments on the KITTI dataset, we adopted Adam (Kingma & Ba, 2015) (β1=0.9, β2=0.99) to optimize our BiProDet. We initialized the learning rate to 0.003 and updated it with the one-cycle policy (Smith, 2017), training the model for a total of 80 epochs in an end-to-end manner. In our experiments, the batch size was set to 8, equally distributed over 4 NVIDIA 3090 GPUs. We kept the input image at its original resolution and padded it to
the size of 1248 × 376, and down-sampled the input point cloud to 16384 points during training and inference. Following common practice, we set the detection range of the x, y, and z axes to [0m, 70.4m], [-40m, 40m], and [-3m, 1m], respectively.
Data Augmentation. We applied common data augmentation strategies at the global and object levels. The global-level augmentation includes random global flipping, global scaling with a random scaling factor between 0.95 and 1.05, and global rotation around the z-axis with a random angle in the range of [−π/4, π/4]. Each of the three augmentations was performed with a 50% probability for each sample. The object-level augmentation refers to copying objects from other scenes and pasting them into the current scene (Yan et al., 2018). In order to perform sampling synchronously on point clouds and images, we utilized the instance masks provided in (Qi et al., 2019). Specifically, we pasted both the point clouds and pixels of sampled objects into the point cloud and images of new scenes, respectively.
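The global-level augmentations can be sketched as below; the flip axis and the box parameterization (x, y, z, l, w, h, θ) are our assumptions for KITTI-style coordinates:

```python
import numpy as np

def global_augment(points, boxes, rng=None):
    """Global-level augmentation as described above, each transform applied
    with probability 0.5. points: (N, 3+); boxes: (M, 7)."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                       # random flip: y -> -y
        points[:, 1] *= -1
        boxes[:, 1] *= -1
        boxes[:, 6] = -boxes[:, 6]
    if rng.random() < 0.5:                       # uniform scaling
        s = rng.uniform(0.95, 1.05)
        points[:, :3] *= s
        boxes[:, :6] *= s                        # scale centers and dimensions
    if rng.random() < 0.5:                       # rotation around the z-axis
        a = rng.uniform(-np.pi / 4, np.pi / 4)
        c, s_ = np.cos(a), np.sin(a)
        rot = np.array([[c, -s_, 0], [s_, c, 0], [0, 0, 1]])
        points[:, :3] = points[:, :3] @ rot.T
        boxes[:, :3] = boxes[:, :3] @ rot.T
        boxes[:, 6] += a
    return points, boxes
```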
A.3 MORE QUANTITATIVE RESULTS
Performance on KITTI Val Set. We also reported the performance of our BiProDet on all three classes of the KITTI validation set in Table 6, where it can be seen that our BiProDet achieves the highest mAP of 77.73%, noticeably higher than that of the second-best method, CAT-Det.
Performance of Single-Class Detector. Quite a few methods (Deng et al., 2021; Zheng et al., 2021) train models only for car detection. Empirically, a single-class detector performs better on the car class than multi-class detectors. Therefore, we also provided the performance of BiProDet trained only on the car class, and compared it with several state-of-the-art methods in Table 7.
Performance of NLC Map Estimation. The NLC map estimation task is introduced to guide the image branch to learn local spatial-aware features. We evaluated the predicted NLC of pixels containing at least one projected point with mean absolute error (MAE) for each object:
$$\mathrm{MAE}_q = \frac{1}{N_{obj}} \sum_{i}^{N} \left| q_{p_i}^{NLC} - \hat{q}_{p_i}^{NLC} \right| \cdot \mathbb{1}_{p_i}, \quad q \in \{x, y, z\}, \qquad (10)$$
where $N_{obj}$ is the number of LiDAR points inside the ground-truth box of the object, $\mathbb{1}_{p_i}$ indicates that the evaluation is only calculated with foreground points inside the box, and $q_{p_i}^{NLC}$ and $\hat{q}_{p_i}^{NLC}$ are the ground-truth and predicted normalized local coordinates at the corresponding pixel for the foreground point. Finally, we obtained the mean value of $\mathrm{MAE}_q$ over all instances, namely $\mathrm{mMAE}_q$. We report this metric over different difficulty levels for the three categories on the KITTI val set in Figure 7. We can observe that, for the car class, the mMAE error is only 0.0619, i.e., ±6.19 cm error per meter. For the challenging pedestrian and cyclist categories, the error becomes larger due to the smaller sizes and non-rigid shapes.
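Eq. (10) and the instance-level averaging into mMAE can be sketched as follows, assuming per-point instance ids are available for the foreground points:

```python
import numpy as np

def mmae(pred_nlc, gt_nlc, obj_ids, axis=0):
    """Sketch of Eq. (10): per-object mean absolute NLC error along one axis,
    averaged over objects (mMAE). pred_nlc, gt_nlc: (N, 3) for foreground
    points; obj_ids: (N,) instance id of each point."""
    errs = []
    for oid in np.unique(obj_ids):
        m = obj_ids == oid
        errs.append(np.mean(np.abs(pred_nlc[m, axis] - gt_nlc[m, axis])))
    return float(np.mean(errs))
```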
2. What are the strengths of the proposed approach, particularly in terms of bidirectional fusion and normalized relative coordinate estimation?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. Do you have any concerns or questions regarding the ablation study and the usefulness of the 3D segmentation auxiliary task?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposed a LiDAR+RGB fusion-based 3D detection method. Compared to prior work, the main claims are twofold: 1) the use of bidirectional fusion between 3D point cloud features and 2D image features, so that the 3D backbone can be improved with gradients coming from supervision on the 2D branch; 2) the use of another 2D auxiliary task, i.e., normalized relative coordinate estimation for each object on the 2D image. Results on the KITTI test server show solid performance compared to recent methods.
Strengths And Weaknesses
Paper Strengths
The design of the proposed fusion method is reasonable, i.e., by building on top of prior work with pixel-to-point fusion, this work also investigates point-to-pixel fusion and combines them to build bidirectional fusion. Experiments do support that both directions improve the performance and that the 3D backbone can be improved with additional gradient flow.
The performance of the proposed method on the KITTI test set is solid, which can make this work useful to the community if the code can be released (I haven’t checked personally for reproduction, but the authors state that the code is in the supplementary).
The other proposed minor contribution – normalized local coordinate estimation task – seems intuitive to me. Since estimating relative coordinates requires some kind of 3D understanding, it can benefit the 3D features coming from the 3D point cloud backbone and potentially also improve the 3D backbone during training as the paper claims. Some ablation also supports this claim.
Paper Weaknesses
I do not see a critical concern that significantly impacts my score, but below are some minor concerns.
I am happy with the performance the paper achieves on the KITTI test set. However, as the KITTI dataset was built a long time ago and is quite small (and can potentially lead to overfitting issues) compared to other recent large-scale datasets, people tend to focus more on the nuScenes/Waymo datasets recently. So it would make the paper’s results significantly more trustable if the results can be demonstrated on these more recent large datasets, and use KITTI results to supplement. I understand that working with a new dataset can be practically time-consuming, but presumably, this could be something the authors have already been working on after the submission (since it is quite obvious).
Some of the claims about related work might be inappropriate and could be improved. For example, in related work and under “monocular 3D object detection”, the paper states that “images have not been demonstrated as an appropriate modality to estimate 3D boxes”. I think this claim might raise concerns from many readers and is quite outdated (it might have been true two years ago). For example, Tesla has been using images to do auto-piloting nowadays and shows reasonable performance. Also, looking at the new SOTA methods on the nuScenes/Waymo leaderboards, there are many successful image-only methods (e.g., BEVStereo, BEVDepth, BEVFormer) that achieve strong performance, though not beating lidar-based methods but getting closer. Similarly, the statement about the pseudo-lidar-based method – “it is non-trivial to optimize the depth estimator and LiDAR detection simultaneously in pseudo-lidar based methods” – might be problematic since there are already works in that direction (e.g., one of the first works, End2end pseudo-lidar, CVPR 2020). I would suggest that the authors reconsider and improve these statements about prior work.
One minor question I have regarding the ablation study in Table 3 is: comparing row (a) and row (b) alone, it seems that adding the 3D segmentation auxiliary task does not bring any improvement to the detection task. Are these two tasks really beneficial to each other in this multi-task learning setting? Any more concrete evidence in addition to the results in row (d)? I am asking this because, from the results, it seems not worth adding this auxiliary task considering its cost during training/inference. It would be nice if the authors could comment on this.
Although pixel-to-point fusion is one of the mainstream directions in the field as the authors mentioned, I think there is still some prior work conducting 3D detection with point-to-pixel fusion between LiDAR and camera. I believe that I have seen a few in the past, but at this moment only remember one (EPNet++: Cascade Bi-directional Fusion for Multi-Modal 3D Object Detection). It would be nice if the authors could do a literature search in this direction and add some comments to summarize the differences/similarities between these works.
Clarity, Quality, Novelty And Reproducibility
Quality and clarity are fine to me. Originality is related to my last point about the above weaknesses. Reproducibility is possible based on the paper but I haven't checked the code for performance reproduction |
ICLR | Title
Bidirectional Propagation for Cross-Modal 3D Object Detection
Abstract
Recent works have revealed the superiority of feature-level fusion for cross-modal 3D object detection, where fine-grained feature propagation from 2D image pixels to 3D LiDAR points has been widely adopted for performance improvement. Still, the potential of heterogeneous feature propagation between the 2D and 3D domains has not been fully explored. In this paper, in contrast to existing pixel-to-point feature propagation, we investigate an opposite point-to-pixel direction, allowing point-wise features to flow inversely into the 2D image branch. Thus, when jointly optimizing the 2D and 3D streams, the gradients back-propagated from the 2D image branch can boost the representation ability of the 3D backbone network working on LiDAR point clouds. Then, combining the pixel-to-point and point-to-pixel information flow mechanisms, we construct a bidirectional feature propagation framework, dubbed BiProDet. In addition to the architectural design, we also propose normalized local coordinate map estimation, a new 2D auxiliary task for the training of the 2D image branch, which facilitates learning local spatial-aware features from the image modality and implicitly enhances the overall 3D detection performance. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we rank 1st on the cyclist class of the highly competitive KITTI benchmark at the time of submission. The source code is available at https://github.com/Eaphan/BiProDet.
1 INTRODUCTION
In recent years, 3D object detection has received increasing academic and industrial attention, driven by the thriving of autonomous driving scenarios, where the two dominating data modalities, 3D point clouds and 2D RGB images, demonstrate complementary properties in many aspects. 3D point clouds scanned by LiDAR encode accurate structure and depth cues in the geometric domain but typically suffer from severe sparsity, incompleteness, and non-uniformity. The 2D RGB images captured by optical cameras convey rich semantic features in the visual domain and facilitate the application of well-developed image learning architectures (He et al., 2016), but pose challenges in reliably modeling spatial structures. More importantly, due to the essential difficulties in processing irregular and unordered point clouds, the actual learning ability of neural network backbones working on 3D LiDAR data is still relatively insufficient, thus limiting the performance of LiDAR-based detection frameworks. Practically, despite the efforts made by previous studies (Wang et al., 2019; Vora et al., 2020) devoted to cross-modal learning, how to reasonably mitigate the large modality gap between images and point clouds and fuse the corresponding heterogeneous feature representations still remains an open and non-trivial problem.
Depending on specific fusion strategies between image and point cloud modalities, existing crossmodal 3D object detection pipelines can be categorized into: 1) proposal-level; 2) result-level; 3) pseudo-LiDAR-based; and 4) point-level processing paradigms, as illustrated in Fig. 1. Concretely, proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018) perform feature fusion for both modalities over each of the proposals/anchors, while result-level fusion methods (Pang et al., 2020) directly manipulate the output 2D and 3D object candidates by leveraging geometric and semantic consistencies. However, the former two fusion paradigms do not make full use of fine-grained correspondence between 3D LiDAR points and 2D images, and thus show inferior detection performance.
For another alternative fusion paradigm based on pseudo-LiDAR representations (You et al., 2020), auxiliary learning modules specialized for stereo depth estimation are integrated to synthesize dense pseudo-LiDAR point clouds as additional complements to the original real-scanned sparse LiDAR data. Such a strategy indeed brings performance boosts but it becomes more technically cumbersome and demanding, since ground-truth depth maps are required as supervision signals. Moreover, as investigated previously (Qian et al., 2020), it is non-trivial to harmonize the joint end-to-end optimization of the object detector and the depth estimator. To make use of point-pixel correspondences between images and point clouds, there has been another family of point-level fusion approaches (Huang et al., 2020; Vora et al., 2020) characterized by using projection from the 3D space of point cloud to 2D image planes. That is, for each 3D point, its geometric feature extracted from the 3D LiDAR learning branch is fused with the semantic feature (retrieved from the 2D image learning branch) of the corresponding 2D pixel. Benefiting from more fine-grained feature manipulation, point-level fusion typically demonstrates more significant performance improvement and thus is considered as a more promising cross-modal fusion paradigm.
Still, based on our observations, despite the superiority of point-level fusion, previous studies have not fully exploited its great potential in terms of specific architectural design and technical workflow. Fundamentally, the performance boost achieved by previous methods can be mainly credited to the fact that additional discriminative 2D semantic information is incorporated into the 3D detection pipeline. However, there is no evidence that the actual representation capability of the 3D LiDAR backbone network, which is also one of the most critical influencing factors, is strengthened. Besides, it is also noticed that the choice of 2D auxiliary tasks used for training the image learning branch also makes a difference, which essentially determines the consequent 2D feature representations to be propagated. Previous methods directly resort to classic semantic understanding tasks (e.g., image segmentation) for the learning of visual patterns, which may not be the optimal task that harmonizes with the 3D LiDAR learning branch for geometric modeling.
Given the above issues, in this paper we propose a bidirectional feature propagation architecture, namely BiProDet, for cross-modal 3D object detection. Foremost, our core component lies in a novel point-to-pixel feature propagation mechanism allowing 3D geometric features of LiDAR point clouds to flow into the 2D image learning branch. Our BiProDet differs from previous cross-modal 3D detection pipelines that solely pay attention to unidirectional 2D-to-3D (pixel-to-point) feature propagation while neglecting the potential effects of the opposite 3D-to-2D (point-to-pixel) direction. Functionally, different from existing pixel-to-point propagation, the proposed point-to-pixel propagation turns out to be capable of enhancing the representation ability of 3D LiDAR backbone networks. In fact, it is worth emphasizing that this is an interesting phenomenon, and we reason that this is because the gradients back-propagated from the auxiliary 2D image branch, built upon mature 2D convolutional neural networks (CNNs) with impressive learning ability, can effectively strengthen the 3D LiDAR branch. Taking a step forward, we integrate the pixel-to-point and point-to-pixel propagation directions into a unified interactive bidirectional feature propagation framework such that the 2D and 3D learning branches are inter-strengthened, eventually contributing to more significant performance gains. Note that a previous work also developed a bidirectional model for joint optical and scene flow estimation (Liu et al., 2022). Still, we design a concise yet effective bidirectional propagation strategy without bells and whistles. Unlike Liu et al. (2022), who detach the gradients in the bidirectional fusion modules, we demonstrate that gradients back-propagated from the auxiliary 2D image branch can enhance the 3D backbone. In addition, we also particularly customize a new 2D auxiliary task called normalized local coordinate (NLC) map estimation, which facilitates learning local spatial-aware feature representations from the 2D image. Such a task also
implicitly helps to capture more discriminative geometric information from the main 3D LiDAR point cloud learning branch, thus improving the overall detection performance, especially for hard cases of distant and/or highly-occluded objects.
We conduct extensive experiments on the prevailing KITTI (Geiger et al., 2012) benchmark dataset. Compared with state-of-the-art single-modal and cross-modal 3D object detectors, our framework achieves remarkable performance improvement. Notably, our method ranks 1st on the cyclist class of the KITTI 3D detection benchmark.¹ Conclusively, our main contributions are three-fold:
• In contrast to existing pixel-to-point cross-modal fusion strategies, we explore an opposite point-to-pixel feature propagation paradigm, which turns out to be capable of enhancing the representation capability of 3D backbone networks working on LiDAR point clouds.
• Combining the existing pixel-to-point and the newly proposed point-to-pixel feature propagation schemes, we further construct a powerful bidirectional feature propagation framework for cross-modal 3D object detection.
• We reveal the importance of specific 2D auxiliary tasks used for training the 2D image learning branch which has not received enough attention in previous works, and introduce 2D NLC map estimation to facilitate learning spatial-aware features and implicitly boost the overall 3D detection performance.
2 RELATED WORK
LiDAR-based 3D Object Detection could be roughly divided into two categories. 1) Voxel-based 3D detectors typically voxelize the point clouds into grid-structured forms of a fixed size (Zhou & Tuzel, 2018). Yan et al. (2018) introduced a more efficient sparse convolution to accelerate training and inference. He et al. (2020a) employed auxiliary tasks, including center estimation and foreground segmentation, to guide the network to learn the intra-object relationship. 2) Point-based 3D detectors consume the raw 3D point clouds directly and generate predictions based on (downsampled) points. Shi et al. (2019) applied a point-based feature extractor and generated high-quality proposals on foreground points. 3DSSD (Yang et al., 2020) adopted a new sampling strategy named F-FPS as a supplement to D-FPS to preserve enough interior points of foreground instances, and built a one-stage anchor-free 3D object detector based on feasible representative points. Zhang et al. (2022c) and Chen et al. (2022) introduced semantic-aware down-sampling strategies to preserve foreground points as much as possible. Shi & Rajkumar (2020b) proposed to encode the point cloud by a fixed-radius near-neighbor graph. Besides, PV-RCNN (Shi et al., 2020a) adopted a voxel-based RPN and leveraged point-wise features extracted from the voxel set abstraction module to refine the proposals.
Camera-based 3D Object Detection. Early works (Mousavian et al., 2017; Manhardt et al., 2019) designed monocular 3D detectors by referring to 2D detectors (Ren et al., 2015; Tian et al., 2019) and utilizing 2D-3D geometric constraints. Another way is to convert images to pseudo-LiDAR representations via monocular depth estimation and then resort to LiDAR-based methods (Wang et al., 2019). Chen et al. (2021) built dense 2D-3D correspondence by predicting an object coordinate map and adopted uncertainty-driven PnP to estimate object locations. Recently, multi-view detectors like BEVStereo (Li et al., 2022) have also achieved promising performance, getting closer to LiDAR-based methods.
Cross-Modal 3D Object Detection can be roughly divided into four categories: proposal-level, result-level, point-level, and pseudo-LiDAR-based approaches. Proposal-level fusion methods (Chen et al., 2017; Ku et al., 2018; Liang et al., 2019) adopted a two-stage framework to fuse image features and point cloud features corresponding to the same anchor or proposal. For result-level fusion, Pang et al. (2020) exploited the geometric and semantic consistencies between 2D detection results and 3D detection results to improve single-modal detectors. Qi et al. (2018) generated 2D region proposals using a CNN and lifted them to frustum proposals, where a PointNet-based network is utilized to estimate the 3D box. Obviously, both proposal-level and result-level fusion strategies are coarse and do not make full use of the correspondence between LiDAR points and images. Pseudo-LiDAR-based methods (You et al., 2020; Wu et al., 2022; Liang et al., 2019) employed depth estimation/completion and converted the estimated depth map to pseudo-LiDAR points to complement raw sparse point clouds. However, such methods require extra depth map annotations that are costly to obtain. Regarding point-level fusion methods, Liang et al. (2018) proposed the continuous fusion layer to retrieve the corresponding image features of the nearest 3D points for each grid in the BEV feature map. In particular, Vora et al. (2020) and Huang et al. (2020) retrieved semantic scores or features by projecting points to the image plane. Liu et al. (2021) proposed EPNet++, where an LI-Fusion layer is used to enable more interaction between the two modalities to obtain more comprehensive features. Wang et al. (2021) decorated point clouds with intermediate CNN features extracted from 2D detection models. However, neither semantic segmentation nor 2D detection tasks can enforce the network to learn 3D-spatial-aware image features. By contrast, we introduce NLC map estimation as an auxiliary task to promote the image branch to learn local spatial information to supplement the sparse point cloud. Besides, Bai et al. (2022) proposed TransFusion, a transformer-based detector that performs fine-grained fusion with attention mechanisms at the BEV level. Philion & Fidler (2020) and Liang et al. (2022) estimated the depth of multi-view images and transformed the camera feature maps into the BEV space before fusing them with LiDAR BEV features.
¹ www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
3 PROPOSED METHOD
Overview. Architecturally, the overall processing pipeline of cross-modal 3D object detection consists of an image branch and a point cloud branch, learning feature representations from 2D RGB images and 3D LiDAR point clouds, respectively. Existing methods implement cross-modal learning only by incorporating pixel-wise semantic cues from the 2D image branch into the 3D point cloud branch for feature fusion. Here, motivated by the observation that point-based networks typically show insufficient learning ability due to the essential difficulties in processing irregular and unordered point cloud data (Zhang et al., 2022a), we additionally seek to boost the expressive power of the point cloud backbone network with the assistance of the 2D image branch, whose potential is ignored in previous studies. It is also worth noting that our ultimate goal is not to design a more powerful point cloud backbone network structure, but to elegantly make full use of the available resources in this cross-modal application scenario.
As shown in Fig. 2, in addition to the pixel-to-point feature propagation explored by mainstream point-level fusion methods (Liang et al., 2018; Huang et al., 2020), we propose point-to-pixel propagation allowing features to flow inversely from the point cloud branch to the image branch, based on which we achieve bidirectional feature propagation implemented in a multi-stage fashion. In this way, not only can the image features propagated via the pixel-to-point module provide additional semantic information, but the gradients backpropagated from the training objectives of the image branch can also boost the representation ability of the point cloud backbone. Besides, we also employ auxiliary tasks to train the pipeline, aiming to enforce the network to learn rich semantic and spatial representations. In particular, we propose NLC map estimation to promote the image branch to learn spatial-aware features, providing a necessary complement to the sparse spatial representation
extracted from point clouds, especially for distant or highly occluded cases. We refer the readers to Appendix A.2 for detailed network structures of the image and point cloud backbones.
3.1 BIDIRECTIONAL FEATURE PROPAGATION
As illustrated in Fig. 3, the proposed bidirectional feature propagation consists of a pixel-to-point module and a point-to-pixel module, which bridges the two learning branches that operate on RGB images and LiDAR point clouds. Functionally, the point-to-pixel module applies grid-level interpolation on point-wise features to produce a 2D feature map, while the pixel-to-point module retrieves the corresponding 2D image features by projecting 3D points to the 2D image plane.
In general, we partition the overall 2D and 3D branches into the same number of stages, such that we can perform bidirectional feature propagation at each stage between the corresponding layers of the image and point cloud learning networks. Without loss of generality, below we only focus on a certain stage, where we can acquire a 2D feature map $F \in \mathbb{R}^{C_{2D} \times H' \times W'}$ with $C_{2D}$ channels and dimensions $H' \times W'$ from the image branch, and a set of $C_{3D}$-dimensional embedding vectors $G = \{g_i \in \mathbb{R}^{C_{3D}}\}_{i=1}^{N'}$ of the corresponding $N'$ 3D points.
Point-to-Pixel Module. We design this module to propagate geometric information from the 3D point cloud branch into the 2D image branch. Formally, we expect to construct a 2D feature map $F^{P2I} \in \mathbb{R}^{C_{3D} \times H' \times W'}$ by querying from the acquired point-wise embeddings $G$. More specifically, for each pixel position $(r, c)$ over the targeted 2D grid of $F^{P2I}$ of dimensions $H' \times W'$, we collect the corresponding geometric features of points that are projected inside the range of this pixel and then perform feature aggregation by avg-pooling $\mathrm{AVG}(\cdot)$:
$$F^{P2I}_{r,c} = \mathrm{AVG}(\{g_j \mid j = 1, ..., n\}), \quad (1)$$
where $n$ counts the number of points that are projected inside the range of the pixel located at the $r$-th row and the $c$-th column. Particularly, for empty pixel positions with no point projected inside, we simply define the feature value as 0.
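To make Eq. (1) concrete, a minimal PyTorch sketch of the grid-level scatter is given below. It is an illustration under assumed conventions, not the released implementation: g holds the N point-wise embeddings, and uv holds their integer (column, row) pixel indices after projection:

```python
import torch

def point_to_pixel(g, uv, h, w):
    """Scatter point features g (N, C3D) onto a (C3D, H', W') grid by
    averaging all points that fall into the same pixel (Eq. 1)."""
    n, c = g.shape
    flat = (uv[:, 1].clamp(0, h - 1) * w + uv[:, 0].clamp(0, w - 1)).long()
    feat_sum = g.new_zeros(h * w, c)
    feat_sum.index_add_(0, flat, g)                    # per-pixel feature sums
    count = g.new_zeros(h * w)
    count.index_add_(0, flat, g.new_ones(n))           # per-pixel point counts
    feat = feat_sum / count.clamp(min=1).unsqueeze(1)  # avg; empty pixels stay 0
    return feat.t().reshape(c, h, w)
```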
After that, instead of directly feeding the “interpolated” 2D feature map $F^{P2I}$ into the subsequent 2D convolutional stage, we introduce a few additional refinement procedures for feature smoothness as described below:
$$F^{fuse} = \mathrm{CONV}(\mathrm{CAT}[\mathrm{CONV}(F^{P2I}), F]), \quad (2)$$
where $\mathrm{CAT}[\cdot, \cdot]$ and $\mathrm{CONV}(\cdot)$ stand for feature channel concatenation and the convolution operation, respectively, and the resulting $F^{fuse}$ is further fed into the subsequent layers of the 2D image branch.
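The refinement of Eq. (2) then amounts to two convolutions around a channel concatenation. A sketch follows, where the 3×3 kernel size and the output channel width are assumed choices not specified in the text:

```python
import torch
import torch.nn as nn

class P2IFusion(nn.Module):
    """F_fuse = CONV(CAT[CONV(F_P2I), F]) from Eq. (2)."""
    def __init__(self, c3d, c2d):
        super().__init__()
        self.refine = nn.Conv2d(c3d, c2d, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * c2d, c2d, kernel_size=3, padding=1)

    def forward(self, f_p2i, f_img):
        # refine the scattered point features, concatenate with the image
        # feature map along channels, and fuse back to C_2D channels
        return self.fuse(torch.cat([self.refine(f_p2i), f_img], dim=1))
```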
Pixel-to-Point Module. From the opposite direction, this module propagates semantic information from the 2D image branch into the 3D point cloud branch. We start by projecting the corresponding $N'$ 3D points onto the 2D image plane, obtaining their planar coordinates denoted as $X = \{x_i \in \mathbb{R}^2\}_{i=1}^{N'}$. Considering that these 2D coordinates are continuously and irregularly distributed over the projection plane, we apply bilinear interpolation to compute exact image features at the projected positions, deducing a set of point-wise embeddings $F^{I2P} = \{F(x_i) \in \mathbb{R}^{C_{2D}} \mid i = 1, ..., N'\}$. Similar to our practice in the point-to-pixel module, the interpolated 3D point-wise features $F^{I2P}$ are refined by the following procedure:
$$G^{fuse} = \mathrm{MLP}(\mathrm{CAT}[\mathrm{MLP}(F^{I2P}; \theta_1), G]; \theta_2), \quad (3)$$
where $\mathrm{MLP}(\cdot; \theta)$ denotes shared multi-layer perceptrons parameterized by learnable weights $\theta$, and the resulting $G^{fuse}$ further passes through the subsequent layers of the 3D point cloud branch.
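For the sampling step of this module, the bilinear lookup at continuous projected coordinates can be expressed with grid_sample; the sketch below covers only this lookup (the fusion MLPs of Eq. (3) are omitted), and the shape conventions are our assumptions:

```python
import torch.nn.functional as F

def pixel_to_point(feat_2d, uv, h, w):
    """Bilinearly sample an image feature map (1, C2D, H', W') at the
    continuous projected coordinates uv (N, 2), returning (N, C2D)."""
    grid = uv.clone()
    grid[:, 0] = grid[:, 0] / (w - 1) * 2 - 1   # grid_sample expects [-1, 1]
    grid[:, 1] = grid[:, 1] / (h - 1) * 2 - 1
    grid = grid.view(1, 1, -1, 2)               # (1, 1, N, 2), x before y
    sampled = F.grid_sample(feat_2d, grid, align_corners=True)
    return sampled.squeeze(0).squeeze(1).t()    # (N, C2D)
```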
3.2 AUXILIARY TASKS FOR TRAINING
The semantic and spatial-aware information is essential for determining the category and boundary of objects. To promote the network to uncover such information from the input data, we also leverage auxiliary tasks when training the whole pipeline. For the point cloud branch, following SA-SSD (He et al., 2020b), we introduce 3D semantic segmentation and center estimation to learn structure-aware features. For the image branch, we particularly propose NLC map estimation, in addition to 2D semantic segmentation.
Normalized Local Coordinate (NLC) System. We define the NLC system as taking the center of an object as the origin, aligning the x-axis with the heading direction of its ground-truth (GT) bounding box, and then normalizing the local coordinates with respect to the size of the GT bounding box. Fig. 4 (a) shows an example of the NLC system for a typical car. With the geometric relationship between the input RGB image and point
cloud, we can project NLCs to the 2D image plane to construct a 2D NLC map with three channels corresponding to three spatial dimensions, as illustrated in Fig. 4 (b). We refer the reader to Appendix A.1 for the detailed process.
NLC Map Estimation. Intuitively, vision-based semantic information is useful for the network to distinguish the foreground from the background. However, the performance bottleneck of existing detectors lies in their limited localization accuracy in distant or highly occluded cases, where the spatial structure is incomplete due to the sparsity of point clouds. To this end, we further take the NLC map as supervision to learn the relative position of each pixel inside the object from the image. We expect such an auxiliary task to drive the image branch to learn local spatial-aware features, which serve as a complement to the sparse spatial representation extracted from point clouds. Besides, this task can also augment the representation ability of the point cloud branch.
Remark. The proposed NLC map estimation shares an identical objective with pseudo-LiDAR-based methods, i.e., enhancing the spatial representation limited by the sparsity of point clouds. However, compared with learning a pseudo-LiDAR representation via depth completion, our NLC map estimation has the following advantages: 1) the local NLC is easier to learn than the global depth owing to its scale-invariant property at different distances; 2) our NLC map estimation can be naturally incorporated and trained with the detection pipeline end-to-end, while it is non-trivial to optimize the depth estimator and the LiDAR-based detector simultaneously in pseudo-LiDAR-based methods, though it can be implemented in a somewhat complex way (Qian et al., 2020); and 3) pseudo-LiDAR representations require ground-truth dense depth images as supervision, which may not be available in reality, whereas our method does not require such information.
Semantic Segmentation. Leveraging the semantics of 3D point clouds has demonstrated effectiveness in point-based detectors, owing to the explicit preservation of foreground points during downsampling (Zhang et al., 2022c; Chen et al., 2022). Therefore, we further introduce auxiliary semantic segmentation tasks not only in the point cloud branch but also in the image branch, to exploit richer semantic information. Needless to say, the additional semantic features extracted from images help the network distinguish true-positive candidate boxes from false-positive ones.
3.3 LOSS FUNCTION
The NLC map estimation task is optimized with the loss function defined as
$$\mathcal{L}_{NLC} = \frac{1}{N_{pos}} \sum_{i=1}^{N} \left\| p_i^{NLC} - \hat{p}_i^{NLC} \right\|_H \cdot \mathbb{1}_{p_i}, \quad (4)$$
where $p_i$ is the $i$-th LiDAR point, $N_{pos}$ is the number of foreground LiDAR points, $\|\cdot\|_H$ is the Huber loss, $\mathbb{1}_{p_i}$ indicates that the loss is calculated only over foreground points, and $p_i^{NLC}$ and $\hat{p}_i^{NLC}$ are the ground-truth and predicted NLCs at the corresponding pixel for foreground points.
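In code, Eq. (4) reduces to a masked Huber penalty averaged over foreground points. In the sketch below, smooth_l1_loss stands in for the Huber loss, and the summation over the three NLC axes is our assumption:

```python
import torch.nn.functional as F

def nlc_loss(pred, target, fg_mask):
    """Masked Huber loss of Eq. (4) over foreground points only.
    pred, target: (N, 3) predicted / ground-truth NLCs per point."""
    per_point = F.smooth_l1_loss(pred, target, reduction='none').sum(dim=1)
    n_pos = fg_mask.float().sum().clamp(min=1.0)
    return (per_point * fg_mask.float()).sum() / n_pos
```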
We use the standard cross-entropy loss to optimize both 2D and 3D semantic segmentation, denoted as $\mathcal{L}^{2D}_{sem}$ and $\mathcal{L}^{3D}_{sem}$, respectively. The loss of center estimation is computed as
$$\mathcal{L}_{ctr} = \frac{1}{N_{pos}} \sum_{i=1}^{N} \|\Delta p_i - \Delta \hat{p}_i\|_H \cdot \mathbb{1}_{p_i}, \quad (5)$$
where $\Delta p_i$ is the target offset from point $p_i$ to the corresponding object center, and $\Delta \hat{p}_i$ denotes the output of the center estimation head.
Besides the auxiliary tasks, we follow Chen et al. (2022) to define the loss of the RPN stage as
$$\mathcal{L}_{rpn} = \mathcal{L}_c + \mathcal{L}_r + \mathcal{L}_s + \mathcal{L}_d, \quad (6)$$
where $\mathcal{L}_c$ is the classification loss, $\mathcal{L}_r$ the regression loss, $\mathcal{L}_s$ the shifting loss between the predicted shifts and the residuals from the sampled points to their corresponding object centers in the candidate generation layer (Yang et al., 2020), and $\mathcal{L}_d$ the point segmentation loss in the SA layers that enables semantic-guided sampling. In addition, we also adopt the commonly used proposal refinement loss $\mathcal{L}_{rcnn}$ as defined in Shi et al. (2020a).
In all, the loss for optimizing the overall pipeline in an end-to-end manner is written as
$$\mathcal{L}_{total} = \mathcal{L}_{rpn} + \mathcal{L}_{rcnn} + \lambda_1 \mathcal{L}_{NLC} + \lambda_2 \mathcal{L}^{2D}_{sem} + \lambda_3 \mathcal{L}^{3D}_{sem} + \lambda_4 \mathcal{L}_{ctr}, \quad (7)$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are hyper-parameters that are empirically set to 1.
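Assembling the overall objective is then a weighted sum mirroring Eqs. (6)-(7); in the sketch below, the dictionary keys are hypothetical names for the individual loss terms:

```python
def total_loss(losses, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Eq. (7); all lambda weights are empirically set to 1."""
    l1, l2, l3, l4 = lambdas
    return (losses['rpn'] + losses['rcnn']
            + l1 * losses['nlc'] + l2 * losses['sem2d']
            + l3 * losses['sem3d'] + l4 * losses['ctr'])
```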
4 EXPERIMENTS
4.1 EXPERIMENT SETTINGS
Datasets and Metrics. We conducted experiments on the prevailing KITTI benchmark dataset, which contains two modalities: 3D point clouds and 2D RGB images. Following previous works (Shi et al., 2020a), we divided all training data into two subsets, i.e., 3712 samples for training and the remaining 3769 for validation. Performance is evaluated by the Average Precision (AP) metric under IoU thresholds of 0.7, 0.5, and 0.5 for the car, pedestrian, and cyclist categories, respectively. We computed APs with 40 sampling recall positions by default, instead of 11. For the 2D auxiliary task of semantic segmentation, we used the instance segmentation annotations provided by Qi et al. (2019). Besides, we also conducted experiments on the Waymo Open Dataset (WOD) (Sun et al., 2020), which can be found in Appendix A.4.
Implementation Details. For the image branch, we used ResNet18 (He et al., 2016) as the backbone encoder, followed by a decoder composed of a pyramid pooling module (Zhao et al., 2017) and several upsampling layers to produce the outputs of semantic segmentation and NLC map estimation. For the point cloud branch, we deployed a point-based network similar to that of Yang et al. (2020) with extra FP layers, and further applied semantic-guided farthest point sampling (S-FPS) (Chen et al., 2022) in the SA layers. We implemented bidirectional feature propagation between the top of each SA or FP layer and the corresponding locations in the image branch. Please refer to Appendix A.2 for more details.
4.2 COMPARISON WITH STATE-OF-THE-ART METHODS
We submitted our results to the official website of KITTI. As compared in Table 1, our BiProDet outperforms existing state-of-the-art methods on the KITTI test set by a remarkable margin, i.e., an absolute increase of 2.1% mAP over the second-best method, EQ-PVRCNN. Notably, by the time of submission, we ranked 1st on the KITTI 3D detection benchmark for the cyclist class. In particular, BiProDet shows consistent and more obvious superiority at the “moderate” and “hard” levels, i.e., on distant or highly-occluded objects with sparse points. We ascribe such performance gains to the fact that our proposed bidirectional feature propagation mechanism contributes to more adequate exploitation of the complementary information between the modalities, as well as to the effects of the 2D auxiliary tasks (as verified in Sec. 4.3). Besides, we can observe from the visual results (as presented in Appendix A.7) that our BiProDet is able to produce higher-quality 3D bounding boxes in varying scenes.
4.3 ABLATION STUDY
We conducted comprehensive ablation studies to validate the effectiveness and explore the impacts of key modules involved in the overall learning framework.
Effectiveness of Bidirectional Propagation. We performed detailed ablation studies on specific multi-modal feature exploitation and interaction strategies. We started by presenting a single-modal baseline (Table 2 (a)) that only preserves the point cloud branch of our BiProDet for both training and inference. Based on our complete bidirectional propagation pipeline (Table 2 (e)), we explored another two variants, shown in Table 2 (c) and Table 2 (d), solely preserving the point-to-pixel and pixel-to-point feature propagation in our 3D detectors, respectively. Note that in Table 2 (c) the point-to-pixel feature flow was only enabled during training, and we detached the point-to-pixel module as well as the whole image branch during inference. In addition, in Table 2 (b), we replaced the bidirectional feature propagation of our BiProDet with the input fusion strategy proposed by Wang et al. (2021), which decorates point clouds with CNN features deduced from the image branch. Empirically, we can draw several conclusions that strongly support our preceding claims. First, the performance of Table 2 (a) turns out to be the worst among all variants, which reveals the superiority of cross-modal learning. Second, comparing Table 2 (a) and Table 2 (c), the mAP is largely boosted from 75.88% to 77.01%. Considering that, during the inference stage, these two variants have identical forms of input and network structure, the resulting improvement strongly indicates that the representation ability of the 3D LiDAR branch is indeed strengthened. Third, comparing Table 2 (c) and Table 2 (d) with Table 2 (e), we can verify the superiority of bidirectional propagation (78.64%) over the unidirectional schemes (77.01% and 77.47%). If we pay particular attention to Table 2 (d) and Table 2 (e), we can conclude that our newly proposed point-to-pixel feature propagation direction brings a further 1.17% mAP increase over the previously explored pixel-to-point paradigm. Besides, by comparing Table 2 (b) (77.07%) and Table 2 (d) (77.47%), whose information propagation directions are both from the 2D image domain to the 3D point cloud domain, we can demonstrate the superiority of feature-level propagation over its input-level counterpart (Wang et al., 2021).
Effectiveness of Image Branch. Comparing Table 3 (d) with Tables 3 (e)-(g), the performance is steadily boosted from 75.88% to 78.64% with the gradual addition of the 2D image branch and the two auxiliary tasks. These results indicate that the additional semantic and geometric features learned from the image modality are indeed effective supplements to the point cloud representation, leading to significantly improved 3D object detection performance.
Effectiveness of Auxiliary Tasks. Comparing Table 3 (a) with Tables 3 (b)-(d), the mAP is boosted by 0.70% when incorporating 3D semantic segmentation and 3D center estimation, and can be further improved by 2.76% after introducing 2D semantic segmentation and NLC map estimation (Tables 3 (d)-(g)). However, only integrating image features without the 2D auxiliary tasks (comparing the results of Tables 3 (d) and (e)) brings a limited improvement of 0.5%. This observation shows that the 2D auxiliary tasks, especially the proposed 2D NLC map estimation, do enforce the image branch to learn complementary information for the detector.
Robustness against Input Corruption. We also conducted extensive experiments to verify the robustness of our BiProDet to sensor perturbation. Specifically, we added Gaussian noise to the reflectance values of points or to the RGB images. Fig. 5 shows that the mAP of our cross-modal BiProDet is consistently higher than that of the single-modal baseline and decreases more slowly as the LiDAR noise level increases. In particular, as listed in Table 4, when the variance of the LiDAR noise is set to 0.15, the perturbation affects our cross-modal BiProDet much less than the single-modal detector. Besides, even when applying the corruption to both the LiDAR input and the RGB images, the mAP of our BiProDet only drops by 2.49%.
5 CONCLUSION
We have presented a novel cross-modal 3D detector, namely BiProDet, that fully exploits complementary information from the image domain in two aspects. First, we proposed point-to-pixel feature propagation, which enables gradients backpropagated from the training loss of the image branch to augment the expressive power of the 3D point cloud backbone. Second, we proposed NLC map estimation as an auxiliary task to promote the image branch to learn local spatial-aware representations rather than just semantic features. Our BiProDet achieves state-of-the-art results on the KITTI detection benchmark. Extensive ablative experiments demonstrate the robustness of our BiProDet against sensor noise and its generalization to LiDAR signals with fewer beams. The decent results demonstrate the potential of joint training between 3D object detection and more 2D scene understanding tasks. We believe our new perspective will further inspire investigations on multi-task multi-modal learning for scene understanding in autonomous driving.
ACKNOWLEDGEMENT
This work was supported in part by the Hong Kong Research Grants Council under Grants 11219422, 11202320, and 11218121, and in part by Hong Kong Innovation and Technology Fund under Grant MHP/117/21.
REPRODUCIBILITY STATEMENT
We provide the source code and configuration in the supplementary material for the experimental results presented in the main text. We specify the settings of hyper-parameters, the training scheme, and the implementation details of our method in Section 4.1 and Appendix A.2. Besides, we also give a clear explanation of the used datasets. We repeated the experiments on the KITTI dataset several times to check the correctness and reproducibility of the implementation.
ETHICS STATEMENT
Proposing effective cross-modal 3D object detectors would benefit a wide range of applications. For example, the incomplete information obtained from a single-modal sensor in autonomous driving vehicles may lead to serious traffic accidents. Cross-modal detectors are expected to achieve more accurate detection results and to be more robust to environmental perturbation. Besides, this study was conducted using publicly available datasets and without surveying participants for experiments, so we do not consider this study to raise privacy or data security issues. We cited the creators when using existing datasets.
A APPENDIX
In this appendix, we provide the details omitted from the manuscript due to space limitations. We organize the appendix as follows.
• Section A.1: Details of the normalized local coordinate (NLC) System. • Section A.2: Implementation details. • Section A.3: More quantitative results. • Section A.4: Experimental results on Waymo dataset. • Section A.5: More ablation studies. • Section A.6: Efficiency analysis. • Section A.7: Visual results of 3D object detection. • Section A.8: Visual results of 2D semantic segmentation. • Section A.9: Details on the official KITTI test leaderboard.
A.1 DETAILS OF NORMALIZED LOCAL COORDINATE (NLC) SYSTEM
The aim of the cross-modal 3D detector is to estimate the bounding box parameterized by the object dimensions $(w, l, h)$, the center location $(x_c, y_c, z_c)$, and the orientation angle $\theta$. The input contains an RGB image $X \in \mathbb{R}^{3 \times H \times W}$, where $H$ and $W$ represent the height and width of the image, respectively, and a 3D LiDAR point cloud $\{p_i \mid i = 1, ..., N\}$, where $N$ is the number of points and each point $p_i \in \mathbb{R}^4$ is represented by its 3D location $(x_p, y_p, z_p)$ in the LiDAR coordinate system and the reflectance value $\rho$.
To build the correspondence between the 2D and 3D modalities, we project the observed points from the 3D coordinate system to the 2D coordinate system on the image plane:
$$d_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}, \quad (8)$$
where $u$, $v$, and $d_c$ denote the corresponding coordinates and depth on the image plane, $R \in \mathbb{R}^{3 \times 3}$ and $T \in \mathbb{R}^{3 \times 1}$ denote the rotation and translation matrices of the LiDAR relative to the camera, and $K \in \mathbb{R}^{3 \times 3}$ is the camera intrinsic matrix. Given the point cloud and the bounding box of an object, the LiDAR coordinates of foreground points can be transformed into the NLC system by proper translation, rotation, and scaling, i.e.,
$$\begin{bmatrix} x_p^{NLC} \\ y_p^{NLC} \\ z_p^{NLC} \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} x_p^{LCS} \\ y_p^{LCS} \\ z_p^{LCS} \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix} = \begin{bmatrix} 1/l & 0 & 0 \\ 0 & 1/w & 0 \\ 0 & 0 & 1/h \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_p - x_c \\ y_p - y_c \\ z_p - z_c \end{bmatrix} + \begin{bmatrix} 0.5 \\ 0.5 \\ 0.5 \end{bmatrix}, \quad (9)$$
where $(x_p^{NLC}, y_p^{NLC}, z_p^{NLC})$ and $(x_p^{LCS}, y_p^{LCS}, z_p^{LCS})$ denote the coordinates of points in the NLC system and the local coordinate system, respectively. Suppose instead that the bounding box is unknown while the global coordinates and NLCs of points are known; then we can build a system of equations in the box parameters ($x_c$, $y_c$, $z_c$, $w$, $l$, $h$, $\theta$). To solve for all 7 parameters, the global coordinates and NLCs of at least 7 points are required.
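For reference, Eqs. (8) and (9) translate directly into a few lines of NumPy. The sketch below is illustrative; in particular, the (x_c, y_c, z_c, l, w, h, θ) ordering of the box parameters is our assumption:

```python
import numpy as np

def project_to_image(pts, K, R, T):
    """Eq. (8): project LiDAR points (N, 3) onto the image plane,
    returning pixel coordinates (u, v) and camera depth d_c."""
    cam = pts @ R.T + T.reshape(1, 3)   # LiDAR frame -> camera frame
    uvd = cam @ K.T                     # apply the camera intrinsics
    d = uvd[:, 2]
    return uvd[:, 0] / d, uvd[:, 1] / d, d

def to_nlc(pts, box):
    """Eq. (9): map points (N, 3) into the normalized local coordinate
    system of a box; interior points land in [0, 1]^3."""
    xc, yc, zc, l, w, h, theta = box
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (pts - np.array([xc, yc, zc])) @ rot.T   # translate, then rotate
    return local / np.array([l, w, h]) + 0.5         # scale and shift
```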
Discussion. Notably, it is difficult to infer NLCs of points with only LiDAR data for objects far away or with low reflectivity, since the point clouds are sparse. Meanwhile, the contours and appearance of these cases are still visible in RGB images, so we propose to estimate the normalized local coordinates from images. Then, we can retrieve the estimated NLCs of observed points based on the 2D-3D correspondence. Finally, we expect the proposed cross-modal detector to achieve higher detection accuracy with estimated NLCs and known global coordinates.
A.2 IMPLEMENTATION DETAILS
Network Architecture. Fig. 6 illustrates the architectures of the point cloud and image backbone networks. For the encoder of the point cloud branch, we further show the details of multi-scale
grouping (MSG) network in Table 5. Following 3DSSD (Yang et al., 2020), we take key points sampled by the 3rd SA layer to generate vote points and estimate the 3D box. Then we feed these 3D boxes and the features output by the last FP layer to the refinement stage. Besides, we adopt density-aware RoI grid pooling (Hu et al., 2022) to encode point density as an additional feature. Note that 3D center estimation (He et al., 2020b) aims to learn the relative position of each foreground point to the object center, while the 3D box is estimated based on sub-sampled points. Thus, the auxiliary task of 3D center estimation differs from 3D box estimation and can facilitate learning structure-aware features of objects.
Training Details. In the experiments on the KITTI dataset, we adopted Adam (Kingma & Ba, 2015) ($\beta_1 = 0.9$, $\beta_2 = 0.99$) to optimize our BiProDet. We initialized the learning rate to 0.003 and updated it with the one-cycle policy (Smith, 2017). We trained the model for a total of 80 epochs in an end-to-end manner. In our experiments, the batch size was set to 8, equally distributed over 4 NVIDIA 3090 GPUs. We kept the input image at its original resolution and padded it to the size of 1248 × 376, and down-sampled the input point cloud to 16384 points during training and inference. Following common practice, we set the detection ranges of the x, y, and z axes to [0m, 70.4m], [-40m, 40m], and [-3m, 1m], respectively.
Data Augmentation. We applied common data augmentation strategies at the global and object levels. The global-level augmentation includes random global flipping, global scaling with a random scaling factor between 0.95 and 1.05, and global rotation around the z-axis with a random angle in the range of [−π/4, π/4]. Each of the three augmentations was performed with a 50% probability for each sample. The object-level augmentation refers to copying objects from other scenes and pasting them into the current scene (Yan et al., 2018). In order to perform sampling synchronously on point clouds and images, we utilized the instance masks provided by Qi et al. (2019). Specifically, we pasted both the point clouds and the pixels of sampled objects into the point clouds and images of new scenes, respectively.
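A sketch of the global-level augmentations is given below; it assumes boxes stored as (x, y, z, l, w, h, θ) rows, a conventional layout rather than a detail stated in the text:

```python
import numpy as np

def global_augment(points, boxes):
    """Random flip, scaling, and z-rotation, each with 50% probability."""
    if np.random.rand() < 0.5:                       # flip across the x-z plane
        points[:, 1] *= -1
        boxes[:, 1] *= -1
        boxes[:, 6] = -boxes[:, 6]
    if np.random.rand() < 0.5:                       # global scaling
        s = np.random.uniform(0.95, 1.05)
        points[:, :3] *= s
        boxes[:, :6] *= s
    if np.random.rand() < 0.5:                       # rotation around the z-axis
        a = np.random.uniform(-np.pi / 4, np.pi / 4)
        c, s_ = np.cos(a), np.sin(a)
        rot = np.array([[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]])
        points[:, :3] = points[:, :3] @ rot.T
        boxes[:, :3] = boxes[:, :3] @ rot.T
        boxes[:, 6] += a
    return points, boxes
```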
A.3 MORE QUANTITATIVE RESULTS
Performance on KITTI Val Set. We also report the performance of our BiProDet on all three classes of the KITTI validation set in Table 6, where it can be seen that our BiProDet achieves the highest mAP of 77.73%, notably higher than that of the second-best method, CAT-Det.
Performance of Single-Class Detector. Quite a few methods (Deng et al., 2021; Zheng et al., 2021) train models only for car detection. Empirically, single-class detectors perform better on the car class compared with multi-class detectors. Therefore, we also provide the performance of BiProDet trained only for the car class, and compare it with several state-of-the-art methods in Table 7.
Performance of NLC Map Estimation. The NLC map estimation task is introduced to guide the image branch to learn local spatial-aware features. We evaluated the predicted NLC of pixels containing at least one projected point with mean absolute error (MAE) for each object:
$$\mathrm{MAE}_q = \frac{1}{N_{obj}} \sum_{i=1}^{N} \left| q_{p_i}^{NLC} - \hat{q}_{p_i}^{NLC} \right| \cdot \mathbb{1}_{p_i}, \quad q \in \{x, y, z\}, \quad (10)$$
where $N_{obj}$ is the number of LiDAR points inside the ground-truth box of the object, $\mathbb{1}_{p_i}$ indicates that the evaluation is calculated only with foreground points inside the box, and $q_{p_i}^{NLC}$ and $\hat{q}_{p_i}^{NLC}$ are the ground-truth and predicted normalized local coordinates at the corresponding pixel for the foreground point. Finally, we take the mean value of $\mathrm{MAE}_q$ over all instances, denoted $\mathrm{mMAE}_q$. We report this metric over different difficulty levels for the three categories on the KITTI val set in Figure 7. We can observe that, for the car class, the $\mathrm{mMAE}$ error is only 0.0619, i.e., ±6.19 cm error per meter. For the challenging pedestrian and cyclist categories, the error becomes larger due to the smaller size and the non-rigid shape.
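Computationally, the metric averages per-object MAEs over all instances; a sketch follows, with obj_ids being a hypothetical per-point instance label:

```python
import numpy as np

def mmae(pred_nlc, gt_nlc, obj_ids):
    """mMAE of Eq. (10): per-axis MAE within each object,
    then averaged over all instances."""
    errs = [np.abs(pred_nlc[obj_ids == oid] - gt_nlc[obj_ids == oid]).mean(axis=0)
            for oid in np.unique(obj_ids)]
    return np.mean(errs, axis=0)   # (3,): mMAE_x, mMAE_y, mMAE_z
```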
Generalization to Asymmetric Backbones. As shown in Fig. 6, we originally adopted an encoder-decoder network in the LiDAR branch that is architecturally similar to the image backbone. Nevertheless, it is worth clarifying that our approach is not limited to symmetrical structures and can be generalized to different point-based backbones. Here, we replaced the 3D branch of the original framework with an efficient single-stage detector, SASA (Chen et al., 2022), whose LiDAR backbone contains only an encoder and is thus asymmetric to the encoder-decoder structure of the image backbone. Accordingly, the proposed bidirectional propagation is only performed between the 3D backbone and the encoder of the 2D image backbone. The experimental results are shown in Table 8. We can observe that the proposed method works well even when the two backbones are asymmetric, which demonstrates the satisfactory generalization ability of our method across different LiDAR backbones.
A.4 RESULTS ON WAYMO OPEN DATASET
The Waymo Open Dataset (Sun et al., 2020) is a large-scale dataset for 3D object detection. It contains 798 sequences (15836 frames) for training, and 202 sequences (40077 frames) for validation. According to the number of points inside the object and the difficulty of annotation, the objects are further divided into two difficulty levels: LEVEL 1 and LEVEL 2. Following common practice, we adopted the metrics of mean Average Precision (mAP) and mean Average Precision weighted by
heading accuracy (mAPH), and reported the performance on both LEVEL 1 and LEVEL 2. We set the detection range to [-75.2m, 75.2m] for the x and y axes, and [-2m, 4m] for the z axis. Following Wang et al. (2021) and Bai et al. (2022), the training on the Waymo dataset consists of two stages to allow flexible augmentations. First, we trained only the LiDAR branch, without image inputs and bidirectional propagation, for 30 epochs, with the copy-and-paste augmentation enabled. Then, we trained the whole pipeline for another 6 epochs, during which copy-and-paste was disabled. Note that the image semantic segmentation head is disabled, since ground-truth segmentation maps are not provided (Sun et al., 2020).
As shown in Table 9, our method achieves substantial improvements compared with previous state-of-the-art methods. Particularly, unlike existing approaches such as PointAugmenting (Wang et al., 2021) and TransFusion (Bai et al., 2022), where the camera backbone is pre-trained on other datasets and then frozen, we trained the entire pipeline in an end-to-end manner. It can be seen that even without the 2D segmentation auxiliary task, our method still achieves higher accuracy under all scenarios except “Ped L2”, demonstrating its advantage.
A.5 MORE ABLATION STUDIES
Effectiveness of Multi-stage Interaction. As mentioned before, both the 2D and 3D backbones adopt an encoder-decoder structure, and we perform bidirectional feature propagation at both the downsampling and upsampling stages. Here, we conducted experiments to verify the superiority of multi-stage interaction over single-stage interaction. As shown in Table 10, only performing the bidirectional feature propagation in the encoder (i.e., Table 10 (b)) or the decoder (i.e., Table 10 (c)) leads to worse performance than performing it in both stages (i.e., Table 10 (d)).
Effect of Semantic-guided Point Sampling. When performing downsampling in the SA layers of the point cloud branch, we adopted S-FPS (Chen et al., 2022) to explicitly preserve as many foreground points as possible. We report the percentage of sampled foreground points and the instance recall (i.e., the ratio of instances that retain at least one point) in Table 11, where it can be seen that exploiting supplementary semantic features from images leads to a substantially higher ratio of sampled foreground points and better instance recall during S-FPS.
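For intuition, S-FPS augments the usual farthest-point criterion with the predicted foreground score. The sketch below follows the distance-times-score form described by Chen et al. (2022); the exact weighting used there may differ in detail:

```python
import numpy as np

def s_fps(points, scores, k, gamma=1.0):
    """Semantic-guided FPS: greedily pick k points maximizing
    (distance to the selected set) x (foreground score ** gamma)."""
    n = points.shape[0]
    dist = np.full(n, np.inf)
    idx = np.zeros(k, dtype=np.int64)
    idx[0] = int(np.argmax(scores))        # start at the most confident point
    for i in range(1, k):
        d = np.sum((points - points[idx[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)         # squared distance to the selected set
        idx[i] = int(np.argmax(dist * scores ** gamma))
    return idx
```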
Influence on 2D Semantic Segmentation. We also aimed to demonstrate that the 2D-3D joint learning paradigm benefits not only the 3D object detection task but also the 2D semantic segmentation task. As shown in Table 12, the deep interaction between the modalities yields an improvement of 4.42% mIoU. The point features naturally complement RGB image features by providing 3D geometry and semantics, which are robust to illumination changes and help distinguish different classes of objects. The results suggest the potential of joint training between 3D object detection and more 2D scene understanding tasks in autonomous driving.
Conditional Analysis. To better understand where the improvement comes from when using additional image features, we compared BiProDet with the single-modal detector over different occlusion levels and distance ranges. The results shown in Table 13 and Table 14 include separate APs for objects belonging to different occlusion levels, and APs for the moderate class in different distance ranges. For car detection, our BiProDet achieves larger accuracy gains for long-distance and highly occluded objects, which suffer from the sparsity of observed LiDAR points. The cyclist and pedestrian classes are much more difficult on account of their small sizes, non-rigid structures, and fewer training samples. For these two categories, BiProDet still brings consistent and significant improvements across different levels, even in extremely difficult cases.
Generalization to Sparse LiDAR Signals. We also compared our BiProDet with the single-modal baseline on LiDAR point clouds of various sparsity. In practice, following Pseudo-LiDAR++ (You et al., 2020), we simulated the 32-beam, 16-beam, and 8-beam LiDAR signals by selecting LiDAR points whose elevation angles fall within specific intervals. As shown in Table 15, the proposed BiProDet outperforms the single-modal baseline under all settings. The consistent improvements suggest that our method can generalize to sparser signals. Besides, the proposed BiProDet performs significantly better than the baseline in the setting of LiDAR signals with fewer beams, demonstrating the effectiveness of our method in exploiting the supplementary information in the image domain.
A.6 EFFICIENCY ANALYSIS
We also compared the inference speed and number of parameters of the proposed BiProDet with state-of-the-art cross-modal approaches in Table 16. Our BiProDet has about the same number of parameters as CAT-Det (Zhang et al., 2022b), but a much higher inference speed at 9.52 frames per second on a single GeForce RTX 2080 Ti GPU. In general, our BiProDet is inevitably slower than some single-modal detectors, but it achieves a good trade-off between speed and accuracy among cross-modal approaches.
A.7 VISUAL RESULTS OF 3D OBJECT DETECTION
In Figure 8, we present a qualitative comparison of detection results between the single-modal baseline and our BiProDet. We can observe that the proposed BiProDet shows better localization capability than the single-modal baseline in challenging cases. Besides, we also show qualitative results of BiProDet on the KITTI test split in Figure 9. We can clearly observe that our BiProDet performs well in challenging cases, such as pedestrians and cyclists (with small sizes) and highly-occluded cars.
A.8 VISUAL RESULTS OF 2D SEMANTIC SEGMENTATION
Several examples are shown in Figure 10. For the distant cars in the first and second rows, as well as the pedestrian in the sixth row, the objects are small and PSPNet tends to treat them as background, while our BiProDet is able to correct such errors. In the third row, our BiProDet finds the dim cyclist missed by PSPNet. Our BiProDet also performs better for the highly occluded objects shown in the fourth and fifth rows. This observation shows that the 3D feature representations extracted from point clouds can boost 2D semantic segmentation, since the image-based method is sensitive to illumination and can hardly handle corner cases with only single-modal inputs.
A.9 DETAILS ON OFFICIAL KITTI TEST LEADERBOARD
We submitted the results of our BiProDet to the official KITTI website, and it ranks 1st on the 3D object detection benchmark for the cyclist class. Figure 11 shows a screenshot of the leaderboard. Figure 12 illustrates the precision-recall curves along with AP scores on different categories of the KITTI test set. The samples of the KITTI test set are quite different from those of the training/validation set in terms of scenes and camera parameters, so the impressive performance of our BiProDet on the test set demonstrates that it also generalizes well. | 1. What is the focus of the paper regarding 3D object detection?
2. What are the strengths of the proposed approach, particularly in feature propagation schemes?
3. What are the weaknesses of the paper, especially in terms of experimental design and comparisons with other works?
4. Do you have any concerns or suggestions regarding the efficacy of auxiliary tasks and cross-modal fusion?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper tackles the problem of multi-modal (LiDAR + camera) 3D object detection. In this work, the authors explore using both point-to-pixel and pixel-to-point feature propagation schemes to achieve bidirectional feature propagation. Strong experimental results are achieved on the KITTI dataset.
Strengths And Weaknesses
Strengths:
The idea of point-to-pixel projection is simple but effective.
In this work, the effect of the auxiliary tasks is well explored. In Table 3, we can see that the 2D tasks (2D segmentation and NLC) really help the 3D task.
Weaknesses:
Based on Table 3, the main performance gain of the image branch comes from the auxiliary tasks. Simply adding the image branch brings little improvement. It is not entirely fair to compare a model supervised only with detection ground truth against a model with multiple sources of supervision. Meanwhile, it is important to verify the effectiveness of the auxiliary tasks on other methods. Thus, an important missing experiment is to add the 2D segmentation and NLC losses to other cross-modal models.
In this work, camera-to-LiDAR (pixel-to-point) propagation is conducted at the point level. Recent works demonstrate that conducting cross-modal fusion in BEV space is more effective. Have the authors tried achieving camera-to-LiDAR propagation in a BEV fashion?
Missing comparison experiments: there are plenty of cross-modal 3D detection methods this year, most of which conduct experiments on the nuScenes dataset. The authors should discuss these recent works (e.g., TransFusion, BEVFusion) and conduct experiments on nuScenes to further verify the effectiveness of the proposed method.
Clarity, Quality, Novelty And Reproducibility
The writing of this paper is clear. There is no reproducibility issue, as code is provided. The novelty of this work is somewhat limited.
ICLR

Title
The Importance of Pessimism in Fixed-Dataset Policy Optimization
Abstract
We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. Our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. This analysis reveals that for naı̈ve approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. To avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. These theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four MinAtar environments.
1 INTRODUCTION
We consider fixed-dataset policy optimization (FDPO), in which a dataset of transitions from an environment is used to find a policy with high return.1 We compare FDPO algorithms by their worst-case performance, expressed as high-probability guarantees on the suboptimality of the learned policy. It is perhaps obvious that in order to maximize worst-case performance, a good FDPO algorithm should select a policy with high worst-case value. We call this the pessimism principle of exploitation, as it is analogous to the widely-known optimism principle (Lattimore & Szepesvári, 2020) of exploration.2
Our main contribution is a theoretical justification of the pessimism principle in FDPO, based on a bound that characterizes the suboptimality incurred by an FDPO algorithm. We further demonstrate how this bound may be used to derive principled algorithms. Note that the core novelty of our work is not the idea of pessimism, which is an intuitive concept that appears in a variety of contexts; rather, our contribution is a set of theoretical results rigorously explaining how pessimism is important in the specific setting of FDPO. An example conveying the intuition behind our results can be found in Appendix G.1.
We first analyze a family of non-pessimistic naı̈ve FDPO algorithms, which estimate the environment from the dataset via maximum likelihood and then apply standard dynamic programming techniques. We prove a bound which shows that the worst-case suboptimality of these algorithms is guaranteed to be small when the dataset contains enough data that we are certain about the value of every possible policy. This is caused by the outsized impact of value overestimation errors on suboptimality, sometimes called the optimizer’s curse (Smith & Winkler, 2006). It is a fundamental consequence of ignoring the disconnect between the true environment and the picture painted by our limited observations. Importantly, it is not reliant on errors introduced by function approximation.
1We use the term fixed-dataset policy optimization to emphasize the computational procedure; this setting has also been referred to as batch RL (Ernst et al., 2005; Lange et al., 2012) and more recently, offline RL (Levine et al., 2020). We emphasize that this is a well-studied setting, and we are simply choosing to refer to it by a more descriptive name.
2The optimism principle states that we should select a policy with high best-case value.
We contrast these findings with an analysis of pessimistic FDPO algorithms, which select a policy that maximizes some notion of worst-case expected return. We show that these algorithms do not require datasets which inform us about the value of every policy to achieve small suboptimality, due to the critical role that pessimism plays in preventing overestimation. Our analysis naturally leads to two families of principled pessimistic FDPO algorithms. We prove their improved suboptimality guarantees, and confirm our claims with experiments on a gridworld.
Finally, we extend one of our pessimistic algorithms to the deep learning setting. Recently, several deep-learning-based algorithms for fixed-dataset policy optimization have been proposed (Agarwal et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Laroche et al., 2019; Jaques et al., 2019; Kidambi et al., 2020; Yu et al., 2020; Wu et al., 2019; Wang et al., 2020; Kumar et al., 2020; Liu et al., 2020). Our work is complementary to these results, as our contributions are conceptual, rather than algorithmic. Our primary goal is to theoretically unify existing approaches and motivate the design of pessimistic algorithms more broadly. Using experiments in the MinAtar game suite (Young & Tian, 2019), we provide empirical validation for the predictions of our analysis.
The problem of fixed-dataset policy optimization is closely related to the problem of reinforcement learning, and as such, there is a large body of work which contains ideas related to those discussed in this paper. We discuss these works in detail in Appendix E.
2 BACKGROUND
We anticipate most readers will be familiar with the concepts and notation, which is fairly standard in the reinforcement learning literature. In the interest of space, we relegate a full presentation to Appendix A. Here, we briefly give an informal overview of the background necessary to understand the main results.
We represent the environment as a Markov Decision Process (MDP), denoted M := 〈S,A,R, P, γ, ρ〉. We assume without loss of generality that R(〈s, a〉) ∈ [0, 1], and denote its expectation as r(〈s, a〉). ρ represents the start-state distribution. Policies π can act in the environment, represented by action matrix Aπ , which maps each state to the probability of each state-action when following π. Value functions v assign some real value to each state. We use vπM to denote the value function which assigns the sum of discounted rewards in the environment when following policy π. A dataset D contains transitions sampled from the environment. From a dataset, we can compute the empirical reward and transition functions, rD and PD, and the empirical policy, π̂D.
An important concept for our analysis is the value uncertainty function, denoted µπD,δ , which returns a high-probability upper-bound to the error of a value function derived from datasetD. Certain value uncertainty functions are decomposable by states or state-actions, meaning they can be written as the weighted sum of more local uncertainties. See Appendix B for more detail.
Our goal is to analyze the suboptimality of a specific class of FDPO algorithms, called value-based FDPO algorithms, which have a straightforward structure: they use a fixed-dataset policy evaluation (FDPE) algorithm to assign a value to each policy, and then select the policy with the maximum value. Furthermore, we consider FDPE algorithms whose solutions satisfy a fixed-point equation. Thus, a fixed-point equation defines a FDPE objective, which in turn defines a value-based FDPO objective; we call the set of all algorithms that implement these objectives the family of algorithms defined by the fixed-point equation.
3 OVER/UNDER DECOMPOSITION OF SUBOPTIMALITY
Our first theoretical contribution is a simple but informative bound on the suboptimality of any value-based FDPO algorithm. Next, in Section 4, we make this concrete by defining the family of naı̈ve algorithms and invoking this bound. This bound is insightful because it distinguishes the impact of errors of value overestimation from errors of value underestimation, defined as: Definition 1. Consider any fixed-dataset policy evaluation algorithm E on any dataset D and any policy π. Denote vπD := E(D,π). We define the underestimation error as Eρ[vπM − vπD] and overestimation error as Eρ[vπD − vπM].
The following lemma shows how these quantities can be used to bound suboptimality.
Lemma 1 (Value-based FDPO suboptimality bound). Consider any value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}$, with fixed-dataset policy evaluation subroutine $\mathcal{E}$. For any policy $\pi$ and dataset $D$, denote $v^\pi_D := \mathcal{E}(D, \pi)$. The suboptimality of $\mathcal{O}^{VB}$ is bounded by

$$\mathrm{SUBOPT}(\mathcal{O}^{VB}(D)) \le \inf_\pi \Big( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}\big] + \mathbb{E}_\rho\big[v^\pi_\mathcal{M} - v^\pi_D\big] \Big) + \sup_\pi \Big( \mathbb{E}_\rho\big[v^\pi_D - v^\pi_\mathcal{M}\big] \Big)$$

Proof. See Appendix C.1.
This bound is tight; see Appendix C.2. The bound highlights the potentially outsized impact of overestimation on the suboptimality of a FDPO algorithm. To see this, we consider each of its terms in isolation:
$$\mathrm{SUBOPT}(\mathcal{O}^{VB}(D)) \le \underbrace{\inf_\pi \Big( \overbrace{\mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}\big]}^{(A1)} + \overbrace{\mathbb{E}_\rho\big[v^\pi_\mathcal{M} - v^\pi_D\big]}^{(A2)} \Big)}_{(A)} + \underbrace{\sup_\pi \Big( \overbrace{\mathbb{E}_\rho\big[v^\pi_D - v^\pi_\mathcal{M}\big]}^{(B1)} \Big)}_{(B)}$$
The term labeled (A) reflects the degree to which the dataset informs us of a near-optimal policy. For any policy π, (A1) captures the suboptimality of that policy, and (A2) captures its underestimation error. Since (A) takes an infimum, this term will be small whenever there is at least one reasonable policy whose value is not very underestimated.
On the other hand, the term labeled (B) corresponds to the largest overestimation error on any policy. Because it consists of a supremum over all policies, it will be small only when no policies are overestimated at all. Even a single overestimation can lead to significant suboptimality.
We see from these two terms that errors of overestimation and underestimation have differing impacts on suboptimality, suggesting that algorithms should be designed with this asymmetry in mind. We will see in Section 5 how this may be done. But first, let us further understand why this is necessary by studying in more depth a family of algorithms which treats its errors of overestimation and underestimation equivalently.
4 NAÏVE ALGORITHMS
The goal of this section is to paint a high-level picture of the worst-case suboptimality guarantees of a specific family of non-pessimistic approaches, which we call naı̈ve FDPO algorithms. Informally, the naı̈ve approach is to take the limited dataset of observations at face value, treating it as though it paints a fully accurate picture of the environment. Naı̈ve algorithms construct a maximum-likelihood MDP from the dataset, then use standard dynamic programming approaches on this empirical MDP. Definition 2. A naı̈ve algorithm is any algorithm in the family defined by the fixed-point function
fnaı̈ve(v π) := Aπ(rD + γPDv π).
Various FDPE and FDPO algorithms from this family could be described; in this work, we do not study these implementations in detail, although we do give pseudocode for some implementations in Appendix D.1.
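To make Definition 2 concrete, the following minimal sketch (in Python/NumPy; it is our own illustration, not the paper's Appendix D.1 pseudocode, and all names are ours) realizes a naïve value-based FDPO algorithm over deterministic policies by running value iteration on the empirical MDP $(r_D, P_D)$:

```python
import numpy as np

def naive_fdpo(r_D, P_D, gamma, n_iters=1000):
    """Value iteration on the maximum-likelihood MDP.
    r_D: (S, A) empirical rewards; P_D: (S, A, S) empirical transitions."""
    n_states, n_actions = r_D.shape
    v = np.zeros(n_states)
    for _ in range(n_iters):
        q = r_D + gamma * P_D @ v   # empirical Bellman backup, shape (S, A)
        v = q.max(axis=1)
    return q.argmax(axis=1), v      # greedy (deterministic) policy and its value
```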
One example of a naïve FDPO algorithm which can be found in the literature is certainty equivalence (Jiang, 2019a). The core ideas behind naïve algorithms can also be found in the function approximation literature, for example in FQI (Ernst et al., 2005; Jiang, 2019b). Additionally, when available data is held fixed, nearly all existing deep reinforcement learning algorithms are transformed into naïve value-based FDPO algorithms. For example, DQN (Mnih et al., 2015) with a fixed replay buffer is a naïve value-based FDPO algorithm.

Theorem 1 (Naïve FDPO suboptimality bound). Consider any naïve value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{naïve}}$. Let $\mu$ be any value uncertainty function. The suboptimality of $\mathcal{O}^{VB}_{\text{naïve}}$ is bounded with probability at least $1-\delta$ by

$$\mathrm{SUBOPT}(\mathcal{O}^{VB}_{\text{naïve}}(D)) \le \inf_\pi \Big( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}\big] + \mathbb{E}_\rho\big[\mu^\pi_{D,\delta}\big] \Big) + \sup_\pi\, \mathbb{E}_\rho\big[\mu^\pi_{D,\delta}\big]$$
Proof. This result follows directly from Lemma 1 and Lemma 3.
The infimum term is small whenever there is some reasonably good policy with low value uncertainty. In practice, this condition can typically be satisfied, for example by including expert demonstrations in the dataset. On the other hand, the supremum term will only be small if we have low value uncertainty for all policies – a much more challenging requirement. This explains the behavior of pathological examples, e.g. in Appendix G.1, where performance is poor despite access to virtually unlimited amounts of data from a near-optimal policy. Such a dataset ensures that the first term will be small by reducing value uncertainty of the near-optimal data collection policy, but does little to reduce the value uncertainty of any other policy, leading the second term to be large.
However, although pathological examples exist, it is clear that this bound will not be tight on all environments. It is reasonable to ask: is it likely that this bound will be tight on real-world examples? We argue that it likely will be. We identify two properties that most real-world tasks share: (1) The set of policies is pyramidal: there are an enormous number of bad policies, many mediocre policies, a few good policies, etc. (2) Due to the size of the state space and cost of data collection, most policies have high value uncertainty.
Given that these assumptions hold, naı̈ve algorithms will perform as poorly on most real-world environments as they do on pathological examples. Consider: there are many more policies than there is data, so there will be many policies with high value uncertainty; naı̈ve algorithms will likely overestimate several of these policies, and erroneously select one; since good policies are rare, the selected policy will likely be bad. It follows that running naı̈ve algorithms on real-world problems will typically yield suboptimality close to our worst-case bound. And, indeed, on deep RL benchmarks, which are selected due to their similarity to real-world settings, overestimation has been widely observed, typically correlated with poor performance (Bellemare et al., 2016; Van Hasselt et al., 2016; Fujimoto et al., 2019).
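The outsized role of overestimation is easy to reproduce numerically. The toy simulation below (our own illustration, not an experiment from the paper) evaluates many hypothetical policies by noisy sample-mean estimates and then selects the argmax, exhibiting the optimizer's curse: the selected policy's estimate is biased upward while its true value is typically mediocre.

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, n_samples, noise = 10_000, 5, 0.2
true_values = rng.uniform(0.0, 0.5, n_policies)          # good policies are rare
# Naive evaluation: average a few noisy return samples per policy.
estimates = true_values + rng.normal(0, noise, (n_samples, n_policies)).mean(0)
best = estimates.argmax()                                # naive FDPO's selection
print(f"estimated {estimates[best]:.2f}, true {true_values[best]:.2f}, "
      f"optimal {true_values.max():.2f}")
```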
5 THE PESSIMISM PRINCIPLE
“Behave as though the world was plausibly worse than you observed it to be.” The pessimism principle tells us how to exploit our current knowledge to find the stationary policy with the best worst-case guarantee on expected return. We consider two specific families of pessimistic algorithms, the uncertainty-aware pessimistic algorithms and proximal pessimistic algorithms, and bound the worst-case suboptimality of each. These algorithms each include a hyperparameter, α, controlling the amount of pessimism, interpolating from fully-naïve to fully-pessimistic. (For a discussion of the implications of the latter extreme, see Appendix G.2.) Then, we will compare the two families, and see how the proximal family is simply a trivial special case of the more general uncertainty-aware family of methods.
5.1 UNCERTAINTY-AWARE PESSIMISTIC ALGORITHMS
Our first family of pessimistic algorithms is the uncertainty-aware (UA) pessimistic algorithms. As the name suggests, this family of algorithms estimates the state-wise Bellman uncertainty and penalizes policies accordingly, leading to a pessimistic value estimate and a preference for policies with low value uncertainty.

Definition 3. An uncertainty-aware pessimistic algorithm, with a Bellman uncertainty function $u^\pi_{D,\delta}$ and pessimism hyperparameter $\alpha \in [0, 1]$, is any algorithm in the family defined by the fixed-point function

$$f_{\text{ua}}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha u^\pi_{D,\delta}$$

This fixed-point function is simply the naïve fixed-point function penalized by the Bellman uncertainty. This can be interpreted as being pessimistic about the outcome of every action. Note that it remains to specify a technique to compute the Bellman uncertainty function, e.g. Appendix B.1, in order to get a concrete algorithm. It is straightforward to construct algorithms from this family by modifying naïve algorithms to subtract the penalty term. Similar algorithms have been explored in the safe RL literature (Ghavamzadeh et al., 2016; Laroche et al., 2019) and the robust MDP literature (Givan et al., 1997), where algorithms with high-probability performance guarantees are useful in the context of ensuring safety.
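Continuing the illustrative sketch from Section 4 (again our own code, restricted to deterministic policies for simplicity), the only change is that each empirical backup is penalized by a state-action-wise Bellman uncertainty, such as the Hoeffding-based one derived in Appendix B.1:

```python
import numpy as np

def ua_pessimistic_fdpo(r_D, P_D, u, gamma, alpha, n_iters=1000):
    """Value iteration with pessimistic backups. u is a state-action-wise
    Bellman uncertainty of shape (S, A); alpha in [0, 1] sets the pessimism."""
    v = np.zeros(r_D.shape[0])
    for _ in range(n_iters):
        q = r_D + gamma * P_D @ v - alpha * u   # penalized empirical backup
        v = q.max(axis=1)
    return q.argmax(axis=1), v
```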
Theorem 2 (Uncertainty-aware pessimistic FDPO suboptimality bound). Consider an uncertainty-aware pessimistic value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{ua}}$. Let $u^\pi_{D,\delta}$ be any Bellman uncertainty function, $\mu^\pi_{D,\delta}$ be a corresponding value uncertainty function, and $\alpha \in [0, 1]$ be any pessimism hyperparameter. The suboptimality of $\mathcal{O}^{VB}_{\text{ua}}$ is bounded with probability at least $1-\delta$ by

$$\mathrm{SUBOPT}(\mathcal{O}^{VB}_{\text{ua}}(D)) \le \inf_\pi \Big( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}\big] + (1+\alpha)\cdot\mathbb{E}_\rho\big[\mu^\pi_{D,\delta}\big] \Big) + (1-\alpha)\cdot\Big( \sup_\pi\, \mathbb{E}_\rho\big[\mu^\pi_{D,\delta}\big] \Big)$$
Proof. See Appendix C.7.
This bound should be contrasted with our result from Theorem 1. With α = 0, the family of pessimistic algorithms reduces to the family of naı̈ve algorithms, so the bound is correspondingly identical. We can add pessimism by increasing α, and this corresponds to a decrease in the magnitude of the supremum term. When α = 1, there is no supremum term at all. In general, the optimal value of α lies between the two extremes.
To further understand the power of this approach, it is illustrative to compare it to imitation learning. Consider the case where the dataset contains a small number of expert trajectories but also a large number of interactions from a random policy, i.e. when learning from suboptimal demonstrations (Brown et al., 2019). If the dataset contained only a small amount of expert data, then both an UA pessimistic FDPO algorithm and an imitation learning algorithm would return a high-value policy. However, the injection of sufficiently many random interactions would degrade the performance of imitation learning algorithms, whereas UA pessimistic algorithms would continue to behave similarly to the expert data.
5.2 PROXIMAL PESSIMISTIC ALGORITHMS
The next family of algorithms we study are the proximal pessimistic algorithms, which implement pessimism by penalizing policies that deviate from the empirical policy. The name proximal was chosen to reflect the idea that these algorithms prefer policies which stay “nearby” to the empirical policy. Many FDPO algorithms in the literature, and in particular several recently-proposed deep learning algorithms (Fujimoto et al., 2019; Kumar et al., 2019; Laroche et al., 2019; Jaques et al., 2019; Wu et al., 2019; Liu et al., 2020), resemble members of the family of proximal pessimistic algorithms; see Appendix E. Also, another variant of the proximal pessimistic family, which uses state density instead of state-conditional action density, can be found in Appendix C.9.
Definition 4. A proximal pessimistic algorithm with pessimism hyperparameter α ∈ [0, 1] is any algorithm in the family defined by the fixed-point function
$$f_{\text{proximal}}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)$$

Theorem 3 (Proximal pessimistic FDPO suboptimality bound). Consider any proximal pessimistic value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{proximal}}$. Let $\mu$ be any state-action-wise decomposable value uncertainty function, and $\alpha \in [0, 1]$ be a pessimism hyperparameter. For any dataset $D$, the suboptimality of $\mathcal{O}^{VB}_{\text{proximal}}$ is bounded with probability at least $1-\delta$ by

$$\begin{aligned} \mathrm{SUBOPT}(\mathcal{O}_{\text{proximal}}(D)) \le\; & \inf_\pi \left( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}\big] + \mathbb{E}_\rho\left[ \mu^\pi_{D,\delta} + \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi, \hat\pi_D)}{(1-\gamma)^2}\right) \right] \right) \\ & + \sup_\pi \left( \mathbb{E}_\rho\left[ \mu^\pi_{D,\delta} - \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi, \hat\pi_D)}{(1-\gamma)^2}\right) \right] \right) \end{aligned}$$

Proof. See Appendix C.8.
Once again, we see that as α grows, the large supremum term shrinks; similarly, by Lemma 5, when we have α = 1, the supremum term is guaranteed to be non-positive.3 The primary limitation of
3Initially, it will contain $\mu^{\pi'}_{D,\delta}$, but this can be removed since it is not dependent on π.
the proximal approach is the looseness of the value lower-bound. Intuitively, this algorithm can be understood as performing imitation learning, but permitting minor deviations. Constraining the policy to be near in distribution to the empirical policy can fail to take advantage of highly-visited states which are reached via many trajectories. In fact, in contrast to both the naı̈ve approach and the UA pessimistic approach, in the limit of infinite data this approach is not guaranteed to converge to the optimal policy. Also, note that when α ≥ 1− γ, this algorithm is identical to imitation learning.
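For comparison, here is an analogous sketch of the proximal pessimistic update (again our own illustration, searching only deterministic policies, whereas, as noted in Section 7 and Appendix D.2.2, the true optimum of the proximal objective may be stochastic). For a deterministic candidate that takes action a in state s, the total variation from the empirical policy is 1 − π̂D(a|s):

```python
import numpy as np

def proximal_pessimistic_fdpo(r_D, P_D, pi_hat, gamma, alpha, n_iters=1000):
    """Value iteration with a proximal penalty: deviating from the empirical
    policy pi_hat (shape (S, A)) costs alpha * TV / (1 - gamma)**2."""
    penalty = alpha * (1.0 - pi_hat) / (1.0 - gamma) ** 2
    v = np.zeros(r_D.shape[0])
    for _ in range(n_iters):
        q = r_D + gamma * P_D @ v - penalty
        v = q.max(axis=1)
    return q.argmax(axis=1), v
```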
5.3 THE RELATIONSHIP BETWEEN UNCERTAINTY-AWARE AND PROXIMAL ALGORITHMS
Though these two families may appear on the surface to be quite different, they are in fact closely related. A key insight of our theoretical work is that it reveals the important connection between these two approaches. Concretely: proximal algorithms are uncertainty-aware algorithms which use a trivial value uncertainty function.
To see this, we show how to convert an uncertainty-aware penalty into a proximal penalty. Let $\mu$ be any state-action-wise decomposable value uncertainty function. For any dataset $D$, we have

$$\begin{aligned} \mu^\pi_{D,\delta} &= \mu^{\hat\pi_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\Big( (A^\pi - A^{\hat\pi_D})(u_{D,\delta} + \gamma P_D \mu^{\hat\pi_D}_{D,\delta}) \Big) && \text{(Lemma 4)} \\ &\le \mu^{\hat\pi_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left( \mathrm{TV}_\mathcal{S}(\pi, \hat\pi_D)\left(\frac{1}{(1-\gamma)^2}\right) \right). && \text{(Lemma 5)} \end{aligned}$$
We began with the uncertainty penalty. In the first step, we rewrote the uncertainty for π into the sum of two terms: the uncertainty for π̂D, and the difference in uncertainty between π and π̂D on various actions. In the second step, we chose our state-action-wise Bellman uncertainty to be $\frac{1}{1-\gamma}$, which is a trivial upper bound; we also upper-bound the signed policy difference with the total variation. This results in the proximal penalty.4
Thus, we see that proximal penalties are equivalent to uncertainty-aware penalties which use a specific, trivial uncertainty function. This result suggests that uncertainty-aware algorithms are strictly better than their proximal counterparts. There is no looseness in this result: for any proximal penalty, we will always be able to find a tighter uncertainty-aware penalty by replacing the trivial uncertainty function with something tighter.
However, currently, proximal algorithms are quite useful in the context of deep learning. This is because the only uncertainty function that can currently be implemented for neural networks is the trivial uncertainty function. Until we discover how to compute uncertainties for neural networks, proximal pessimistic algorithms will remain the only theoretically-motivated family of algorithms.
6 EXPERIMENTS
We implement algorithms from each family to empirically investigate whether their performance follows the predictions of our bounds. Below, we summarize the key predictions of our theory.
• Imitation. This algorithm simply learns to copy the empirical policy. It performs well if and only if the data collection policy performs well.
• Naı̈ve. This algorithm performs well only when almost no policies have high value uncertainty. This means that when the data is collected from any mostly-deterministic policy, performance of this algorithm will be poor, since many states will be missing data. Stochastic data collection improves performance. As the size of the dataset grows, this algorithm approaches optimality.
• Uncertainty-aware. This algorithm performs well when there is data on states visited by near-optimal policies. This is the case when a small amount of data has been collected from a near-optimal policy, or a large amount of data has been collected from a worse policy. As the size of the dataset grows, this algorithm approaches optimality. This approach outperforms all other approaches.
4When constructing the penalty, we can ignore the first term, which does not contain π, and so is irrelevant to optimization.
• Proximal. This algorithm roughly mirrors the performance of the imitation approach, but improves upon it. As the size of the dataset grows, this algorithm does not approach optimality, as the penalty persists even when the environment’s dynamics are perfectly captured by the dataset.
Our experimental results qualitatively align with our predictions in both the tabular and deep learning settings, giving evidence that the picture painted by our theoretical analysis truly describes the FDPO setting. See Appendix D for pseudocode of all algorithms; see Appendix F for details on the experimental setup; see Appendix G.3 for additional experimental considerations for deep learning experiments that will be of interest to practitioners. For an open-source implementation, including full details suitable for replication, please refer to the code in the accompanying GitHub repository: github.com/anonymized
Tabular. The first tabular experiment, whose results are shown in Figure 1(a), compares the performance of the algorithms as the policy used to collect the dataset is interpolated from the uniform random policy to an optimal policy using ε-greedy. The second experiment, whose results are shown in Figure 1(b), compares the performance of the algorithms as we increase the size of the dataset from 1 sample to 200000 samples. In both experiments, we notice a qualitative difference between the trends of the various algorithms, which aligns with the predictions of our theory.
Neural network. The results of these experiments can be seen in Figure 2. Similarly to the tabular experiments, we see that the naïve approach performs well when data is fully exploratory, and poorly when data is collected via an optimal policy; the pure imitation approach performs better when the data collection policy is closer to optimal. The pessimistic approach achieves the best of both worlds: it correctly imitates a near-optimal policy, but also learns to improve upon it somewhat when the data is more exploratory. One notable failure case is in FREEWAY, where the performance of the pessimistic approach barely improves upon the imitation policy, despite the naïve approach performing near-optimally for intermediate values of ε.
7 DISCUSSION AND CONCLUSION
In this work, we provided a conceptual and mathematical framework for thinking about fixed-dataset policy optimization. Starting from intuitive building blocks of uncertainty and the over-under decomposition, we showed the core issue with naı̈ve approaches, and introduced the pessimism principle as the defining characteristic of solutions. We described two families of pessimistic algorithms, uncertainty-aware and proximal. We see theoretically that both of these approaches have advantages over the naı̈ve approach, and observed these advantages empirically. Comparing these two families of pessimistic algorithms, we see both theoretically and empirically that uncertainty-aware
algorithms are strictly better than proximal algorithms, and that proximal algorithms may not yield the optimal policy, even with infinite data.
Future directions. Our results indicate that research in FDPO should not focus on proximal algorithms. The development of neural uncertainty estimation techniques will enable principled uncertainty-aware deep learning algorithms. As is evidenced by our tabular results, we expect these approaches to yield dramatic performance improvements, rendering algorithms derived from the proximal family (Kumar et al., 2019; Fujimoto et al., 2019; Laroche et al., 2019; Kumar et al., 2020) obsolete.
On ad-hoc solutions. It is undoubtedly disappointing to see that proximal algorithms, which are far easier to implement, are fundamentally limited in this way. It is tempting to propose various ad-hoc solutions to mitigate the flaws of proximal pessimistic algorithms in practice. However, in order to ensure that the resulting algorithm is principled, one must be careful. For example, one might consider tuning α; however, doing this tuning requires evaluating each policy in the environment, which involves gaining information by interacting with the environment, which is not permitted by the problem setting. Or, one might consider e.g. an adaptive pessimism hyperparameter which decays with the size of the dataset; however, in order for such a penalty to be principled, it must be based on an uncertainty function, at which point we may as well just use an uncertainty-aware algorithm.
Stochastic policies. One surprising property of pessimistic algorithms is that the optimal policy is often stochastic. This is because the penalty term included in their fixed-point objective is often minimized by stochastic policies. For the penalty of proximal pessimistic algorithms, it is easy to see that this will be the case for any non-deterministic empirical policy; for UA pessimistic algorithms, it is dependent on the choice of Bellman uncertainty function, but often still holds (see Appendix B.2 for the derivation of a Bellman uncertainty function with this property). This observation lends mathematical rigor to the intuition that agents should ‘hedge their bets’ in the face of epistemic uncertainty. This property also means that the simple approach of selecting the argmax action is no longer adequate for policy improvement. In Appendix D.2.2 we discuss a policy improvement procedure that takes into account the proximal penalty to find the stochastic optimal policy.
Implications for RL. Finally, due to the close connection between the FDPO and RL settings, this work has implications for deep reinforcement learning. Many popular deep RL algorithms utilize a replay buffer to break the correlation between samples in each minibatch (Mnih et al., 2015). However, since these algorithms typically alternate between collecting data and training the network, the replay buffer can also be viewed as a ‘temporarily fixed’ dataset during the training phase. These algorithms are often very sensitive to hyperparameters; in particular, they perform poorly when the number of learning steps per interaction is large (Fedus et al., 2020). This effect can be explained by our analysis: additional steps of learning cause the policy to approach its naïve FDPO fixed-point, which has poor worst-case suboptimality. A pessimistic algorithm with a better fixed-point could therefore allow us to train more per interaction, improving sample efficiency. A potential direction of future work is therefore to incorporate pessimism into deep RL.
A BACKGROUND
We write vectors using bold lower-case letters, a, and matrices using upper-case letters, A. To refer to individual cells of a vector or rows of a matrix, we use function notation, a(x). We write the identity matrix as I . We use the notation Ep[·] to denote the average value of a function under a distribution p, i.e. for any space X , distribution p ∈ Dist(X ), and function a : X → R, we have Ep[a] := Ex∼p[a(x)].
When applied to vectors or matrices, we use <, >, ≤, ≥ to denote element-wise comparison. Similarly, we use | · | to denote the element-wise absolute value of a vector: |a|(x) = |a(x)|. We use |a|+ to denote the element-wise maximum of a and the zero vector. To denote the total variation distance between two probability distributions, we use $\mathrm{TV}(\mathbf{p},\mathbf{q}) = \frac{1}{2}|\mathbf{p}-\mathbf{q}|_1$. When p and q are conditional probability distributions, we adopt the convention $\mathrm{TV}_\mathcal{X}(\mathbf{p},\mathbf{q}) = \langle \frac{1}{2}|\mathbf{p}(\cdot|x)-\mathbf{q}(\cdot|x)|_1 : x \in \mathcal{X}\rangle$, i.e., the vector of total variation distances conditioned on each x ∈ X.
Markov Decision Processes. We represent the environment with which we are interacting as a Markov Decision Process (MDP), defined in standard fashion: M := 〈S,A, R, P, γ, ρ〉. S and A denote the state and action space, which we assume are discrete. We use Z := S × A as the shorthand for the joint state-action space. The reward function R : Z → Dist([0, 1]) maps state-action pairs to distributions over the unit interval, while the transition function P : Z → Dist(S) maps state-action pairs to distributions over next states. Finally, ρ ∈ Dist(S) is the distribution over initial states. We use r to denote the expected reward function, r(〈s, a〉) := Er∼R(·|〈s,a〉)[r], which can also be interpreted as a vector r ∈ R|Z|. Similarly, note that P can be described as P : (Z × S) → R, which can be represented as a stochastic matrix P ∈ R|Z|×|S|. In order to emphasize that these reward and transition functions correspond to the true environment, we sometimes equivalently denote them as rM, PM. To denote the vectors of a constant whose sizes are the state and state-action space, we use a single dot to mean state and two dots to mean state-action, e.g., 1̇ ∈ R|S| and 1̈ ∈ R|Z|. A policy π : S → Dist(A) defines a distribution over actions, conditioned on a state. We denote the space of all possible policies as Π. We define an “activity matrix” for each policy, Aπ ∈ R|S|×|Z|, which encodes the state-conditional state-action distribution of π, by letting Aπ(s, 〈ṡ, a〉) := π(a|s) if s = ṡ, otherwise Aπ(s, 〈ṡ, a〉) := 0. Acting in the MDP according to π can thus be represented by AπP ∈ R|S|×|S| or PAπ ∈ R|Z|×|Z|. We define a value function as any v : Π → S → R or q : Π → Z → R whose output is bounded by [0, 1/(1−γ)]. Note that this is a slight generalization of the standard definition (Sutton & Barto, 2018) since it accepts a policy as an input. We use the shorthand vπ := v(π) and qπ := q(π) to denote the result of applying a value function to a specific policy, which can also be represented as a vector, vπ ∈ R|S| and qπ ∈ R|Z|. To denote the output of an arbitrary value function on an arbitrary policy, we use unadorned v and q. The expected return of an MDP M, denoted vM or qM, is the discounted sum of rewards acquired when interacting with the environment:

$$v_\mathcal{M}(\pi) := \sum_{t=0}^\infty (\gamma A^\pi P)^t A^\pi r \qquad\qquad q_\mathcal{M}(\pi) := \sum_{t=0}^\infty (\gamma P A^\pi)^t r$$

Note that vπM = AπqπM. An optimal policy of an MDP, which we denote π∗M, is a policy for which the expected return vM is maximized under the initial state distribution: π∗M := arg maxπ Eρ[vπM]. The state-wise expected returns of an optimal policy can be written as $v^{\pi^*_\mathcal{M}}_\mathcal{M}$. Of particular interest are value functions whose outputs obey fixed-point relationships, vπ = f(vπ) for some f : (S → R) → (S → R). The Bellman consistency equation for a policy π is BπM(x) := Aπ(r + γPx); it uniquely identifies the vector of expected returns for π, since vπM is the only vector for which vπM = BπM(vπM) holds. Finally, for any state s, the probability of being in the state s′ after t time steps when following policy π is [(AπP)t](s, s′). Furthermore, $\sum_{t=0}^\infty (\gamma A^\pi P)^t = (I - \gamma A^\pi P)^{-1}$. We refer to (I − γAπP)−1 as the discounted visitation of π.
Datasets. We next introduce basic concepts that are helpful for framing the problem of fixed-dataset policy optimization. We define a dataset of d transitions D := {〈s, a, r, s′〉}d, and denote the space of all datasets as D. In this work, we specifically consider datasets sampled from a data distribution Φ : Dist(Z); for example, the distribution of state-actions reached by following some stationary policy. We use D ∼ Φd to denote constructing a dataset of d tuples 〈s, a, r, s′〉, by first sampling each 〈s, a〉 ∼ Φ, and then sampling r and s′ i.i.d. from the environment reward function and transition function respectively, i.e. each r ∼ R(·|〈s, a〉) and s′ ∼ P(·|〈s, a〉).5 We sometimes index D using function notation, using D(s, a) to denote the multiset of all 〈r, s′〉 such that 〈s, a, r, s′〉 ∈ D. We use n̈D ∈ R|Z| to denote the vector of counts, that is, n̈D(〈s, a〉) := |D(s, a)|. We sometimes use state-wise versions of these vectors, which we denote with ṅD. It is further useful to consider the maximum-likelihood reward and transition functions, computed by averaging all rewards and transitions observed in the dataset for each state-action. To this end, we define the empirical reward vector $r_D(\langle s,a\rangle) := \frac{\sum_{r,s' \in D(\langle s,a\rangle)} r}{|D(\langle s,a\rangle)|}$ and the empirical transition matrix $P_D(s'|\langle s,a\rangle) := \frac{\sum_{r,\dot s' \in D(\langle s,a\rangle)} \mathbb{I}(\dot s' = s')}{|D(\langle s,a\rangle)|}$ at all state-actions for which n̈D(〈s, a〉) > 0. Where n̈D(〈s, a〉) = 0, there is no clear way to define the maximum-likelihood estimates of reward and transition, so we do not specify them. All our results hold no matter how these values are chosen, so long as rD ∈ [0, 1/(1−γ)] and PD is stochastic. The empirical policy of a dataset D is defined as $\hat\pi_D(a|s) := \frac{|D(\langle s,a\rangle)|}{|D(\langle s,\cdot\rangle)|}$ except where n̈D(〈s, a〉) = 0, where it can similarly be any valid action distribution. The empirical visitation distribution of a dataset D is computed in the same way as the visitation distribution, but with PD replacing P, i.e. (I − γAπPD)−1.

5Note that this is in some sense a simplifying assumption. In practice, datasets will typically be collected using a trajectory from a non-stationary policy, rather than i.i.d. sampling from the stationary distribution of a stationary policy. This greatly complicates the analysis, so we do not consider that setting in this work.
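As a concrete reference (our own sketch; the function and variable names are not from the paper), the maximum-likelihood quantities above can be computed from a list of transitions as follows, using an arbitrary valid completion (self-loop transition, uniform policy) at unvisited state-actions:

```python
import numpy as np

def empirical_mdp(transitions, n_states, n_actions):
    """Compute r_D, P_D, the empirical policy, and visit counts from
    a list of (s, a, r, s_next) tuples."""
    counts = np.zeros((n_states, n_actions))
    r_sum = np.zeros((n_states, n_actions))
    p_sum = np.zeros((n_states, n_actions, n_states))
    for s, a, r, s_next in transitions:
        counts[s, a] += 1
        r_sum[s, a] += r
        p_sum[s, a, s_next] += 1
    safe = np.maximum(counts, 1)                 # avoid division by zero
    r_D = r_sum / safe
    P_D = p_sum / safe[:, :, None]
    for s in range(n_states):
        for a in range(n_actions):
            if counts[s, a] == 0:
                P_D[s, a, s] = 1.0               # arbitrary valid choice: self-loop
    state_counts = counts.sum(axis=1, keepdims=True)
    pi_hat = np.where(state_counts > 0, counts / np.maximum(state_counts, 1),
                      1.0 / n_actions)
    return r_D, P_D, pi_hat, counts
```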
Problem setting. The primary focus of this work is on the properties of fixed-dataset policy optimization (FDPO) algorithms. These algorithms take the form of a function O : D → Π, which maps from a dataset to a policy.6 Note that in this work, we consider D ∼ Φd, so the dataset is a random variable, and therefore O(D) is also a random variable. The goal of any FDPO algorithm is to output a policy with minimum suboptimality, i.e. maximum return. Suboptimality is a random variable dependent on D, computed by taking the difference between the expected return of an optimal policy and the learned policy under the initial state distribution,
$$\mathrm{SUBOPT}(\mathcal{O}(D)) = \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M}\big] - \mathbb{E}_\rho\big[v^{\mathcal{O}(D)}_\mathcal{M}\big].$$
A related concept is the fixed-dataset policy evaluation algorithm, which is any function E : D → Π→ S → R, which uses a dataset to compute a value function. In this work, we focus our analysis on value-based FDPO algorithms, the subset of FDPO algorithms that utilize FDPE algorithms.7 A value-based FDPO algorithm with FDPE subroutine Esub is any algorithm with the following structure:
$$\mathcal{O}^{VB}_{\text{sub}}(D) := \arg\max_\pi\, \mathbb{E}_\rho[\mathcal{E}_{\text{sub}}(D, \pi)].$$
We define a fixed-point family of algorithms, sometimes referred to as just a family, in the following way. Any family called family is based on a specific fixed-point identity ffamily. We use the notation Efamily to denote any FDPE algorithm whose output vπD := Efamily(D,π) obeys vπD = ffamily(vπD). Finally, OVBfamily refers to any value-based FDPO algorithm whose subroutine is Efamily. We call the set of all algorithms that could implement Efamily the family of FDPE algorithms, and the set of all algorithms that could implement OVBfamily as the family of FDPO algorithms.
B UNCERTAINTY
Epistemic uncertainty measures an agent’s knowledge about the world, and is therefore a core concept in reinforcement learning, both for exploration and exploitation; it plays an important role in this work. Most past analyses in the literature implicitly compute some form of epistemic uncertainty to derive their bounds. An important analytical choice in this work is to cleanly separate the estimation of epistemic uncertainty from the problem of decision making. Our approach is to first define a notion of uncertainty as a function with certain properties, then assume that such a function exists, and provide the remainder of our technical results under such an assumption. We also describe several approaches to computing uncertainty.
Definition 5 (Uncertainty). A function $u_{D,\delta} : \mathcal{Z} \to \mathbb{R}$ is a state-action-wise Bellman uncertainty function if for a dataset $D \sim \Phi^d$, it obeys with probability at least $1-\delta$ for all policies $\pi$ and all values $v$:

$$u_{D,\delta} \ge |(r_\mathcal{M} + \gamma P_\mathcal{M} v) - (r_D + \gamma P_D v)|$$

A function $u^\pi_{D,\delta} : \mathcal{S} \to \mathbb{R}$ is a state-wise Bellman uncertainty function if for a dataset $D \sim \Phi^d$ and policy $\pi$, it obeys with probability at least $1-\delta$ for all policies $\pi$ and all values $v$:

$$u^\pi_{D,\delta} \ge |A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)|$$

A function $\mu^\pi_{D,\delta} : \mathcal{S} \to \mathbb{R}$ is a value uncertainty function if for a dataset $D \sim \Phi^d$ and policy $\pi$, it obeys with probability at least $1-\delta$ for all policies $\pi$ and all values $v$:

$$\mu^\pi_{D,\delta} \ge \sum_{t=0}^\infty (\gamma A^\pi P_D)^t\, |A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)|$$

6This formulation hides a dependency on ρ, the start-state distribution of the MDP. In general, ρ can be estimated from the dataset D, but this estimation introduces some error that affects the analysis. In this work, we assume for analytical simplicity that ρ is known a priori. Technically, this means that it must be provided as an input to O. We hide this dependency for notational clarity.

7Intuitively, these algorithms use a policy evaluation subroutine to convert a dataset into a value function, and return an optimal policy according to that value function. Importantly, this definition constrains only the objective of the approach, not its actual algorithmic implementation, i.e., it includes algorithms which never actually invoke the FDPE subroutine. For many online model-free reinforcement learning algorithms, including policy iteration, value iteration, and Q-learning, we can construct closely analogous value-based FDPO algorithms; see Appendix D.1. Furthermore, model-based techniques can be interpreted as using a model to implicitly define a value function, and then optimizing that value function; our results also apply to model-based approaches.
We refer to the quantity returned by an uncertainty function as uncertainty, e.g., value uncertainty refers to the quantity returned by a value uncertainty function.
Given a state-action-wise Bellman uncertainty function uD,δ, it is easy to verify that AπuD,δ is a state-wise Bellman uncertainty function. Similarly, given a state-wise Bellman uncertainty function uπD,δ, it is easy to verify that (I − γAπPD)−1 uπD,δ is a value uncertainty function. Uncertainty functions which can be constructed in this way are called decomposable.

Definition 6 (Decomposability). A state-wise Bellman uncertainty function uπD,δ is state-action-wise decomposable if there exists a state-action-wise Bellman uncertainty function uD,δ such that uπD,δ = AπuD,δ. A value uncertainty function µ is state-wise decomposable if there exists a state-wise Bellman uncertainty function uπD,δ such that µ = (I − γAπPD)−1 uπD,δ, and further, it is state-action-wise decomposable if that uπD,δ is itself state-action-wise decomposable.
Our definition of Bellman uncertainty captures the intuitive notion that the uncertainty at each state captures how well an approximate Bellman update matches the true environment. Value uncertainty represents the accumulation of these errors over all future timesteps.
How do these definitions correspond to our intuitions about uncertainty? An application of the environment’s true Bellman update can be viewed as updating the value of each state to reflect information about its future. However, no algorithm in the FDPO setting can apply such an update, because the true environment dynamics are unknown. We may use the dataset to estimate what such an update would look like, but since the limited information in the dataset may not fully specify the properties of the environment, this update will be slightly wrong. It is intuitive to say that the uncertainty at each state corresponds to how well the approximate update matches the truth. Our definition of Bellman uncertainty captures precisely this notion. Further, value uncertainty represents the accumulation of these errors over all future timesteps.
How can we algorithmically implement uncertainty functions? In other words, how can we compute a function with the properties required for Definition 5, using only the information in the dataset? All forms of uncertainty are upper-bounds to a quantity, and a tighter upper-bound means that other results down the line which leverage this quantity will be improved. Therefore, it is worth considering this question carefully.
A trivial approach to computing any uncertainty function is simply to return $\frac{1}{1-\gamma}$ for all s and 〈s, a〉. But although this is technically a valid uncertainty function, it is not very useful, because it does not concentrate with data and does not distinguish between certain and uncertain states. It is very loose and so leads to poor guarantees.
In tabular environments, one way to implement a Bellman uncertainty function is to use a concentration inequality. Depending on which concentration inequality is used, many Bellman uncertainty functions are possible. These approaches lead to Bellman uncertainty which is lower at states with more data, typically in proportion to the square root of the count. To illustrate how to do this, we show in Appendix B.1 how a basic application of Hoeffding’s inequality can be used to derive a state-action-wise Bellman uncertainty. In Appendix B.2, we show an alternative application of Hoeffding’s which results in a state-wise Bellman uncertainty, which is a tighter bound on error. In Appendix B.3, we discuss other techniques which may be useful in tightening further.
When the value function is represented by a neural network, it is not currently known how to implement a Bellman uncertainty function. When an empirical Bellman update is applied to a neural
network, the change in value of any given state is impacted by generalization from other states. Therefore, the counts are not meaningful, and concentration inequalities are not applicable. In the neural network literature, many “uncertainty estimation techniques” have been proposed, which capture something analogous to an intuitive notion of uncertainty; however, none are principled enough to be useful in computing Bellman uncertainty.
B.1 STATE-ACTION-WISE BOUND
We seek to construct an $u^\pi_{D,\delta}$ such that $|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le u^\pi_{D,\delta}$ with probability at least $1-\delta$. Firstly, let's consider the simplest possible bound. $v$ is bounded in $[0, \frac{1}{1-\gamma}]$, so both $A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v)$ and $A^\pi(r_D + \gamma P_D v)$ must be as well. Thus, their difference is also bounded:

$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}$$

Next, consider that for any $\langle s,a\rangle$, the expression $r_D(\langle s,a\rangle) + \gamma P_D(\langle s,a\rangle)v$ can be equivalently expressed as a mean of random variables,

$$r_D(\langle s,a\rangle) + \gamma P_D(\langle s,a\rangle)v = \frac{1}{\ddot n_D(\langle s,a\rangle)} \sum_{r,s' \in D(\langle s,a\rangle)} r + \gamma v(s'),$$

each with expected value

$$\mathbb{E}_{r,s' \in D(\langle s,a\rangle)}[r + \gamma v(s')] = \mathbb{E}_{r \sim R(\cdot|\langle s,a\rangle),\, s' \sim P(\cdot|\langle s,a\rangle)}[r + \gamma v(s')] = [r_\mathcal{M} + \gamma P_\mathcal{M} v](\langle s,a\rangle).$$

Note also that each of these random variables is bounded $[0, \frac{1}{1-\gamma}]$. Thus, Hoeffding's inequality tells us that this mean of bounded random variables must be close to their expectation with high probability. By invoking Hoeffding's at each of the $|\mathcal{S}\times\mathcal{A}|$ state-actions, and taking a union bound, we see that with probability at least $1-\delta$,

$$|(r_\mathcal{M} + \gamma P_\mathcal{M} v) - (r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S}\times\mathcal{A}|}{\delta}\, \ddot n_D^{-1}}$$

We can left-multiply by $A^\pi$ and rearrange to get:

$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \left(\frac{1}{1-\gamma}\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S}\times\mathcal{A}|}{\delta}}\right) A^\pi \ddot n_D^{-1/2}$$

Finally, we simply intersect this bound with the $\frac{1}{1-\gamma}$ bound from earlier. Thus, we see that with probability at least $1-\delta$,

$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}\cdot\min\left(\left(\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S}\times\mathcal{A}|}{\delta}}\right) A^\pi \ddot n_D^{-1/2},\; 1\right)$$
B.2 STATE-WISE BOUND
This bound is similar to the previous, but uses Hoeffding’s to bound the value at each state all at once, rather than bounding the value at each state-action.
Choose a collection of possible state-local policies Πlocal ⊆ Dist(A). Each state-local policy is a member of the action simplex.
For any $s \in \mathcal{S}$ and $\pi \in \Pi_{\text{local}}$, the expression $[A^\pi(r_D + \gamma P_D v)](s)$ can be equivalently expressed as a mean of random variables,

$$[A^\pi(r_D + \gamma P_D v)](s) = \frac{1}{\dot n_D(s)} \sum_{a,r,s' \in D(s)} \frac{\pi(a|s)}{\hat\pi_D(a|s)}(r + \gamma v(s')),$$

each with expected value

$$\mathbb{E}_{a,r,s' \in D(s)}\left[\frac{\pi(a|s)}{\hat\pi_D(a|s)}(r + \gamma v(s'))\right] = \mathbb{E}_{a \sim \pi(\cdot|s),\, r \sim R(\cdot|\langle s,a\rangle),\, s' \sim P(\cdot|\langle s,a\rangle)}[r + \gamma v(s')] = [A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v)](s).$$

Note also that each of these random variables is bounded $[0, \frac{1}{1-\gamma}\frac{\pi(a|s)}{\hat\pi_D(a|s)}]$. Thus, Hoeffding's inequality tells us that this sum of bounded random variables must be close to its expectation with high probability. By invoking Hoeffding's at each of the $|\mathcal{S}|$ states and $|\Pi_{\text{local}}|$ local policies, and taking a union bound, we see that with probability at least $1-\delta$,

$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S}\times\Pi_{\text{local}}|}{\delta}\,(A^\pi)^{\circ 2}\,\ddot n_D^{-1}}$$

where the term $(A^\pi)^{\circ 2}$ refers to the elementwise square. Finally, we once again intersect with $\frac{1}{1-\gamma}$, yielding that with probability at least $1-\delta$,

$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}\cdot\min\left(\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S}\times\Pi_{\text{local}}|}{\delta}\,(A^\pi)^{\circ 2}\,\ddot n_D^{-1}},\; 1\right)$$

Comparing this to the Bellman uncertainty function in Appendix B.1, we see two differences. Firstly, we have replaced a factor of $|\mathcal{A}|$ with $|\Pi_{\text{local}}|$, typically loosening the bound somewhat (depending on the choice of considered local policies). Secondly, $A^\pi$ has now moved inside of the square root; since the square root is concave, Jensen's inequality says that

$$\sqrt{(A^\pi)^{\circ 2}\,\ddot n_D^{-1}} \le \sqrt{(A^\pi)^{\circ 2}}\sqrt{\ddot n_D^{-1}} = A^\pi \ddot n_D^{-1/2}$$

and so this represents a tightening of the bound.

When $\Pi_{\text{local}}$ is the set of deterministic policies, this bound is equivalent to that of Appendix B.1. This can easily be seen by noting that for a deterministic policy, all elements of $(A^\pi)^{\circ 2}$ are either 1 or 0, and so

$$\sqrt{(A^\pi)^{\circ 2}\,\ddot n_D^{-1}} = A^\pi \ddot n_D^{-1/2}$$

and also that the size of the set of deterministic policies is exactly $|\mathcal{A}|$. An important property of this bound is that it shows that stochastic policies can often be evaluated with lower error than deterministic policies. We prove this by example. Consider an MDP with a single state $s$ and two actions $a_0, a_1$, and a dataset with $\ddot n_D(\langle s,a_0\rangle) = \ddot n_D(\langle s,a_1\rangle) = 2$. We can parameterize the policy by a single number $\xi \in [0,1]$ by setting $\pi(a_0|s) = \xi$, $\pi(a_1|s) = 1-\xi$. The size of this bound will be proportional to $\sqrt{\frac{\xi^2}{2} + \frac{(1-\xi)^2}{2}}$, and setting the derivative equal to zero, we see that the minimum is $\xi = \frac{1}{2}$. (Of course, we would need to increase the size of our local policy set to include this, in order to be able to actually select it, and doing so will increase the overall bound; this example only shows that for a given policy set, the selected policy may in general be stochastic.)
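Making the derivative computation in this example explicit:

$$\frac{d}{d\xi}\left(\frac{\xi^2}{2} + \frac{(1-\xi)^2}{2}\right) = \xi - (1-\xi) = 2\xi - 1 = 0 \quad\Longrightarrow\quad \xi = \frac{1}{2},$$

which is also the minimizer of the square root, since the square root is monotone.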
Finding the optimum for larger local policy sets is non-trivial, so we leave a full treatment of algorithms which leverage this bound for future work.
B.3 OTHER BOUNDS
There are a few other paths by which this bound can be made tighter still. The above bounds take an extra factor of $\frac{1}{1-\gamma}$ because we bound the overall return, rather than simply bounding the reward and transition functions. This was done because a bound on the transition function would add a cost of $O(\sqrt{S})$. However, this can be mitigated by intersecting the above confidence interval with a Good-Turing interval, as proposed in Taleghan et al. (2015). Doing so will cause the bound to concentrate much more quickly in MDPs where the transition function is relatively deterministic. We expect this to be the case for most practical MDPs.
Similarly, empirical Bernstein confidence intervals can be used in place of Hoeffding’s, to increase the rate of concentration for low-variance rewards and transitions (Maurer & Pontil, 2009), leading to improved performance in MDPs where these are common.
Finally, we may be able to apply a concentration inequality in a more advanced fashion to compute a value uncertainty function which is not statewise decomposable: we bound some useful notion of value error directly, rather than computing the Bellman uncertainty function and taking the visitation-weighted sum. This would result in an overall tighter bound on value uncertainty by hedging over data across multiple timesteps. However, in doing so, we would sacrifice the monotonic improvement property needed for convergence of algorithms like policy iteration. This idea has a parallel in the robust MDP literature. The bounds in Appendix B.1 can be seen as constructing an sa-rectangular robust MDP, whereas Appendix B.2 is similar to constructing an s-rectangular robust MDP (Wiesemann et al., 2013). More recently, approaches have been proposed which go beyond s-rectangularity (Goyal & Grand-Clement, 2018), and such approaches likely have natural parallels in implementing value uncertainty functions.
C PROOFS
C.1 PROOF OF OVER/UNDER DECOMPOSITION
Denote $\pi^*_D := \mathcal{O}^{VB}(D)$. Starting from the definition of suboptimality, we see

$$\begin{aligned} \mathrm{SUBOPT}(\mathcal{O}^{VB}(D)) &= \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M}\big] - \mathbb{E}_\rho\big[v^{\pi^*_D}_\mathcal{M}\big] \\ &= \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} + (-v^\pi_D + v^\pi_D) + (-v^{\pi^*_D}_D + v^{\pi^*_D}_D) - v^{\pi^*_D}_\mathcal{M}\big] && \text{(valid for any } \pi\text{)} \\ &\le \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_D\big] + \mathbb{E}_\rho\big[v^{\pi^*_D}_D - v^{\pi^*_D}_\mathcal{M}\big] && \text{(using } \mathbb{E}_\rho[v^\pi_D - v^{\pi^*_D}_D] \le 0\text{)} \end{aligned}$$

Since the above holds for all $\pi$,

$$\begin{aligned} \mathrm{SUBOPT}(\mathcal{O}^{VB}(D)) &\le \inf_\pi \Big( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_D\big] \Big) + \mathbb{E}_\rho\big[v^{\pi^*_D}_D - v^{\pi^*_D}_\mathcal{M}\big] \\ &\le \inf_\pi \Big( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_D\big] \Big) + \sup_\pi \Big( \mathbb{E}_\rho\big[v^\pi_D - v^\pi_\mathcal{M}\big] \Big) && \text{(using } \pi^*_D \in \Pi\text{)} \\ &= \inf_\pi \Big( \mathbb{E}_\rho\big[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}\big] + \mathbb{E}_\rho\big[v^\pi_\mathcal{M} - v^\pi_D\big] \Big) + \sup_\pi \Big( \mathbb{E}_\rho\big[v^\pi_D - v^\pi_\mathcal{M}\big] \Big) \end{aligned}$$
C.2 TIGHTNESS OF OVER/UNDER DECOMPOSITION
We show that the bound given in Lemma 1 is tight via a simple example.
Proof. Consider a bandit-structured MDP with a single state and two actions, A and B, with rewards of 0 and 1 respectively, which both lead to terminal states.
First, consider the left-hand side. If an FDPE subroutine estimates the value of both arms to be 1, then the policy which always selects arm A is an optimal policy of the corresponding FDPO algorithm. In this case, the suboptimality is clearly equal to 1. This is clearly the worst-case suboptimality, since it is the largest possible suboptimality in the environment.
On the right-hand side, note that term (A) is 0 when π is the policy that always picks B, while term (B) is 1 when π is the policy that always picks A. Thus, the left-hand and right-hand sides are equal, and the bound is tight.
C.3 RESIDUAL VISITATION LEMMA
We prove a basic lemma showing that the error of any value function is controlled by its one-step Bellman residual, summed over its visitation distribution. Though this result is well-known, it is not clearly stated elsewhere in the literature, so we prove it here for clarity.

Lemma 2. For any MDP $\xi$ and policy $\pi$, let $v^\pi_\xi$ be defined as the unique value vector satisfying the Bellman fixed-point equation $v^\pi_\xi = A^\pi(r_\xi + \gamma P_\xi v^\pi_\xi)$, and let $v$ be any other value vector. We have

$$\begin{aligned} v^\pi_\xi - v &= (I - \gamma A^\pi P_\xi)^{-1}(A^\pi(r_\xi + \gamma P_\xi v) - v) && (1) \\ v - v^\pi_\xi &= (I - \gamma A^\pi P_\xi)^{-1}(v - A^\pi(r_\xi + \gamma P_\xi v)) && (2) \\ |v^\pi_\xi - v| &\le (I - \gamma A^\pi P_\xi)^{-1}|A^\pi(r_\xi + \gamma P_\xi v) - v| && (3) \end{aligned}$$

Proof.

$$\begin{aligned} A^\pi(r_\xi + \gamma P_\xi v) - v &= A^\pi(r_\xi + \gamma P_\xi v) - v^\pi_\xi + v^\pi_\xi - v \\ &= A^\pi(r_\xi + \gamma P_\xi v) - A^\pi(r_\xi + \gamma P_\xi v^\pi_\xi) + v^\pi_\xi - v \\ &= \gamma A^\pi P_\xi(v - v^\pi_\xi) + (v^\pi_\xi - v) \\ &= (v^\pi_\xi - v) - \gamma A^\pi P_\xi(v^\pi_\xi - v) \\ &= (I - \gamma A^\pi P_\xi)(v^\pi_\xi - v) \end{aligned}$$

Thus, we see $(I - \gamma A^\pi P_\xi)^{-1}(A^\pi(r_\xi + \gamma P_\xi v) - v) = v^\pi_\xi - v$, which is (1). An identical argument starting with $v - A^\pi(r_\xi + \gamma P_\xi v)$ yields (2); and since $(I - \gamma A^\pi P_\xi)^{-1}$ is entrywise nonnegative, taking element-wise absolute values in (1) yields (3), leading to the desired result.
C.4 NAÏVE FDPE ERROR BOUND
We show how the error of naı̈ve FDPE algorithms is bounded by the value uncertainty of stateactions visited by the policy under evaluation. Next, in Section 4, we use this bound to derive a suboptimality guarantee for naı̈ve FDPO. Lemma 3 (Naı̈ve FDPE error bound). Consider any naı̈ve fixed-dataset policy evaluation algorithm Enaı̈ve. For any policy π and dataset D, denote vπD := Enaı̈ve(D,π). Let µπD,δ be any value uncertainty function. The following component-wise bound holds with probability at least 1− δ:
|vπM − vπD| ≤ µπD,δ
Proof. Notice that the naïve fixed-point function is equivalent to the Bellman fixed-point equation for a specific MDP: the empirical MDP defined by $\langle \mathcal{S}, \mathcal{A}, r_D, P_D, \gamma, \rho\rangle$. Thus, invoking Lemma 2, for any values $v$ we have

$$|v^\pi_D - v| \le (I - \gamma A^\pi P_D)^{-1}|A^\pi(r_D + \gamma P_D v) - v|.$$

Since $v^\pi_\mathcal{M}$ is a value vector, this immediately implies that

$$|v^\pi_D - v^\pi_\mathcal{M}| \le (I - \gamma A^\pi P_D)^{-1}|A^\pi(r_D + \gamma P_D v^\pi_\mathcal{M}) - v^\pi_\mathcal{M}|.$$

Since $v^\pi_\mathcal{M}$ is the solution to the Bellman consistency fixed-point,

$$|v^\pi_D - v^\pi_\mathcal{M}| \le (I - \gamma A^\pi P_D)^{-1}|A^\pi(r_D + \gamma P_D v^\pi_\mathcal{M}) - A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v^\pi_\mathcal{M})|.$$

Using the definition of a value uncertainty function $\mu^\pi_{D,\delta}$, we arrive at

$$|v^\pi_D - v^\pi_\mathcal{M}| \le \mu^\pi_{D,\delta}$$

completing the proof.
Thus, reducing value uncertainty improves our guarantees on evaluation error. For any fixed policy, value uncertainty can be reduced by reducing the Bellman uncertainty on states visited by that policy. In the tabular setting this means observing more interactions from the state-actions that that policy visits frequently. Conversely, for any fixed dataset, we will have a certain Bellman uncertainty in each state, and policies that mostly visit low-Bellman-uncertainty states can be evaluated with lower error.8
8In the function approximation setting, we do not necessarily need to observe an interaction with a particular state-action to reduce our Bellman uncertainty on it. This is because observing other state-actions may allow us to reduce Bellman uncertainty through generalization. Similarly, the most-certain policy for a fixed dataset may not be the policy for which we have the most data, but rather, the policy which our dataset informs us about the most.
Our bound differs from prior work (Jiang, 2019a; Ghavamzadeh et al., 2016) in that it is significantly more fine-grained. We provide a component-wise bound on error, whereas previous results bound the $\ell_\infty$ norm. Furthermore, our bounds are sensitive to the Bellman uncertainty in each individual reward and transition, rather than relying only on the most-uncertain one. As a result, our bound does not require all states to have the same number of samples, and is non-vacuous even in the case where some state-actions have no data.
Our bound can also be viewed as an extension of work on approximate dynamic programming. In that setting, the literature contains fine-grained results on the accumulation of local errors (Munos, 2007). However, those results are typically understood as applying to errors induced by approximation via some limited function class. Our bound can be seen as an application of those ideas to the case where errors are induced by limited observations.
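To make the empirical-MDP observation at the heart of this proof concrete, the following sketch builds the maximum-likelihood estimates $r_D, P_D$ from a sampled dataset and performs naı̈ve FDPE by solving the Bellman fixed point of $\langle S, A, r_D, P_D, \gamma, \rho\rangle$; the toy environment, sampling scheme, and defaults for unvisited state-actions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 3, 2, 0.9
true_P = rng.dirichlet(np.ones(S), size=S * A)   # toy "true" environment
true_r = rng.uniform(0, 1, S * A)

# Sample a dataset of (state-action, reward, next-state) transitions.
D = []
for _ in range(2000):
    sa = rng.integers(S * A)
    D.append((sa, true_r[sa], rng.choice(S, p=true_P[sa])))

# Maximum-likelihood empirical MDP; unvisited pairs get illustrative defaults.
n = np.zeros(S * A)
r_D = np.zeros(S * A)
P_D = np.zeros((S * A, S))
for sa, rew, s2 in D:
    n[sa] += 1
    r_D[sa] += rew
    P_D[sa, s2] += 1
visited = n > 0
r_D[visited] /= n[visited]
P_D[visited] /= n[visited][:, None]
P_D[~visited] = 1.0 / S

# Naive FDPE: Bellman evaluation inside the empirical MDP.
pi = np.full((S, A), 1.0 / A)
Api = np.zeros((S, S * A))
for s in range(S):
    Api[s, s * A:(s + 1) * A] = pi[s]
v_naive = np.linalg.solve(np.eye(S) - gamma * Api @ P_D, Api @ r_D)
v_true = np.linalg.solve(np.eye(S) - gamma * Api @ true_P, Api @ true_r)
print("per-state evaluation error:", np.abs(v_true - v_naive))
```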
C.5 RELATIVE VALUE UNCERTAINTY
The key to the construction of proximal pessimistic algorithms is the relationship between the value uncertainties of any two policies $\pi, \pi'$, for any state-action-wise decomposable value uncertainty function.
Lemma 4 (Relative value uncertainty). For any two policies $\pi, \pi'$, and any state-action-wise decomposable value uncertainty function $\mu$, we have
$$\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} = (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right)$$
Proof. Firstly, note that since the value uncertainty function is state-action-wise decomposable, we can express it in a Bellman-like form, $\mu^\pi_{D,\delta} = (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta}$, for some state-wise Bellman uncertainty $u^\pi_{D,\delta}$. Further, $u^\pi_{D,\delta}$ is itself state-action-wise decomposable, so it can be written as $u^\pi_{D,\delta} = A^\pi u_{D,\delta}$.
We can use this to derive the following relationship:
$$\begin{aligned}
\mu^\pi_{D,\delta} &= (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta} \\
\mu^\pi_{D,\delta} - \gamma A^\pi P_D \mu^\pi_{D,\delta} &= u^\pi_{D,\delta} \\
\mu^\pi_{D,\delta} &= u^\pi_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta}
\end{aligned}$$
Next, we bound the difference between the value uncertainty of π and π′.
$$\begin{aligned}
\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} &= (u^\pi_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta}) - (u^{\pi'}_{D,\delta} + \gamma A^{\pi'} P_D \mu^{\pi'}_{D,\delta}) \\
&= (u^\pi_{D,\delta} - u^{\pi'}_{D,\delta}) + \gamma A^\pi P_D \mu^\pi_{D,\delta} - \gamma A^{\pi'} P_D \mu^{\pi'}_{D,\delta} \\
&= (A^\pi - A^{\pi'}) u_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta} - \gamma (A^{\pi'} - A^\pi + A^\pi) P_D \mu^{\pi'}_{D,\delta} \\
&= \gamma A^\pi P_D \left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) + (A^\pi - A^{\pi'}) u_{D,\delta} + \gamma (A^\pi - A^{\pi'}) P_D \mu^{\pi'}_{D,\delta} \\
&= \gamma A^\pi P_D \left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) + (A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})
\end{aligned}$$
Rearranging, $(I - \gamma A^\pi P_D)\left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) = (A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})$. Left-multiplying by $(I - \gamma A^\pi P_D)^{-1}$, we arrive at the desired result.
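Lemma 4 is likewise an exact identity, so it can be checked directly; the sketch below does so for two random policies and an arbitrary state-action-wise uncertainty vector $u_{D,\delta}$ (all quantities here are synthetic stand-ins).

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, gamma = 4, 3, 0.9
P_D = rng.dirichlet(np.ones(S), size=S * A)
u = rng.uniform(0, 1 / (1 - gamma), S * A)   # arbitrary u_{D,delta}

def act(pi):
    M = np.zeros((S, S * A))
    for s in range(S):
        M[s, s * A:(s + 1) * A] = pi[s]
    return M

def mu(Api):  # state-action-wise decomposable value uncertainty
    return np.linalg.solve(np.eye(S) - gamma * Api @ P_D, Api @ u)

A1 = act(rng.dirichlet(np.ones(A), size=S))   # policy pi
A2 = act(rng.dirichlet(np.ones(A), size=S))   # policy pi'
lhs = mu(A1) - mu(A2)
rhs = np.linalg.solve(np.eye(S) - gamma * A1 @ P_D,
                      (A1 - A2) @ (u + gamma * P_D @ mu(A2)))
assert np.allclose(lhs, rhs)   # Lemma 4 holds exactly
```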
C.6 NAÏVE FDPE RELATIVE ERROR BOUND
Lemma 5 (Naı̈ve FDPE relative error bound). Consider any naı̈ve fixed-dataset policy evaluation algorithm $\mathcal{E}_{\text{naı̈ve}}$. For any policy $\pi$ and dataset $D$, denote $v^\pi_D := \mathcal{E}_{\text{naı̈ve}}(D,\pi)$, and let $\mu^\pi_{D,\delta}$ be any state-action-wise decomposable value uncertainty function. Then, for any other policy $\pi'$, the following bound holds with probability at least $1-\delta$:
$$|v^\pi_D - v^\pi_\mathcal{M}| \le \mu^\pi_{D,\delta} \le \mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\mathrm{TV}_S(\pi, \pi')$$
Proof. The goal of this section is to construct an error bound which can be optimized over $\pi$ without needing to compute any uncertainties. To do this, we must replace the value uncertainty $\mu^\pi_{D,\delta}$ with a looser upper bound.
First, consider a state-action-wise decomposable value uncertainty function $\mu^\pi_{D,\delta}$. We have $\mu^\pi_{D,\delta} = (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta}$ for some $u^\pi_{D,\delta}$, where $u^\pi_{D,\delta} = A^\pi u_{D,\delta}$ for some $u_{D,\delta}$.
Note that the state-action-wise Bellman uncertainty can be trivially bounded as $u_{D,\delta} \le \frac{1}{1-\gamma}\ddot{\mathbf{1}}$. Also, since the rows of $(I - \gamma A^\pi P_D)^{-1}$ sum to $\frac{1}{1-\gamma}$, any state-action-wise decomposable value uncertainty can be trivially bounded as $\mu^\pi_{D,\delta} \le \frac{1}{(1-\gamma)^2}\dot{\mathbf{1}}$.
We now invoke Lemma 4. We then substitute the above bounds into the second term, after ensuring that all coefficients are positive.
$$\begin{aligned}
\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} &= (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right) \\
&\le (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+ (u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right) \\
&\le (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+ \left(\frac{1}{1-\gamma}\ddot{\mathbf{1}} + \gamma P_D \frac{1}{(1-\gamma)^2}\dot{\mathbf{1}}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+ \left(\frac{1}{1-\gamma}\ddot{\mathbf{1}} + \frac{\gamma}{(1-\gamma)^2}\ddot{\mathbf{1}}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)|A^\pi - A^{\pi'}|_+\ddot{\mathbf{1}} \\
&= (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\mathrm{TV}_S(\pi, \pi')
\end{aligned}$$
The third-to-last step follows from the fact that $P_D$ is stochastic. The final step follows from the fact that the positive and negative components of the state-wise difference between policies must be symmetric, so $|A^\pi - A^{\pi'}|_+\ddot{\mathbf{1}}$ is precisely equivalent to the state-wise total variation distance $\mathrm{TV}_S(\pi, \pi')$. Thus, we have
$$\mu^\pi_{D,\delta} \le \mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\mathrm{TV}_S(\pi, \pi').$$
Finally, invoking Lemma 3, we arrive at the desired result.
C.7 SUBOPTIMALITY OF UNCERTAINTY-AWARE PESSIMISTIC FDPO ALGORITHMS
Let $v^\pi_D := \mathcal{E}_{\text{ua}}(D,\pi)$. From the definition of the UA family, we have the fixed-point property $v^\pi_D = A^\pi(r_D + \gamma P_D v^\pi_D) - \alpha u^\pi_{D,\delta}$, and the standard geometric-series rearrangement yields $v^\pi_D = (I - \gamma A^\pi P_D)^{-1}(A^\pi r_D - \alpha u^\pi_{D,\delta})$. From here, we see:
$$\begin{aligned}
v^\pi_D &= (I - \gamma A^\pi P_D)^{-1}(A^\pi r_D - \alpha u^\pi_{D,\delta}) \\
&= (I - \gamma A^\pi P_D)^{-1} A^\pi r_D - (I - \gamma A^\pi P_D)^{-1}\alpha u^\pi_{D,\delta} \\
&= \mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha \mu^\pi_{D,\delta}
\end{aligned}$$
We now use this to bound the overestimation and underestimation error of $\mathcal{E}_{\text{ua}}(D,\pi)$ by invoking Lemma 3, which holds with probability at least $1-\delta$. First, for underestimation, we see:
$$\begin{aligned}
v^\pi_\mathcal{M} - v^\pi_D &= v^\pi_\mathcal{M} - \left(\mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\mu^\pi_{D,\delta}\right) \\
&= (v^\pi_\mathcal{M} - \mathcal{E}_{\text{naı̈ve}}(D,\pi)) + \alpha\mu^\pi_{D,\delta} \\
&\le \mu^\pi_{D,\delta} + \alpha\mu^\pi_{D,\delta} = (1+\alpha)\mu^\pi_{D,\delta}
\end{aligned}$$
and thus $v^\pi_\mathcal{M} - v^\pi_D \le (1+\alpha)\mu^\pi_{D,\delta}$. Next, for overestimation, we see:
$$\begin{aligned}
v^\pi_D - v^\pi_\mathcal{M} &= \left(\mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\mu^\pi_{D,\delta}\right) - v^\pi_\mathcal{M} \\
&= (\mathcal{E}_{\text{naı̈ve}}(D,\pi) - v^\pi_\mathcal{M}) - \alpha\mu^\pi_{D,\delta} \\
&\le \mu^\pi_{D,\delta} - \alpha\mu^\pi_{D,\delta} = (1-\alpha)\mu^\pi_{D,\delta}
\end{aligned}$$
and thus $v^\pi_D - v^\pi_\mathcal{M} \le (1-\alpha)\mu^\pi_{D,\delta}$. Substituting these bounds into Lemma 1 gives the desired result.
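The key identity used above, $v^\pi_D = \mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\mu^\pi_{D,\delta}$, is easy to verify numerically; a minimal sketch with an arbitrary random empirical MDP and uncertainty vector follows.

```python
import numpy as np

rng = np.random.default_rng(3)
S, A, gamma, alpha = 4, 2, 0.9, 0.5
P_D = rng.dirichlet(np.ones(S), size=S * A)
r_D = rng.uniform(0, 1, S * A)
u = rng.uniform(0, 1, S * A)                 # state-action Bellman uncertainty
pi = rng.dirichlet(np.ones(A), size=S)
Api = np.zeros((S, S * A))
for s in range(S):
    Api[s, s * A:(s + 1) * A] = pi[s]

inv = np.linalg.inv(np.eye(S) - gamma * Api @ P_D)
v_ua = inv @ (Api @ r_D - alpha * Api @ u)   # fixed point of f_ua
v_naive = inv @ Api @ r_D                    # fixed point of f_naive
mu = inv @ Api @ u                           # induced value uncertainty
assert np.allclose(v_ua, v_naive - alpha * mu)
```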
C.8 SUBOPTIMALITY OF PROXIMAL PESSIMISTIC FDPO ALGORITHMS
Let $v^\pi_D := \mathcal{E}_{\text{proximal}}(D,\pi)$. From the definition of the proximal family, we have the fixed-point property
$$v^\pi_D = A^\pi(r_D + \gamma P_D v^\pi_D) - \alpha\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)$$
and the standard geometric-series rearrangement yields
$$v^\pi_D = (I - \gamma A^\pi P_D)^{-1}\left(A^\pi r_D - \alpha\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)\right)$$
From here, we see:
$$\begin{aligned}
v^\pi_D &= (I - \gamma A^\pi P_D)^{-1}\left(A^\pi r_D - \alpha\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1} A^\pi r_D - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right) \\
&= \mathcal{E}_{\text{naı̈ve}}(D,\pi) - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)
\end{aligned}$$
Next, we define a new family of FDPE algorithms,
$$\mathcal{E}_{\text{proximal-full}}(D,\pi) := \mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)\right).$$
We use Lemma 3, which holds with probability at least $1-\delta$, to bound the overestimation and underestimation. First, the underestimation:
$$\begin{aligned}
v^\pi_\mathcal{M} - \mathcal{E}_{\text{proximal-full}}(D,\pi) &= v^\pi_\mathcal{M} - \left(\mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)\right) \\
&= \left(v^\pi_\mathcal{M} - \mathcal{E}_{\text{naı̈ve}}(D,\pi)\right) + \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right) \\
&\le \mu^\pi_{D,\delta} + \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)
\end{aligned}$$
Next, we analogously bound the overestimation:
$$\begin{aligned}
\mathcal{E}_{\text{proximal-full}}(D,\pi) - v^\pi_\mathcal{M} &= \left(\mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)\right) - v^\pi_\mathcal{M} \\
&= \left(\mathcal{E}_{\text{naı̈ve}}(D,\pi) - v^\pi_\mathcal{M}\right) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right) \\
&\le \mu^\pi_{D,\delta} - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)
\end{aligned}$$
We can now invoke Lemma 1 to bound the suboptimality of any value-based FDPO algorithm which uses $\mathcal{E}_{\text{proximal-full}}$, which we denote $\mathcal{O}^{VB}_{\text{proximal-full}}$. Crucially, note that since $\alpha\mu^{\pi'}_{D,\delta}$ is not dependent on $\pi$, it can be removed from the infimum and supremum terms, and cancels. Substituting and rearranging, we see that with probability at least $1-\delta$,
$$\begin{aligned}
\text{SUBOPT}(\mathcal{O}^{VB}_{\text{proximal-full}}(D)) \le {}& \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho\left[\mu^\pi_{D,\delta} + \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right]\right) \\
&+ \sup_\pi\left(\mathbb{E}_\rho\left[\mu^\pi_{D,\delta} - \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right]\right)
\end{aligned}$$
Finally, we see that FDPO algorithms which use $\mathcal{E}_{\text{proximal-full}}$ as their subroutine will return the same policy as FDPO algorithms which use $\mathcal{E}_{\text{proximal}}$. First, we once again use the property that $\mu^{\pi'}_{D,\delta}$ is not dependent on $\pi$. Second, we note that since the total visitation of every policy sums to $\frac{1}{1-\gamma}$, we know $\mathbb{E}_\rho\left[(I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{1-\gamma}\dot{\mathbf{1}}\right)\right] = \frac{1}{(1-\gamma)^2}$ for all $\pi$, and thus this term is also not dependent on $\pi$.
$$\begin{aligned}
\arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal-full}}(D,\pi)] &= \arg\max_\pi \mathbb{E}_\rho\left[\mathcal{E}_{\text{naı̈ve}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)\right] \\
&= \arg\max_\pi \mathbb{E}_\rho\left[\mathcal{E}_{\text{naı̈ve}}(D,\pi) - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\frac{\mathrm{TV}_S(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right] \\
&= \arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal}}(D,\pi)]
\end{aligned}$$
Thus, the suboptimality of $\arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal}}(D,\pi)]$ must be equivalent to that of $\arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal-full}}(D,\pi)]$, leading to the desired result.
C.9 STATE-WISE PROXIMAL PESSIMISTIC ALGORITHMS
In the main body of the work, we focus on proximal pessimistic algorithms which are based on a state-conditional density model, since this approach is much more common in the literature. However, it is also possible to derive proximal pessimistic algorithms which use state-action densities, which have essentially the same properties. In this section we briefly provide the main results.
In this section, we use $d^\pi := (1-\gamma)(I - \gamma A^\pi P_D)^{-1}$ to indicate the state visitation distribution of any policy $\pi$. Definition 7. A state-action-density proximal pessimistic algorithm with pessimism hyperparameter $\alpha \in [0,1]$ is any algorithm in the family defined by the fixed-point function
$$f_{\text{sad-proximal}}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha\left(\frac{|d^\pi - d^{\hat\pi_D}|}{(1-\gamma)^2}\right)$$
Theorem 4 (State-action-density proximal pessimistic FDPO suboptimality bound). Consider any state-action-density proximal pessimistic value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{sad-proximal}}$. Let $\mu$ be any state-action-wise decomposable value uncertainty function, and $\alpha \in [0,1]$ be a pessimism hyperparameter. For any dataset $D$, the suboptimality of $\mathcal{O}^{VB}_{\text{sad-proximal}}$ is bounded with probability at least $1-\delta$ by
$$\text{SUBOPT}(\mathcal{O}_{\text{sad-proximal}}(D)) \le 2\mathbb{E}_\rho[\mu^{\hat\pi_D}_{D,\delta}] + \inf_\pi\Big(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + (1+\alpha)\cdot\mathbb{E}_\rho\Big[(I - \gamma A^\pi P_D)^{-1}\big(|d^\pi - d^{\hat\pi_D}|_+$$
1. What is the main contribution of the paper regarding fixed-dataset reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its ability to unify prior work?
3. What are the weaknesses of the paper, especially regarding comparisons between upper bounds and implementation issues?
4. How does the reviewer assess the clarity, organization, and notation usage in the paper?
5. Are there any suggestions for improving the paper's presentation or resolving some of the issues mentioned in the review?
Review
Summary:
This paper attempts to unify prior work on fixed-dataset (aka "batch" or "offline") reinforcement learning. Specifically, it emphasizes the importance of pessimism to account for faulty over-estimation from finite datasets. The paper shows that naive algorithms (with no pessimism) can recover the optimal policy with enough data, but that pessimistic algorithms do so more efficiently. The pessimistic algorithms are divided into "uncertainty-aware" and "proximal" algorithms, where the uncertainty-aware algorithms are shown to be more principled, but most prior work falls into the computationally easier proximal family of algorithms that is closer to imitation learning. These insights are proven both theoretically and with some small experiments.
Strengths:
A nice decomposition of suboptimality. The main workhorse of the paper is the decomposition provided in Lemma 1, which is novel and can provide some good intuition about the necessity of pessimism (although the intuition is only given in appendix G.3, which should definitely find its way into the main text). The Lemma cleanly and formally demonstrates why we may expect over-estimation to be more damaging than under-estimation.
A clear framework to examine prior work. The paper does well to capture the majority of recent work into a few broad families of algorithms: naive, proximal pessimistic, and uncertainty-aware pessimistic. The bounds derived from the main Lemma for each algorithm family provide evidence to prefer uncertainty-aware algorithms. This is supported by the tabular experiments.
The formal statements of Lemmas and Theorems seem to be correct and experimental methodology seems sound.
Weaknesses:
I am wary of the comparison of upper bounds done in the paper. Just because one algorithm has a lower upper bound does not prove superior performance. I agree that since all the proofs are derived from Lemma 1 and are very similar, the differences are indeed suggestive. However, the bound in Theorem 3 seems to be looser than the others. For example, when α = 0 it does not recover the bound for the naive algorithm as would be expected. A more measured tone and careful description of these comparisons is needed. Claims like "uncertainty-aware algorithms are strictly better than proximal algorithms" in the conclusion are not substantiated.
Lack of discussion of issues with implementation and function approximation. As the authors discuss in Appendix G.6 and Appendix F.2, and briefly in the main text, it is not clear how to implement the uncertainty-aware family of algorithms in a scalable way. I am not saying that this paper needs to resolve this issue (it is clearly hard), but this drawback needs to be made more clear in the main text of the paper, so as to not mislead the reader.
Notation is heavy and sometimes nonstandard. I understand that the nature of this paper will lead to a lot of notation, but I think the paper could be made more accessible if the authors go back through the paper and remove notation that may only be needed in the proofs and may be unnecessary to present the main results. For example, the several different notions of uncertainty functions might be useful in the appendix, but do not seem to all be necessary to present the main results. Similarly, the notion of decomposability is introduced and then largely forgotten for the rest of the paper. Some notation is nonstandard. For example: d is used for the number of datapoints (usually it would be dimension) and Φ is used as the data distribution (usually it would be a feature-generating function or feature matrix).
Abuse of the appendix. While I understand that the 8 page limit can be difficult, this paper especially abuses the appendix, often sending important parts of the discussion and intuition for the results into appendix G. The paper would be stronger with some editing of the notation and organization of the main text to make room for more of the needed discussion and intuition in the main body of the paper.
Recommendation:
I gave the paper a score of 7, and recommend acceptance. The paper provides a nice framing of prior work on fixed-dataset RL. While it leaves some things to be desired in terms of carefulness, scalability, and clarity, I think it provides a solid contribution that will be useful to researchers in the field.
If the authors are able to sufficiently improve the clarity of presentation as discussed in the weaknesses section, I could consider raising my score.
Questions for the authors:
It is natural to think that a practical proximal pessimistic algorithm would reduce the level of pessimism with the dataset size (so that it approaches the naive algorithm with infinite data). Do approaches like this resolve many of the issues that you bring up with proximal pessimistic algorithms (albeit by introducing another hyperparameter to tune)?
Additional feedback:
Typos:
The first sentence on page 4 is not grammatically correct.
In the statements of Lemma 1 and Theorem 1, π*_D is defined and never used.
In the statement of Theorem 1, u^π_{D,δ} is defined but then only μ^π_{D,δ} is used without being defined.
ICLR
Title
The Importance of Pessimism in Fixed-Dataset Policy Optimization
Abstract
We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. Our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. This analysis reveals that for naı̈ve approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. To avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. These theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four MinAtar environments.
1 INTRODUCTION
We consider fixed-dataset policy optimization (FDPO), in which a dataset of transitions from an environment is used to find a policy with high return.1 We compare FDPO algorithms by their worst-case performance, expressed as high-probability guarantees on the suboptimality of the learned policy. It is perhaps obvious that in order to maximize worst-case performance, a good FDPO algorithm should select a policy with high worst-case value. We call this the pessimism principle of exploitation, as it is analogous to the widely-known optimism principle (Lattimore & Szepesvári, 2020) of exploration.2
Our main contribution is a theoretical justification of the pessimism principle in FDPO, based on a bound that characterizes the suboptimality incurred by an FDPO algorithm. We further demonstrate how this bound may be used to derive principled algorithms. Note that the core novelty of our work is not the idea of pessimism, which is an intuitive concept that appears in a variety of contexts; rather, our contribution is a set of theoretical results rigorously explaining how pessimism is important in the specific setting of FDPO. An example conveying the intuition behind our results can be found in Appendix G.1.
We first analyze a family of non-pessimistic naı̈ve FDPO algorithms, which estimate the environment from the dataset via maximum likelihood and then apply standard dynamic programming techniques. We prove a bound which shows that the worst-case suboptimality of these algorithms is guaranteed to be small when the dataset contains enough data that we are certain about the value of every possible policy. This is caused by the outsized impact of value overestimation errors on suboptimality, sometimes called the optimizer’s curse (Smith & Winkler, 2006). It is a fundamental consequence of ignoring the disconnect between the true environment and the picture painted by our limited observations. Importantly, it is not reliant on errors introduced by function approximation.
1We use the term fixed-dataset policy optimization to emphasize the computational procedure; this setting has also been referred to as batch RL (Ernst et al., 2005; Lange et al., 2012) and more recently, offline RL (Levine et al., 2020). We emphasize that this is a well-studied setting, and we are simply choosing to refer to it by a more descriptive name.
2The optimism principle states that we should select a policy with high best-case value.
We contrast these findings with an analysis of pessimistic FDPO algorithms, which select a policy that maximizes some notion of worst-case expected return. We show that these algorithms do not require datasets which inform us about the value of every policy to achieve small suboptimality, due to the critical role that pessimism plays in preventing overestimation. Our analysis naturally leads to two families of principled pessimistic FDPO algorithms. We prove their improved suboptimality guarantees, and confirm our claims with experiments on a gridworld.
Finally, we extend one of our pessimistic algorithms to the deep learning setting. Recently, several deep-learning-based algorithms for fixed-dataset policy optimization have been proposed (Agarwal et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Laroche et al., 2019; Jaques et al., 2019; Kidambi et al., 2020; Yu et al., 2020; Wu et al., 2019; Wang et al., 2020; Kumar et al., 2020; Liu et al., 2020). Our work is complementary to these results, as our contributions are conceptual, rather than algorithmic. Our primary goal is to theoretically unify existing approaches and motivate the design of pessimistic algorithms more broadly. Using experiments in the MinAtar game suite (Young & Tian, 2019), we provide empirical validation for the predictions of our analysis.
The problem of fixed-dataset policy optimization is closely related to the problem of reinforcement learning, and as such, there is a large body of work which contains ideas related to those discussed in this paper. We discuss these works in detail in Appendix E.
2 BACKGROUND
We anticipate most readers will be familiar with the concepts and notation, which is fairly standard in the reinforcement learning literature. In the interest of space, we relegate a full presentation to Appendix A. Here, we briefly give an informal overview of the background necessary to understand the main results.
We represent the environment as a Markov Decision Process (MDP), denoted M := 〈S,A,R, P, γ, ρ〉. We assume without loss of generality that R(〈s, a〉) ∈ [0, 1], and denote its expectation as r(〈s, a〉). ρ represents the start-state distribution. Policies π can act in the environment, represented by action matrix Aπ , which maps each state to the probability of each state-action when following π. Value functions v assign some real value to each state. We use vπM to denote the value function which assigns the sum of discounted rewards in the environment when following policy π. A dataset D contains transitions sampled from the environment. From a dataset, we can compute the empirical reward and transition functions, rD and PD, and the empirical policy, π̂D.
An important concept for our analysis is the value uncertainty function, denoted µπD,δ , which returns a high-probability upper-bound to the error of a value function derived from datasetD. Certain value uncertainty functions are decomposable by states or state-actions, meaning they can be written as the weighted sum of more local uncertainties. See Appendix B for more detail.
Our goal is to analyze the suboptimality of a specific class of FDPO algorithms, called value-based FDPO algorithms, which have a straightforward structure: they use a fixed-dataset policy evaluation (FDPE) algorithm to assign a value to each policy, and then select the policy with the maximum value. Furthermore, we consider FDPE algorithms whose solutions satisfy a fixed-point equation. Thus, a fixed-point equation defines a FDPE objective, which in turn defines a value-based FDPO objective; we call the set of all algorithms that implement these objectives the family of algorithms defined by the fixed-point equation.
3 OVER/UNDER DECOMPOSITION OF SUBOPTIMALITY
Our first theoretical contribution is a simple but informative bound on the suboptimality of any value-based FDPO algorithm. Next, in Section 4, we make this concrete by defining the family of naı̈ve algorithms and invoking this bound. This bound is insightful because it distinguishes the impact of errors of value overestimation from errors of value underestimation, defined as: Definition 1. Consider any fixed-dataset policy evaluation algorithm E on any dataset D and any policy π. Denote vπD := E(D,π). We define the underestimation error as Eρ[vπM − vπD] and overestimation error as Eρ[vπD − vπM].
The following lemma shows how these quantities can be used to bound suboptimality.
Lemma 1 (Value-based FDPO suboptimality bound). Consider any value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}$, with fixed-dataset policy evaluation subroutine $\mathcal{E}$. For any policy $\pi$ and dataset $D$, denote $v^\pi_D := \mathcal{E}(D,\pi)$. The suboptimality of $\mathcal{O}^{VB}$ is bounded by
$$\text{SUBOPT}(\mathcal{O}^{VB}(D)) \le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho[v^\pi_\mathcal{M} - v^\pi_D]\right) + \sup_\pi\left(\mathbb{E}_\rho[v^\pi_D - v^\pi_\mathcal{M}]\right)$$
Proof. See Appendix C.1.
This bound is tight; see Appendix C.2. The bound highlights the potentially outsized impact of overestimation on the suboptimality of a FDPO algorithm. To see this, we consider each of its terms in isolation:
$$\text{SUBOPT}(\mathcal{O}^{VB}(D)) \le \underbrace{\inf_\pi\Big(\overbrace{\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}]}^{(A1)} + \overbrace{\mathbb{E}_\rho[v^\pi_\mathcal{M} - v^\pi_D]}^{(A2)}\Big)}_{(A)} + \underbrace{\sup_\pi\Big(\overbrace{\mathbb{E}_\rho[v^\pi_D - v^\pi_\mathcal{M}]}^{(B1)}\Big)}_{(B)}$$
The term labeled (A) reflects the degree to which the dataset informs us of a near-optimal policy. For any policy π, (A1) captures the suboptimality of that policy, and (A2) captures its underestimation error. Since (A) takes an infimum, this term will be small whenever there is at least one reasonable policy whose value is not very underestimated.
On the other hand, the term labeled (B) corresponds to the largest overestimation error on any policy. Because it consists of a supremum over all policies, it will be small only when no policies are overestimated at all. Even a single overestimation can lead to significant suboptimality.
We see from these two terms that errors of overestimation and underestimation have differing impacts on suboptimality, suggesting that algorithms should be designed with this asymmetry in mind. We will see in Section 5 how this may be done. But first, let us further understand why this is necessary by studying in more depth a family of algorithms which treats its errors of overestimation and underestimation equivalently.
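To see the decomposition in action, the following sketch enumerates every deterministic policy of a tiny random MDP, corrupts the true values with noise to stand in for an FDPE's estimates, and checks that the suboptimality of the value-based policy choice never exceeds the two terms of Lemma 1. The environment and noise model are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
S, A, gamma = 3, 2, 0.95
P = rng.dirichlet(np.ones(S), size=S * A)
r = rng.uniform(0, 1, S * A)
rho = np.full(S, 1.0 / S)

def ret(pi_det):  # expected return of a deterministic policy (action per state)
    Api = np.zeros((S, S * A))
    for s, a in enumerate(pi_det):
        Api[s, s * A + a] = 1.0
    return rho @ np.linalg.solve(np.eye(S) - gamma * Api @ P, Api @ r)

policies = list(product(range(A), repeat=S))
v_true = {p: ret(p) for p in policies}
v_star = max(v_true.values())
# Stand-in FDPE estimates: true values corrupted by noise (evaluation error).
v_est = {p: v_true[p] + rng.normal(0, 0.5) for p in policies}

# Term (A): (A1)+(A2) telescopes to v_star - v_est; term (B): overestimation.
term_A = min(v_star - v_est[p] for p in policies)
term_B = max(v_est[p] - v_true[p] for p in policies)
chosen = max(policies, key=lambda p: v_est[p])   # value-based FDPO output
assert v_star - v_true[chosen] <= term_A + term_B + 1e-9
```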
4 NAÏVE ALGORITHMS
The goal of this section is to paint a high-level picture of the worst-case suboptimality guarantees of a specific family of non-pessimistic approaches, which we call naı̈ve FDPO algorithms. Informally, the naı̈ve approach is to take the limited dataset of observations at face value, treating it as though it paints a fully accurate picture of the environment. Naı̈ve algorithms construct a maximum-likelihood MDP from the dataset, then use standard dynamic programming approaches on this empirical MDP. Definition 2. A naı̈ve algorithm is any algorithm in the family defined by the fixed-point function
$$f_{\text{naı̈ve}}(v^\pi) := A^\pi(r_D + \gamma P_D v^\pi).$$
Various FDPE and FDPO algorithms from this family could be described; in this work, we do not study these implementations in detail, although we do give pseudocode for some implementations in Appendix D.1.
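For intuition, here is a minimal sketch of one member of this family: Q-value iteration in the maximum-likelihood MDP followed by greedy policy extraction. The function name and interface are illustrative; the full pseudocode is in Appendix D.1.

```python
import numpy as np

def naive_fdpo(r_D, P_D, gamma, iters=1000):
    """Q-value iteration in the maximum-likelihood MDP <S, A, r_D, P_D, gamma>.

    r_D: (S*A,) empirical rewards; P_D: (S*A, S) empirical transitions.
    Returns a greedy deterministic policy, one action index per state.
    """
    SA, S = P_D.shape
    A = SA // S
    q = np.zeros(SA)
    for _ in range(iters):
        v = q.reshape(S, A).max(axis=1)   # greedy state values
        q = r_D + gamma * P_D @ v         # naive Bellman backup
    return q.reshape(S, A).argmax(axis=1)
```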
One example of a naı̈ve FDPO algorithm which can be found in the literature is certainty equivalence (Jiang, 2019a). The core ideas behind naı̈ve algorithms can also be found in the function approximation literature, for example in FQI (Ernst et al., 2005; Jiang, 2019b). Additionally, when available data is held fixed, nearly all existing deep reinforcement learning algorithms are transformed into naı̈ve value-based FDPO algorithms. For example, DQN (Mnih et al., 2015) with a fixed replay buffer is a naı̈ve value-based FDPO algorithm. Theorem 1 (Naı̈ve FDPO suboptimality bound). Consider any naı̈ve value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{naı̈ve}}$. Let $\mu$ be any value uncertainty function. The suboptimality of $\mathcal{O}^{VB}_{\text{naı̈ve}}$ is bounded with probability at least $1-\delta$ by
$$\text{SUBOPT}(\mathcal{O}^{VB}_{\text{naı̈ve}}(D)) \le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho[\mu^\pi_{D,\delta}]\right) + \sup_\pi \mathbb{E}_\rho[\mu^\pi_{D,\delta}]$$
Proof. This result follows directly from Lemma 1 and Lemma 3.
The infimum term is small whenever there is some reasonably good policy with low value uncertainty. In practice, this condition can typically be satisfied, for example by including expert demonstrations in the dataset. On the other hand, the supremum term will only be small if we have low value uncertainty for all policies – a much more challenging requirement. This explains the behavior of pathological examples, e.g. in Appendix G.1, where performance is poor despite access to virtually unlimited amounts of data from a near-optimal policy. Such a dataset ensures that the first term will be small by reducing value uncertainty of the near-optimal data collection policy, but does little to reduce the value uncertainty of any other policy, leading the second term to be large.
However, although pathological examples exist, it is clear that this bound will not be tight on all environments. It is reasonable to ask: is it likely that this bound will be tight on real-world examples? We argue that it likely will be. We identify two properties that most real-world tasks share: (1) The set of policies is pyramidal: there are an enormous number of bad policies, many mediocre policies, a few good policies, etc. (2) Due to the size of the state space and cost of data collection, most policies have high value uncertainty.
Given that these assumptions hold, naı̈ve algorithms will perform as poorly on most real-world environments as they do on pathological examples. Consider: there are many more policies than there is data, so there will be many policies with high value uncertainty; naı̈ve algorithms will likely overestimate several of these policies, and erroneously select one; since good policies are rare, the selected policy will likely be bad. It follows that running naı̈ve algorithms on real-world problems will typically yield suboptimality close to our worst-case bound. And, indeed, on deep RL benchmarks, which are selected due to their similarity to real-world settings, overestimation has been widely observed, typically correlated with poor performance (Bellemare et al., 2016; Van Hasselt et al., 2016; Fujimoto et al., 2019).
5 THE PESSIMISM PRINCIPLE
“Behave as though the world was plausibly worse than you observed it to be.” The pessimism principle tells us how to exploit our current knowledge to find the stationary policy with the best worstcase guarantee on expected return. We consider two specific families of pessimistic algorithms, the uncertainty-aware pessimistic algorithms and proximal pessimistic algorithms, and bound the worst-case suboptimality of each. These algorithms each include a hyperparameter, α, controlling the amount of pessimism, interpolating from fully-naı̈ve to fully-pessimistic. (For a discussion of the implications of the latter extreme, see Appendix G.2.) Then, we will compare the two families, and see how the proximal family is simply a trivial special case of the more general uncertainty-aware family of methods.
5.1 UNCERTAINTY-AWARE PESSIMISTIC ALGORITHMS
Our first family of pessimistic algorithms is the uncertainty-aware (UA) pessimistic algorithms. As the name suggests, this family of algorithms estimates the state-wise Bellman uncertainty and penalizes policies accordingly, leading to a pessimistic value estimate and a preference for policies with low value uncertainty. Definition 3. An uncertainty-aware pessimistic algorithm, with a Bellman uncertainty function uπD,δ and pessimism hyperparameter α ∈ [0, 1], is any algorithm in the family defined by the fixed-point function
$$f_{\text{ua}}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha u^\pi_{D,\delta}$$
This fixed-point function is simply the naı̈ve fixed-point function penalized by the Bellman uncertainty. This can be interpreted as being pessimistic about the outcome of every action. Note that it remains to specify a technique to compute the Bellman uncertainty function, e.g. Appendix B.1, in order to get a concrete algorithm. It is straightforward to construct algorithms from this family by modifying naı̈ve algorithms to subtract the penalty term (see the sketch below). Similar algorithms have been explored in the safe RL literature (Ghavamzadeh et al., 2016; Laroche et al., 2019) and the robust MDP literature (Givan et al., 1997), where algorithms with high-probability performance guarantees are useful in the context of ensuring safety.
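A minimal sketch of one such modification follows: Q-value iteration with a count-based penalty subtracted from each backup. The Hoeffding-style shape of the penalty is one illustrative choice of Bellman uncertainty function (in the spirit of Appendix B.1), not the only principled option.

```python
import numpy as np

def ua_pessimistic_fdpo(r_D, P_D, counts, gamma, alpha, delta=0.05, iters=1000):
    """Q-value-iteration analogue of Definition 3 (an illustrative sketch).

    counts: (S*A,) array of visit counts n_D(s, a). The penalty below is a
    crude Hoeffding-style uncertainty, clipped to the trivial bound
    1/(1 - gamma) wherever there is no data.
    """
    SA, S = P_D.shape
    A = SA // S
    span = 1.0 / (1 - gamma)
    u = np.full(SA, span)
    seen = counts > 0
    u[seen] = np.minimum(
        span * np.sqrt(np.log(2 * SA / delta) / (2 * counts[seen])), span)
    q = np.zeros(SA)
    for _ in range(iters):
        v = q.reshape(S, A).max(axis=1)
        q = r_D + gamma * P_D @ v - alpha * u   # penalized backup
    return q.reshape(S, A).argmax(axis=1)
```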
Theorem 2 (Uncertainty-aware pessimistic FDPO suboptimality bound). Consider an uncertainty-aware pessimistic value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{ua}}$. Let $u^\pi_{D,\delta}$ be any Bellman uncertainty function, $\mu^\pi_{D,\delta}$ be a corresponding value uncertainty function, and $\alpha \in [0,1]$ be any pessimism hyperparameter. The suboptimality of $\mathcal{O}^{VB}_{\text{ua}}$ is bounded with probability at least $1-\delta$ by
$$\text{SUBOPT}(\mathcal{O}^{VB}_{\text{ua}}(D)) \le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + (1+\alpha)\cdot\mathbb{E}_\rho[\mu^\pi_{D,\delta}]\right) + (1-\alpha)\cdot\left(\sup_\pi \mathbb{E}_\rho[\mu^\pi_{D,\delta}]\right)$$
Proof. See Appendix C.7.
This bound should be contrasted with our result from Theorem 1. With α = 0, the family of pessimistic algorithms reduces to the family of naı̈ve algorithms, so the bound is correspondingly identical. We can add pessimism by increasing α, and this corresponds to a decrease in the magnitude of the supremum term. When α = 1, there is no supremum term at all. In general, the optimal value of α lies between the two extremes.
To further understand the power of this approach, it is illustrative to compare it to imitation learning. Consider the case where the dataset contains a small number of expert trajectories but also a large number of interactions from a random policy, i.e. when learning from suboptimal demonstrations (Brown et al., 2019). If the dataset contained only a small amount of expert data, then both an UA pessimistic FDPO algorithm and an imitation learning algorithm would return a high-value policy. However, the injection of sufficiently many random interactions would degrade the performance of imitation learning algorithms, whereas UA pessimistic algorithms would continue to behave similarly to the expert data.
5.2 PROXIMAL PESSIMISTIC ALGORITHMS
The next family of algorithms we study are the proximal pessimistic algorithms, which implement pessimism by penalizing policies that deviate from the empirical policy. The name proximal was chosen to reflect the idea that these algorithms prefer policies which stay “nearby” to the empirical policy. Many FDPO algorithms in the literature, and in particular several recently-proposed deep learning algorithms (Fujimoto et al., 2019; Kumar et al., 2019; Laroche et al., 2019; Jaques et al., 2019; Wu et al., 2019; Liu et al., 2020), resemble members of the family of proximal pessimistic algorithms; see Appendix E. Also, another variant of the proximal pessimistic family, which uses state density instead of state-conditional action density, can be found in Appendix C.9.
Definition 4. A proximal pessimistic algorithm with pessimism hyperparameter α ∈ [0, 1] is any algorithm in the family defined by the fixed-point function
$$f_{\text{proximal}}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)$$
Theorem 3 (Proximal pessimistic FDPO suboptimality bound). Consider any proximal pessimistic value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{proximal}}$. Let $\mu$ be any state-action-wise decomposable value uncertainty function, and $\alpha \in [0,1]$ be a pessimism hyperparameter. For any dataset $D$, the suboptimality of $\mathcal{O}^{VB}_{\text{proximal}}$ is bounded with probability at least $1-\delta$ by
$$\begin{aligned}
\text{SUBOPT}(\mathcal{O}_{\text{proximal}}(D)) \le {}& \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho\left[\mu^\pi_{D,\delta} + \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)\right]\right) \\
&+ \sup_\pi\left(\mathbb{E}_\rho\left[\mu^\pi_{D,\delta} - \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_S(\pi, \hat\pi_D)}{(1-\gamma)^2}\right)\right]\right)
\end{aligned}$$
Proof. See Appendix C.8.
Once again, we see that as $\alpha$ grows, the large supremum term shrinks; similarly, by Lemma 5, when $\alpha = 1$, the supremum term is guaranteed to be non-positive.3 The primary limitation of the proximal approach is the looseness of the value lower-bound. Intuitively, this algorithm can be understood as performing imitation learning, but permitting minor deviations. Constraining the policy to be near in distribution to the empirical policy can fail to take advantage of highly-visited states which are reached via many trajectories. In fact, in contrast to both the naı̈ve approach and the UA pessimistic approach, in the limit of infinite data this approach is not guaranteed to converge to the optimal policy. Also, note that when $\alpha \ge 1-\gamma$, this algorithm is identical to imitation learning.
3Initially, the supremum term will also contain $\mu^{\pi'}_{D,\delta}$, but this can be removed since it is not dependent on $\pi$.
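For concreteness, a sketch of the proximal fixed point for a fixed tabular policy follows; it solves Definition 4's linear system directly, and assumes $\pi$ and $\hat\pi_D$ are given as $S \times A$ row-stochastic arrays.

```python
import numpy as np

def proximal_value(pi, pi_hat, r_D, P_D, gamma, alpha):
    """Solve Definition 4's fixed point for a fixed policy (tabular sketch).

    Returns the v satisfying
    v = A^pi (r_D + gamma P_D v) - alpha * TV_S(pi, pi_hat) / (1-gamma)^2.
    """
    S, A = pi.shape
    Api = np.zeros((S, S * A))
    for s in range(S):
        Api[s, s * A:(s + 1) * A] = pi[s]
    tv = 0.5 * np.abs(pi - pi_hat).sum(axis=1)   # TV_S(pi, pi_hat), per state
    penalty = alpha * tv / (1 - gamma) ** 2
    return np.linalg.solve(np.eye(S) - gamma * Api @ P_D, Api @ r_D - penalty)
```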
5.3 THE RELATIONSHIP BETWEEN UNCERTAINTY-AWARE AND PROXIMAL ALGORITHMS
Though these two families may appear on the surface to be quite different, they are in fact closely related. A key insight of our theoretical work is that it reveals the important connection between these two approaches. Concretely: proximal algorithms are uncertainty-aware algorithms which use a trivial value uncertainty function.
To see this, we show how to convert an uncertainty-aware penalty into a proximal penalty. Let $\mu$ be any state-action-wise decomposable value uncertainty function. For any dataset $D$, we have
$$\begin{aligned}
\mu^\pi_{D,\delta} &= \mu^{\hat\pi_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\hat\pi_D})(u_{D,\delta} + \gamma P_D \mu^{\hat\pi_D}_{D,\delta})\right) && \text{(Lemma 4)} \\
&\le \mu^{\hat\pi_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\mathrm{TV}_S(\pi, \hat\pi_D)\left(\frac{1}{(1-\gamma)^2}\right)\right). && \text{(Lemma 5)}
\end{aligned}$$
We began with the uncertainty penalty. In the first step, we rewrote the uncertainty for $\pi$ as the sum of two terms: the uncertainty for $\hat\pi_D$, and the difference in uncertainty between $\pi$ and $\hat\pi_D$ on various actions. In the second step, we chose our state-action-wise Bellman uncertainty to be $\frac{1}{1-\gamma}$, which is a trivial upper bound; we also upper-bound the signed policy difference with the total variation. This results in the proximal penalty.4
4When constructing the penalty, we can ignore the first term, which does not contain $\pi$, and so is irrelevant to optimization.
Thus, we see that proximal penalties are equivalent to uncertainty-aware penalties which use a specific, trivial uncertainty function. This result suggests that uncertainty-aware algorithms are strictly better than their proximal counterparts. There is no looseness in this result: for any proximal penalty, we will always be able to find a tighter uncertainty-aware penalty by replacing the trivial uncertainty function with something tighter.
However, currently, proximal algorithms are quite useful in the context of deep learning. This is because the only uncertainty function that can currently be implemented for neural networks is the trivial uncertainty function. Until we discover how to compute uncertainties for neural networks, proximal pessimistic algorithms will remain the only theoretically-motivated family of algorithms.
6 EXPERIMENTS
We implement algorithms from each family to empirically investigate whether their performance follows the predictions of our bounds. Below, we summarize the key predictions of our theory.
• Imitation. This algorithm simply learns to copy the empirical policy. It performs well if and only if the data collection policy performs well.
• Naı̈ve. This algorithm performs well only when almost no policies have high value uncertainty. This means that when the data is collected from any mostly-deterministic policy, performance of this algorithm will be poor, since many states will be missing data. Stochastic data collection improves performance. As the size of the dataset grows, this algorithm approaches optimality.
• Uncertainty-aware. This algorithm performs well when there is data on states visited by near-optimal policies. This is the case when a small amount of data has been collected from a near-optimal policy, or a large amount of data has been collected from a worse policy. As the size of the dataset grows, this algorithm approaches optimality. This approach outperforms all other approaches.
• Proximal. This algorithm roughly mirrors the performance of the imitation approach, but improves upon it. As the size of the dataset grows, this algorithm does not approach optimality, as the penalty persists even when the environment’s dynamics are perfectly captured by the dataset.
Our experimental results qualitatively align with our predictions in both the tabular and deep learning settings, giving evidence that the picture painted by our theoretical analysis truly describes the FDPO setting. See Appendix D for pseudocode of all algorithms; see Appendix F for details on the experimental setup; see Appendix G.3 for additional experimental considerations for deep learning experiments that will be of interest to practitioners. For an open-source implementation, including full details suitable for replication, please refer to the code in the accompanying GitHub repository: github.com/anonymized
Tabular. The first tabular experiment, whose results are shown in Figure 1(a), compares the performance of the algorithms as the policy used to collect the dataset is interpolated from the uniform random policy to an optimal policy using $\epsilon$-greedy. The second experiment, whose results are shown in Figure 1(b), compares the performance of the algorithms as we increase the size of the dataset from 1 sample to 200000 samples. In both experiments, we notice a qualitative difference between the trends of the various algorithms, which aligns with the predictions of our theory.
Neural network. The results of these experiments can be seen in Figure 2. Similarly to the tabular experiments, we see that the naı̈ve approach performs well when data is fully exploratory, and poorly when data is collected via an optimal policy; the pure imitation approach performs better when the data collection policy is closer to optimal. The pessimistic approach achieves the best of both worlds: it correctly imitates a near-optimal policy, but also learns to improve upon it somewhat when the data is more exploratory. One notable failure case is FREEWAY, where the performance of the pessimistic approach barely improves upon the imitation policy, despite the naı̈ve approach performing near-optimally for intermediate values of $\epsilon$.
7 DISCUSSION AND CONCLUSION
In this work, we provided a conceptual and mathematical framework for thinking about fixed-dataset policy optimization. Starting from intuitive building blocks of uncertainty and the over-under decomposition, we showed the core issue with naı̈ve approaches, and introduced the pessimism principle as the defining characteristic of solutions. We described two families of pessimistic algorithms, uncertainty-aware and proximal. We see theoretically that both of these approaches have advantages over the naı̈ve approach, and observed these advantages empirically. Comparing these two families of pessimistic algorithms, we see both theoretically and empirically that uncertainty-aware
algorithms are strictly better than proximal algorithms, and that proximal algorithms may not yield the optimal policy, even with infinite data.
Future directions. Our results indicate that research in FDPO should not focus on proximal algorithms. The development of neural uncertainty estimation techniques will enable principled uncertainty-aware deep learning algorithms. As is evidenced by our tabular results, we expect these approaches to yield dramatic performance improvements, rendering algorithms derived from the proximal family (Kumar et al., 2019; Fujimoto et al., 2019; Laroche et al., 2019; Kumar et al., 2020) obsolete.
On ad-hoc solutions. It is undoubtedly disappointing to see that proximal algorithms, which are far easier to implement, are fundamentally limited in this way. It is tempting to propose various ad-hoc solutions to mitigate the flaws of proximal pessimistic algorithms in practice. However, in order to ensure that the resulting algorithm is principled, one must be careful. For example, one might consider tuning α; however, doing the tuning requires evaluating each policy in the environment, which involves gaining information by interacting with the environment, which is not permitted by the problem setting. Or, one might consider e.g. an adaptive pessimism hyperparameter which decays with the size of the dataset; however, in order for such a penalty to be principled, it must be based on an uncertainty function, at which point we may as well just use an uncertainty-aware algorithm.
Stochastic policies. One surprising property of pessimistic algorithms is that the optimal policy is often stochastic. This is because the penalty term included in their fixed-point objective is often minimized by stochastic policies. For the penalty of proximal pessimistic algorithms, it is easy to see that this will be the case for any non-deterministic empirical policy; for UA pessimistic algorithms, it is dependent on the choice of Bellman uncertainty function, but often still holds (see Appendix B.2 for the derivation of a Bellman uncertainty function with this property). This observation lends mathematical rigor to the intuition that agents should 'hedge their bets' in the face of epistemic uncertainty. This property also means that the simple approach of selecting the argmax action is no longer adequate for policy improvement. In Appendix D.2.2 we discuss a policy improvement procedure that takes into account the proximal penalty to find the stochastic optimal policy.
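To illustrate the kind of procedure Appendix D.2.2 describes, here is one hedged per-state improvement step under the proximal penalty. Because the penalized objective is linear in the amount of probability mass moved, mass should be shifted to the greedy action exactly from those actions whose q-gap exceeds the threshold $\alpha/(1-\gamma)^2$; the function and its interface are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def proximal_improvement_step(q_s, pi_hat_s, gamma, alpha):
    """Per-state greedy step under the proximal penalty (a sketch).

    Maximizes  p @ q_s - (alpha / (1-gamma)**2) * TV(p, pi_hat_s)  over
    distributions p. Moving mass m from action a to the argmax action a*
    gains m * (q_s[a*] - q_s[a]) and costs m * alpha / (1-gamma)**2 in
    penalty, so mass is moved exactly when the q-gap exceeds the threshold.
    """
    c = alpha / (1 - gamma) ** 2
    a_star = int(np.argmax(q_s))
    p = pi_hat_s.copy()
    move = (q_s[a_star] - q_s) > c        # actions worth abandoning
    move[a_star] = False
    p[a_star] += p[move].sum()
    p[move] = 0.0
    return p
```

Note that the returned policy keeps the empirical policy's mass on all actions that are nearly as good as the greedy one, which is exactly the hedging behavior described above.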
Implications for RL. Finally, due to the close connection between the FDPO and RL settings, this work has implications for deep reinforcement learning. Many popular deep RL algorithms utilize a replay buffer to break the correlation between samples in each minibatch (Mnih et al., 2015). However, since these algorithms typically alternate between collecting data and training the network, the replay buffer can also be viewed as a 'temporarily fixed' dataset during the training phase. These algorithms are often very sensitive to hyperparameters; in particular, they perform poorly when the number of learning steps per interaction is large (Fedus et al., 2020). This effect can be explained by our analysis: additional steps of learning cause the policy to approach its naı̈ve FDPO fixed-point, which has poor worst-case suboptimality. A pessimistic algorithm with a better fixed-point could therefore allow us to train more per interaction, improving sample efficiency. A potential direction of future work is therefore to incorporate pessimism into deep RL.
A BACKGROUND
We write vectors using bold lower-case letters, a, and matrices using upper-case letters, A. To refer to individual cells of a vector or rows of a matrix, we use function notation, a(x). We write the identity matrix as I . We use the notation Ep[·] to denote the average value of a function under a distribution p, i.e. for any space X , distribution p ∈ Dist(X ), and function a : X → R, we have Ep[a] := Ex∼p[a(x)].
When applied to vectors or matrices, we use $<, >, \le, \ge$ to denote element-wise comparison. Similarly, we use $|\cdot|$ to denote the element-wise absolute value of a vector: $|a|(x) = |a(x)|$. We use $|a|_+$ to denote the element-wise maximum of $a$ and the zero vector. To denote the total variation distance between two probability distributions, we use $\mathrm{TV}(p,q) = \frac{1}{2}|p-q|_1$. When $P$ and $Q$ are conditional probability distributions, we adopt the convention $\mathrm{TV}_X(p,q) = \langle \frac{1}{2}|p(\cdot|x) - q(\cdot|x)|_1 : x \in X\rangle$, i.e., the vector of total variation distances conditioned on each $x \in X$.
Markov Decision Processes. We represent the environment with which we are interacting as a Markov Decision Process (MDP), defined in standard fashion: $\mathcal{M} := \langle S, A, R, P, \gamma, \rho\rangle$. $S$ and $A$ denote the state and action space, which we assume are discrete. We use $Z := S \times A$ as shorthand for the joint state-action space. The reward function $R : Z \to \mathrm{Dist}([0,1])$ maps state-action pairs to distributions over the unit interval, while the transition function $P : Z \to \mathrm{Dist}(S)$ maps state-action pairs to distributions over next states. Finally, $\rho \in \mathrm{Dist}(S)$ is the distribution over initial states. We use $r$ to denote the expected reward function, $r(\langle s,a\rangle) := \mathbb{E}_{r \sim R(\cdot|\langle s,a\rangle)}[r]$, which can also be interpreted as a vector $r \in \mathbb{R}^{|Z|}$. Similarly, note that $P$ can be described as $P : (Z \times S) \to \mathbb{R}$, which can be represented as a stochastic matrix $P \in \mathbb{R}^{|Z| \times |S|}$. In order to emphasize that these reward and transition functions correspond to the true environment, we sometimes equivalently denote them as $r_\mathcal{M}, P_\mathcal{M}$. To denote constant vectors whose sizes are the state and state-action space, we use a single dot to mean state and two dots to mean state-action, e.g., $\dot{\mathbf{1}} \in \mathbb{R}^{|S|}$ and $\ddot{\mathbf{1}} \in \mathbb{R}^{|Z|}$. A policy $\pi : S \to \mathrm{Dist}(A)$ defines a distribution over actions, conditioned on a state. We denote the space of all possible policies as $\Pi$. We define an "activity matrix" for each policy, $A^\pi \in \mathbb{R}^{S \times Z}$, which encodes the state-conditional state-action distribution of $\pi$, by letting $A^\pi(s, \langle \dot s, a\rangle) := \pi(a|s)$ if $s = \dot s$, and $A^\pi(s, \langle \dot s, a\rangle) := 0$ otherwise. Acting in the MDP according to $\pi$ can thus be represented by $A^\pi P \in \mathbb{R}^{|S| \times |S|}$ or $P A^\pi \in \mathbb{R}^{|Z| \times |Z|}$. We define a value function as any $v : \Pi \to S \to \mathbb{R}$ or $q : \Pi \to Z \to \mathbb{R}$ whose output is bounded by $[0, \frac{1}{1-\gamma}]$. Note that this is a slight generalization of the standard definition (Sutton & Barto, 2018) since it accepts a policy as an input. We use the shorthand $v^\pi := v(\pi)$ and $q^\pi := q(\pi)$ to denote the result of applying a value function to a specific policy, which can also be represented as a vector, $v^\pi \in \mathbb{R}^{|S|}$ and $q^\pi \in \mathbb{R}^{|Z|}$. To denote the output of an arbitrary value function on an arbitrary policy, we use unadorned $v$ and $q$. The expected return of an MDP $\mathcal{M}$, denoted $v_\mathcal{M}$ or $q_\mathcal{M}$, is the discounted sum of rewards acquired when interacting with the environment:
$$v_\mathcal{M}(\pi) := \sum_{t=0}^{\infty} (\gamma A^\pi P)^t A^\pi r \qquad q_\mathcal{M}(\pi) := \sum_{t=0}^{\infty} (\gamma P A^\pi)^t r$$
Note that $v^\pi_\mathcal{M} = A^\pi q^\pi_\mathcal{M}$. An optimal policy of an MDP, which we denote $\pi^*_\mathcal{M}$, is a policy for which the expected return $v_\mathcal{M}$ is maximized under the initial state distribution: $\pi^*_\mathcal{M} := \arg\max_\pi \mathbb{E}_\rho[v^\pi_\mathcal{M}]$. The state-wise expected returns of an optimal policy can be written as $v^{\pi^*_\mathcal{M}}_\mathcal{M}$. Of particular interest are value functions whose outputs obey fixed-point relationships, $v^\pi = f(v^\pi)$ for some $f : (S \to \mathbb{R}) \to (S \to \mathbb{R})$. The Bellman consistency operator for a policy $\pi$ is $\mathcal{B}^\pi_\mathcal{M}(x) := A^\pi(r + \gamma P x)$; it uniquely identifies the vector of expected returns for $\pi$, since $v^\pi_\mathcal{M}$ is the only vector for which $v^\pi_\mathcal{M} = \mathcal{B}^\pi_\mathcal{M}(v^\pi_\mathcal{M})$ holds. Finally, for any state $s$, the probability of being in the state $s'$ after $t$ time steps when following policy $\pi$ is $[(A^\pi P)^t](s, s')$. Furthermore, $\sum_{t=0}^{\infty}(\gamma A^\pi P)^t = (I - \gamma A^\pi P)^{-1}$. We refer to $(I - \gamma A^\pi P)^{-1}$ as the discounted visitation of $\pi$.
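These closed forms are straightforward to compute; the sketch below builds a random MDP, evaluates a policy via the discounted visitation, and checks that each row of the visitation sums to $\frac{1}{1-\gamma}$ (a fact used in Appendix C.8). The environment here is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(6)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=S * A)
r = rng.uniform(0, 1, S * A)
pi = rng.dirichlet(np.ones(A), size=S)
Api = np.zeros((S, S * A))
for s in range(S):
    Api[s, s * A:(s + 1) * A] = pi[s]

visitation = np.linalg.inv(np.eye(S) - gamma * Api @ P)  # discounted visitation
v_pi = visitation @ Api @ r                              # v_M(pi)
# Each row of the discounted visitation sums to 1/(1 - gamma).
assert np.allclose(visitation.sum(axis=1), 1 / (1 - gamma))
```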
Datasets. We next introduce basic concepts that are helpful for framing the problem of fixed-dataset policy optimization. We define a dataset of $d$ transitions $D := \{\langle s, a, r, s'\rangle\}^d$, and denote the space of all datasets as $\mathbf{D}$. In this work, we specifically consider datasets sampled from a data distribution $\Phi : \mathrm{Dist}(Z)$; for example, the distribution of state-actions reached by following some stationary policy. We use $D \sim \Phi^d$ to denote constructing a dataset of $d$ tuples $\langle s, a, r, s'\rangle$, by first sampling each $\langle s, a\rangle \sim \Phi$, and then sampling $r$ and $s'$ i.i.d. from the environment reward function and transition function respectively, i.e. each $r \sim R(\cdot|\langle s,a\rangle)$ and $s' \sim P(\cdot|\langle s,a\rangle)$.5 We sometimes index $D$ using function notation, using $D(s,a)$ to denote the multiset of all $\langle r, s'\rangle$ such that $\langle s, a, r, s'\rangle \in D$. We use $\ddot{n}_D \in \mathbb{R}^{|Z|}$ to denote the vector of counts, that is, $\ddot{n}_D(\langle s,a\rangle) := |D(s,a)|$. We sometimes use state-wise versions of these vectors, which we denote with $\dot{n}_D$. It is further useful to consider the maximum-likelihood reward and transition functions, computed by averaging all rewards and transitions observed in the dataset for each state-action. To this end, we define the empirical reward vector $r_D(\langle s,a\rangle) := \frac{\sum_{r,s' \in D(\langle s,a\rangle)} r}{|D(\langle s,a\rangle)|}$ and the empirical transition matrix $P_D(s'|\langle s,a\rangle) := \frac{\sum_{r,\dot s' \in D(\langle s,a\rangle)} \mathbb{I}(\dot s' = s')}{|D(\langle s,a\rangle)|}$ at all state-actions for which $\ddot{n}_D(\langle s,a\rangle) > 0$. Where $\ddot{n}_D(\langle s,a\rangle) = 0$, there is no clear way to define the maximum-likelihood estimates of reward and transition, so we do not specify them. All our results hold no matter how these values are chosen, so long as $r_D \in [0, \frac{1}{1-\gamma}]$ and $P_D$ is stochastic. The empirical policy of a dataset $D$ is defined as $\hat\pi_D(a|s) := \frac{|D(\langle s,a\rangle)|}{|D(\langle s,\cdot\rangle)|}$ except where $\ddot{n}_D(\langle s,a\rangle) = 0$, where it can similarly be any valid action distribution. The empirical visitation distribution of a dataset $D$ is computed in the same way as the visitation distribution, but with $P_D$ replacing $P$, i.e. $(I - \gamma A^\pi P_D)^{-1}$.
5Note that this is in some sense a simplifying assumption. In practice, datasets will typically be collected using a trajectory from a non-stationary policy, rather than i.i.d. sampling from the stationary distribution of a stationary policy. This greatly complicates the analysis, so we do not consider that setting in this work.
Problem setting. The primary focus of this work is on the properties of fixed-dataset policy optimization (FDPO) algorithms. These algorithms take the form of a function O : D → Π, which maps from a dataset to a policy.6 Note that in this work, we consider D ∼ Φd, so the dataset is a random variable, and therefore O(D) is also a random variable. The goal of any FDPO algorithm is to output a policy with minimum suboptimality, i.e. maximum return. Suboptimality is a random variable dependent on D, computed by taking the difference between the expected return of an optimal policy and the learned policy under the initial state distribution,
$$\text{SUBOPT}(\mathcal{O}(D)) = \mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M}] - \mathbb{E}_\rho[v^{\mathcal{O}(D)}_\mathcal{M}].$$
A related concept is the fixed-dataset policy evaluation algorithm, which is any function E : D → Π→ S → R, which uses a dataset to compute a value function. In this work, we focus our analysis on value-based FDPO algorithms, the subset of FDPO algorithms that utilize FDPE algorithms.7 A value-based FDPO algorithm with FDPE subroutine Esub is any algorithm with the following structure:
$$\mathcal{O}^{VB}_{\text{sub}}(D) := \arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{sub}}(D,\pi)].$$
We define a fixed-point family of algorithms, sometimes referred to as just a family, in the following way. Any family called family is based on a specific fixed-point identity ffamily. We use the notation Efamily to denote any FDPE algorithm whose output vπD := Efamily(D,π) obeys vπD = ffamily(vπD). Finally, OVBfamily refers to any value-based FDPO algorithm whose subroutine is Efamily. We call the set of all algorithms that could implement Efamily the family of FDPE algorithms, and the set of all algorithms that could implement OVBfamily as the family of FDPO algorithms.
B UNCERTAINTY
Epistemic uncertainty measures an agent’s knowledge about the world, and is therefore a core concept in reinforcement learning, both for exploration and exploitation; it plays an important role in this work. Most past analyses in the literature implicitly compute some form of epistemic uncertainty to derive their bounds. An important analytical choice in this work is to cleanly separate the estimation of epistemic uncertainty from the problem of decision making. Our approach is to first define a notion of uncertainty as a function with certain properties, then assume that such a function exists, and provide the remainder of our technical results under such an assumption. We also describe several approaches to computing uncertainty.
Definition 5 (Uncertainty). A function uD,δ : Z → R is a state-action-wise Bellman uncertainty function if for a dataset D ∼ Φd, it obeys with probability at least 1 − δ for all policies π and all values v:
$$u_{D,\delta} \ge \left|(r_\mathcal{M} + \gamma P_\mathcal{M} v) - (r_D + \gamma P_D v)\right|$$
6This formulation hides a dependency on ρ, the start-state distribution of the MDP. In general, ρ can be estimated from the dataset D, but this estimation introduces some error that affects the analysis. In this work, we assume for analytical simplicity that ρ is known a priori. Technically, this means that it must be provided as an input to O. We hide this dependency for notational clarity.
7Intuitively, these algorithms use a policy evaluation subroutine to convert a dataset into a value function, and return an optimal policy according to that value function. Importantly, this definition constrains only the objective of the approach, not its actual algorithmic implementation, i.e., it includes algorithms which never actually invoke the FDPE subroutine. For many online model-free reinforcement learning algorithms, including policy iteration, value iteration, and Q-learning, we can construct closely analogous value-based FDPO algorithms; see Appendix D.1. Furthermore, model-based techniques can be interpreted as using a model to implicitly define a value function, and then optimizing that value function; our results also apply to model-based approaches.
A function uπD,δ : S → R is a state-wise Bellman uncertainty function if for a dataset D ∼ Φd and policy π, it obeys with probability at least 1− δ for all policies π and all values v:
$$u^\pi_{D,\delta} \ge \left|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)\right|$$
A function $\mu^\pi_{D,\delta} : S \to \mathbb{R}$ is a value uncertainty function if for a dataset $D \sim \Phi^d$ and policy $\pi$, it obeys with probability at least $1-\delta$ for all policies $\pi$ and all values $v$:
$$\mu^\pi_{D,\delta} \ge \sum_{t=0}^{\infty} (\gamma A^\pi P_D)^t \left|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)\right|$$
We refer to the quantity returned by an uncertainty function as uncertainty, e.g., value uncertainty refers to the quantity returned by a value uncertainty function.
Given a state-action-wise Bellman uncertainty function u_{D,δ}, it is easy to verify that A^π u_{D,δ} is a state-wise Bellman uncertainty function. Similarly, given a state-wise Bellman uncertainty function u^π_{D,δ}, it is easy to verify that (I − γA^π P_D)^{−1} u^π_{D,δ} is a value uncertainty function. Uncertainty functions which can be constructed in this way are called decomposable.

Definition 6 (Decomposability). A state-wise Bellman uncertainty function u^π_{D,δ} is state-action-wise decomposable if there exists a state-action-wise Bellman uncertainty function u_{D,δ} such that u^π_{D,δ} = A^π u_{D,δ}. A value uncertainty function µ is state-wise decomposable if there exists a state-wise Bellman uncertainty function u^π_{D,δ} such that µ = (I − γA^π P_D)^{−1} u^π_{D,δ}, and further, it is state-action-wise decomposable if that u^π_{D,δ} is itself state-action-wise decomposable.
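To make this composition concrete, the following minimal sketch (our own illustrative helper; the function name and array layout are assumptions, not part of the formal development) builds a state-wise Bellman uncertainty A^π u_{D,δ} and a value uncertainty (I − γA^π P_D)^{−1} A^π u_{D,δ} from a given state-action-wise Bellman uncertainty vector:

```python
import numpy as np

def compose_uncertainties(A_pi, P_D, u_sa, gamma):
    """Compose a state-action-wise Bellman uncertainty u_sa into a
    state-wise Bellman uncertainty A^pi u and a value uncertainty
    (I - gamma A^pi P_D)^{-1} A^pi u, mirroring Definition 6.

    A_pi : (S, S*A) activity matrix of policy pi
    P_D  : (S*A, S) empirical transition matrix
    u_sa : (S*A,)   state-action-wise Bellman uncertainty
    """
    u_pi = A_pi @ u_sa  # state-wise Bellman uncertainty
    n_states = A_pi.shape[0]
    # value uncertainty: solve (I - gamma A^pi P_D) mu = u_pi
    mu_pi = np.linalg.solve(np.eye(n_states) - gamma * (A_pi @ P_D), u_pi)
    return u_pi, mu_pi
```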
How do these definitions correspond to our intuitions about uncertainty? An application of the environment’s true Bellman update can be viewed as updating the value of each state to reflect information about its future. However, no algorithm in the FDPO setting can apply such an update, because the true environment dynamics are unknown. We may use the dataset to estimate what such an update would look like, but since the limited information in the dataset may not fully specify the properties of the environment, this update will be slightly wrong. It is intuitive to say that the uncertainty at each state corresponds to how well the approximate update matches the truth. Our definition of Bellman uncertainty captures precisely this notion. Further, value uncertainty represents the accumulation of these errors over all future timesteps.
How can we algorithmically implement uncertainty functions? In other words, how can we compute a function with the properties required for Definition 5, using only the information in the dataset? All forms of uncertainty are upper-bounds to a quantity, and a tighter upper-bound means that other results down the line which leverage this quantity will be improved. Therefore, it is worth considering this question carefully.
A trivial approach to computing any uncertainty function is simply to return 1/(1−γ) for all s and ⟨s, a⟩. But although this is technically a valid uncertainty function, it is not very useful, because it does not concentrate with data and does not distinguish between certain and uncertain states. It is very loose and so leads to poor guarantees.
In tabular environments, one way to implement a Bellman uncertainty function is to use a concentration inequality. Depending on which concentration inequality is used, many Bellman uncertainty functions are possible. These approaches lead to Bellman uncertainty which is lower at states with more data, typically in proportion to the square root of the count. To illustrate how to do this, we show in Appendix B.1 how a basic application of Hoeffding’s inequality can be used to derive a state-action-wise Bellman uncertainty. In Appendix B.2, we show an alternative application of Hoeffding’s which results in a state-wise Bellman uncertainty, which is a tighter bound on error. In Appendix B.3, we discuss other techniques which may be useful in tightening further.
When the value function is represented by a neural network, it is not currently known how to implement a Bellman uncertainty function. When an empirical Bellman update is applied to a neural
network, the change in value of any given state is impacted by generalization from other states. Therefore, the counts are not meaningful, and concentration inequalities are not applicable. In the neural network literature, many “uncertainty estimation techniques” have been proposed, which capture something analogous to an intuitive notion of uncertainty; however, none are principled enough to be useful in computing Bellman uncertainty.
B.1 STATE-ACTION-WISE BOUND
We seek to construct an u^π_{D,δ} such that |A^π(r_\mathcal{M} + γP_\mathcal{M} v) − A^π(r_D + γP_D v)| ≤ u^π_{D,δ} with probability at least 1 − δ. Firstly, let's consider the simplest possible bound. v is bounded in [0, 1/(1−γ)], so both A^π(r_\mathcal{M} + γP_\mathcal{M} v) and A^π(r_D + γP_D v) must be as well. Thus, their difference is also bounded:

$$\left|A^\pi (r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi (r_D + \gamma P_D v)\right| \le \frac{1}{1-\gamma}$$
Next, consider that for any ⟨s, a⟩, the expression r_D(⟨s, a⟩) + γP_D(⟨s, a⟩)v can be equivalently expressed as a mean of random variables,

$$r_D(\langle s, a\rangle) + \gamma P_D(\langle s, a\rangle)v = \frac{1}{\ddot{n}_D(\langle s, a\rangle)} \sum_{r, s' \in D(\langle s, a\rangle)} \big(r + \gamma v(s')\big),$$

each with expected value

$$\mathbb{E}_{r, s' \in D(\langle s, a\rangle)}\big[r + \gamma v(s')\big] = \mathbb{E}_{\substack{r \sim R(\cdot|\langle s, a\rangle) \\ s' \sim P(\cdot|\langle s, a\rangle)}}\big[r + \gamma v(s')\big] = [r_\mathcal{M} + \gamma P_\mathcal{M} v](\langle s, a\rangle).$$
Note also that each of these random variables is bounded in [0, 1/(1−γ)]. Thus, Hoeffding's inequality tells us that this mean of bounded random variables must be close to their expectation with high probability. By invoking Hoeffding's at each of the |S × A| state-actions, and taking a union bound, we see that with probability at least 1 − δ,

$$\left|(r_\mathcal{M} + \gamma P_\mathcal{M} v) - (r_D + \gamma P_D v)\right| \le \frac{1}{1-\gamma} \sqrt{\frac{1}{2} \ln \frac{2|\mathcal{S} \times \mathcal{A}|}{\delta} \, \ddot{n}_D^{-1}}$$
We can left-multiply by A^π and rearrange to get:

$$\left|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)\right| \le \left(\frac{1}{1-\gamma} \sqrt{\frac{1}{2} \ln \frac{2|\mathcal{S} \times \mathcal{A}|}{\delta}}\right) A^\pi \ddot{n}_D^{-1/2}$$
Finally, we simply intersect this bound with the 1/(1−γ) bound from earlier. Thus, we see that with probability at least 1 − δ,

$$\left|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)\right| \le \frac{1}{1-\gamma} \cdot \min\left(\left(\sqrt{\frac{1}{2} \ln \frac{2|\mathcal{S} \times \mathcal{A}|}{\delta}}\right) A^\pi \ddot{n}_D^{-1/2},\; 1\right)$$
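In the tabular setting this final bound is straightforward to compute directly from visit counts. The following is a minimal sketch of our own (the function name and flat count representation are assumptions):

```python
import numpy as np

def hoeffding_sa_bellman_uncertainty(counts, gamma, delta):
    """State-action-wise Bellman uncertainty via Hoeffding plus a union
    bound: (1/(1-gamma)) * min(sqrt(0.5 * ln(2|SxA|/delta) / n_D), 1).

    counts : (S*A,) array of visit counts n_D for each state-action.
    """
    n_sa = counts.size  # |S x A|, for the union bound
    log_term = np.sqrt(0.5 * np.log(2 * n_sa / delta))
    with np.errstate(divide="ignore"):
        per_sa = log_term / np.sqrt(counts.astype(float))  # inf where count is 0
    # unvisited state-actions fall back to the trivial 1/(1-gamma) bound
    return np.minimum(per_sa, 1.0) / (1.0 - gamma)
```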
B.2 STATE-WISE BOUND
This bound is similar to the previous, but uses Hoeffding’s to bound the value at each state all at once, rather than bounding the value at each state-action.
Choose a collection of possible state-local policies Πlocal ⊆ Dist(A). Each state-local policy is a member of the action simplex.
For any s ∈ S and π ∈ Π_local, the expression [A^π(r_D + γP_D v)](s) can be equivalently expressed as a mean of random variables,

$$[A^\pi(r_D + \gamma P_D v)](s) = \frac{1}{\dot{n}_D(s)} \sum_{a, r, s' \in D(s)} \frac{\pi(a|s)}{\hat{\pi}_D(a|s)} \big(r + \gamma v(s')\big),$$

each with expected value

$$\mathbb{E}_{a, r, s' \in D(s)}\left[\frac{\pi(a|s)}{\hat{\pi}_D(a|s)} \big(r + \gamma v(s')\big)\right] = \mathbb{E}_{\substack{a \sim \pi(\cdot|s) \\ r \sim R(\cdot|\langle s, a\rangle) \\ s' \sim P(\cdot|\langle s, a\rangle)}}\big[r + \gamma v(s')\big] = [A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v)](s).$$
Note also that each of these random variables is bounded in [0, (1/(1−γ)) · π(a|s)/π̂_D(a|s)]. Thus, Hoeffding's inequality tells us that this sum of bounded random variables must be close to its expectation with high probability. By invoking Hoeffding's at each of the |S| states and |Π_local| local policies, and taking a union bound, we see that with probability at least 1 − δ,

$$\left|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)\right| \le \frac{1}{1-\gamma} \sqrt{\frac{1}{2} \ln \frac{2|\mathcal{S} \times \Pi_{\text{local}}|}{\delta} \, (A^\pi)^{\circ 2}\, \ddot{n}_D^{-1}}$$
where the term (A^π)^{◦2} refers to the elementwise square. Finally, we once again intersect with 1/(1−γ), yielding that with probability at least 1 − δ,

$$\left|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)\right| \le \frac{1}{1-\gamma} \cdot \min\left(\sqrt{\frac{1}{2} \ln \frac{2|\mathcal{S} \times \Pi_{\text{local}}|}{\delta} \, (A^\pi)^{\circ 2}\, \ddot{n}_D^{-1}},\; 1\right)$$
Comparing this to the Bellman uncertainty function in Appendix B.1, we see two differences. Firstly, we have replaced a factor of |A| with |Π_local|, typically loosening the bound somewhat (depending on the choice of considered local policies). Secondly, A^π has now moved inside of the square root; since the square root is concave, Jensen's inequality says that

$$\sqrt{(A^\pi)^{\circ 2}\, \ddot{n}_D^{-1}} \le \sqrt{(A^\pi)^{\circ 2}}\, \sqrt{\ddot{n}_D^{-1}} = A^\pi \ddot{n}_D^{-1/2}$$

and so this represents a tightening of the bound.
When Π_local is the set of deterministic policies, this bound is equivalent to that of Appendix B.1. This can easily be seen by noting that for a deterministic policy, all elements of (A^π)^{◦2} are either 1 or 0, and so

$$\sqrt{(A^\pi)^{\circ 2}\, \ddot{n}_D^{-1}} = A^\pi \ddot{n}_D^{-1/2}$$

and also that the size of the set of deterministic state-local policies is exactly |A|.

An important property of this bound is that it shows that stochastic policies can often be evaluated with lower error than deterministic policies. We prove this by example. Consider an MDP with a single state s and two actions a_0, a_1, and a dataset with n̈_D(⟨s, a_0⟩) = n̈_D(⟨s, a_1⟩) = 2. We can parameterize the policy by a single number ξ ∈ [0, 1] by setting π(a_0|s) = ξ, π(a_1|s) = 1 − ξ. The size of this bound will be proportional to $\sqrt{\xi^2/2 + (1-\xi)^2/2}$, and setting the derivative equal to zero, we see that the minimum is at ξ = 1/2. (Of course, we would need to increase the size of our local policy set to include this policy in order to be able to actually select it, and doing so will increase the overall bound; this example only shows that for a given policy set, the selected policy may in general be stochastic.)
Finding the optimum for larger local policy sets is non-trivial, so we leave a full treatment of algorithms which leverage this bound for future work.
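A quick numerical check of the single-state example above (our own snippet; purely illustrative) confirms that the bound is minimized by the uniform stochastic policy:

```python
import numpy as np

# Single state, two actions, two samples each; policy pi(a0|s) = xi.
xi = np.linspace(0.0, 1.0, 10001)
bound = np.sqrt(xi**2 / 2 + (1 - xi)**2 / 2)  # proportional to the state-wise bound
print(xi[np.argmin(bound)])  # prints 0.5: the uniform policy minimizes the bound
```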
B.3 OTHER BOUNDS
There are a few other paths by which this bound can be made tighter still. The above bounds take an extra factor of 1/(1−γ) because we bound the overall return, rather than simply bounding the reward and transition functions. This was done because a bound on the transition function would add a cost of O(√|S|). However, this can be mitigated by intersecting the above confidence interval with a Good–Turing interval, as proposed in Taleghan et al. (2015). Doing so will cause the bound to concentrate much more quickly in MDPs where the transition function is relatively deterministic. We expect this to be the case for most practical MDPs.
Similarly, empirical Bernstein confidence intervals can be used in place of Hoeffding’s, to increase the rate of concentration for low-variance rewards and transitions (Maurer & Pontil, 2009), leading to improved performance in MDPs where these are common.
Finally, we may be able to apply a concentration inequality in a more advanced fashion to compute a value uncertainty function which is not state-wise decomposable: we bound some useful notion of value error directly, rather than computing the Bellman uncertainty function and taking the visitation-weighted sum. This would result in an overall tighter bound on value uncertainty by hedging over data across multiple timesteps. However, in doing so, we would sacrifice the monotonic improvement property needed for convergence of algorithms like policy iteration. This idea has a parallel in the robust MDP literature. The bounds in Appendix B.1 can be seen as constructing an sa-rectangular robust MDP, whereas Appendix B.2 is similar to constructing an s-rectangular robust MDP (Wiesemann et al., 2013). More recently, approaches have been proposed which go beyond s-rectangularity (Goyal & Grand-Clement, 2018), and such approaches likely have natural parallels in implementing value uncertainty functions.
C PROOFS
C.1 PROOF OF OVER/UNDER DECOMPOSITION
Starting from the definition of suboptimality, we see

$$\begin{aligned}
\text{SUBOPT}(\mathcal{O}^{VB}(D)) &= \mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M}] - \mathbb{E}_\rho[v^{\pi^*_D}_\mathcal{M}] \\
&= \mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} + (-v^\pi_D + v^\pi_D) + (-v^{\pi^*_D}_D + v^{\pi^*_D}_D) - v^{\pi^*_D}_\mathcal{M}] && \text{(valid for any } \pi\text{)} \\
&\le \mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_D] + \mathbb{E}_\rho[v^{\pi^*_D}_D - v^{\pi^*_D}_\mathcal{M}] && \text{(using } \mathbb{E}_\rho[v^\pi_D - v^{\pi^*_D}_D] \le 0\text{)}
\end{aligned}$$

Since the above holds for all π,

$$\begin{aligned}
\text{SUBOPT}(\mathcal{O}^{VB}(D)) &\le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_D]\right) + \mathbb{E}_\rho[v^{\pi^*_D}_D - v^{\pi^*_D}_\mathcal{M}] \\
&\le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_D]\right) + \sup_\pi\left(\mathbb{E}_\rho[v^\pi_D - v^\pi_\mathcal{M}]\right) && \text{(using } \pi^*_D \in \Pi\text{)} \\
&= \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho[v^\pi_\mathcal{M} - v^\pi_D]\right) + \sup_\pi\left(\mathbb{E}_\rho[v^\pi_D - v^\pi_\mathcal{M}]\right)
\end{aligned}$$
C.2 TIGHTNESS OF OVER/UNDER DECOMPOSITION
We show that the bound given in Lemma 1 is tight via a simple example.
Proof. Consider a bandit-structured MDP with a single state and two actions, A and B, with rewards of 0 and 1 respectively, which both lead to terminal states.
First, consider the left-hand side. If an FDPE subroutine estimates the value of both arms to be 1, then the policy which always selects arm A is an optimal policy of the corresponding FDPO algorithm. In this case, the suboptimality equals 1, which is the largest possible suboptimality in the environment, and hence the worst case.
On the right-hand side, note that term (A) is 0 when π is the policy that always picks B, while term (B) is 1 when π is the policy that always picks A. Thus, the left-hand and right-hand sides are equal, and the bound is tight.
C.3 RESIDUAL VISITATION LEMMA
We prove a basic lemma showing that the error of any value function is equal to its one-step Bellman residual, summed over its visitation distribution. Though this result is well-known, it is not clearly stated elsewhere in the literature, so we prove it here for clarity.

Lemma 2. For any MDP ξ and policy π, let v^π_ξ be the unique value vector satisfying the Bellman fixed-point equation v^π_ξ = A^π(r_ξ + γP_ξ v^π_ξ), and let v be any other value vector. We have

$$\begin{aligned}
v^\pi_\xi - v &= (I - \gamma A^\pi P_\xi)^{-1}(A^\pi(r_\xi + \gamma P_\xi v) - v) && (1) \\
v - v^\pi_\xi &= (I - \gamma A^\pi P_\xi)^{-1}(v - A^\pi(r_\xi + \gamma P_\xi v)) && (2) \\
|v^\pi_\xi - v| &= (I - \gamma A^\pi P_\xi)^{-1}|A^\pi(r_\xi + \gamma P_\xi v) - v| && (3)
\end{aligned}$$
Proof.

$$\begin{aligned}
A^\pi(r_\xi + \gamma P_\xi v) - v &= A^\pi(r_\xi + \gamma P_\xi v) - v^\pi_\xi + v^\pi_\xi - v \\
&= A^\pi(r_\xi + \gamma P_\xi v) - A^\pi(r_\xi + \gamma P_\xi v^\pi_\xi) + v^\pi_\xi - v \\
&= \gamma A^\pi P_\xi(v - v^\pi_\xi) + (v^\pi_\xi - v) \\
&= (v^\pi_\xi - v) - \gamma A^\pi P_\xi(v^\pi_\xi - v) \\
&= (I - \gamma A^\pi P_\xi)(v^\pi_\xi - v)
\end{aligned}$$

Thus, we see (I − γA^π P_ξ)^{−1}(A^π(r_ξ + γP_ξ v) − v) = v^π_ξ − v. An identical derivation can be completed starting with v − A^π(r_ξ + γP_ξ v), leading to the desired result.
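The identity is also easy to check numerically. The sketch below (our own construction, on a small randomly generated MDP) verifies Equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=S * A)  # (S*A, S) transition matrix
r = rng.uniform(size=S * A)                # expected rewards
pi = rng.dirichlet(np.ones(A), size=S)     # a stochastic policy
A_pi = np.zeros((S, S * A))
for s in range(S):
    A_pi[s, s * A : (s + 1) * A] = pi[s]   # activity matrix of pi

v_pi = np.linalg.solve(np.eye(S) - gamma * A_pi @ P, A_pi @ r)  # Bellman fixed point
v = rng.uniform(size=S)                                         # any other value vector
rhs = np.linalg.solve(np.eye(S) - gamma * A_pi @ P, A_pi @ (r + gamma * P @ v) - v)
assert np.allclose(v_pi - v, rhs)          # Equation (1) holds
```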
C.4 NAÏVE FDPE ERROR BOUND
We show how the error of naı̈ve FDPE algorithms is bounded by the value uncertainty of the state-actions visited by the policy under evaluation. In Section 4, this bound is used to derive a suboptimality guarantee for naı̈ve FDPO.

Lemma 3 (Naı̈ve FDPE error bound). Consider any naı̈ve fixed-dataset policy evaluation algorithm E_naı̈ve. For any policy π and dataset D, denote v^π_D := E_naı̈ve(D, π). Let µ^π_{D,δ} be any value uncertainty function. The following component-wise bound holds with probability at least 1 − δ:
|vπM − vπD| ≤ µπD,δ
Proof. Notice that the naı̈ve fixed-point function is equivalent to the Bellman fixed-point equation for a specific MDP: the empirical MDP defined by 〈S,A, rD, PD, γ, ρ〉. Thus, invoking Lemma 2, for any values v we have
|vπD − v| = (I − γAπPD)−1|Aπ(rD + γPDv)− v|.
Since vπM is a value vector, this immediately implies that
|vπD − vπM| = (I − γAπPD)−1|Aπ(rD + γPDvπM)− vπM|.
Since vπM is the solution to the Bellman consistency fixed-point,
|vπD − vπM| = (I − γAπPD)−1|Aπ(rD + γPDvπM)−Aπ(rM + γPMvπM)|.
Using the definition of a value uncertainty function µ^π_{D,δ}, we arrive at

$$|v^\pi_D - v^\pi_\mathcal{M}| \le \mu^\pi_{D,\delta},$$

completing the proof.
Thus, reducing value uncertainty improves our guarantees on evaluation error. For any fixed policy, value uncertainty can be reduced by reducing the Bellman uncertainty on states visited by that policy. In the tabular setting this means observing more interactions from the state-actions that that policy visits frequently. Conversely, for any fixed dataset, we will have a certain Bellman uncertainty in each state, and policies which mostly visit low-Bellman-uncertainty states can be evaluated with lower error.8
8In the function approximation setting, we do not necessarily need to observe an interaction with a particular state-action to reduce our Bellman uncertainty on it. This is because observing other state-actions may allow us to reduce Bellman uncertainty through generalization. Similarly, the most-certain policy for a fixed dataset may not be the policy for which we have the most data, but rather, the policy which our dataset informs us about the most.
Our bound differs from prior work (Jiang, 2019a; Ghavamzadeh et al., 2016) in that it is significantly more fine-grained. We provide a component-wise bound on error, whereas previous results bound the l∞ norm. Furthermore, our bounds are sensitive to the Bellman uncertainty in each individual reward and transition, rather than only relying on the most-uncertain. As a result, our bound does not require all states to have the same number of samples, and is non-vacuous even in the case where some state-actions have no data.
Our bound can also be viewed as an extension of work on approximate dynamic programming. In that setting, the literature contains fine-grained results on the accumulation of local errors (Munos, 2007). However, those results are typically understood as applying to errors induced by approximation via some limited function class. Our bound can be seen as an application of those ideas to the case where errors are induced by limited observations.
C.5 RELATIVE VALUE UNCERTAINTY
The key to the construction of proximal pessimistic algorithms is the relationship between the value uncertainties of any two policies π, π′, for any state-action-wise decomposable value uncertainty function.

Lemma 4 (Relative value uncertainty). For any two policies π, π′, and any state-action-wise decomposable value uncertainty function µ, we have

$$\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} = (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right)$$
Proof. Firstly, note that since the value uncertainty function is state-action-wise decomposable, we can express it in a Bellman-like form: for some state-wise Bellman uncertainty u^π_{D,δ}, we have µ^π_{D,δ} = (I − γA^π P_D)^{−1} u^π_{D,δ}. Further, u^π_{D,δ} is itself state-action-wise decomposable, so it can be written as u^π_{D,δ} = A^π u_{D,δ}.

We can use this to derive the following relationship.

$$\begin{aligned}
\mu^\pi_{D,\delta} &= (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta} \\
\mu^\pi_{D,\delta} - \gamma A^\pi P_D \mu^\pi_{D,\delta} &= u^\pi_{D,\delta} \\
\mu^\pi_{D,\delta} &= u^\pi_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta}
\end{aligned}$$
Next, we bound the difference between the value uncertainty of π and π′.

$$\begin{aligned}
\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} &= (u^\pi_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta}) - (u^{\pi'}_{D,\delta} + \gamma A^{\pi'} P_D \mu^{\pi'}_{D,\delta}) \\
&= (u^\pi_{D,\delta} - u^{\pi'}_{D,\delta}) + \gamma A^\pi P_D \mu^\pi_{D,\delta} - \gamma A^{\pi'} P_D \mu^{\pi'}_{D,\delta} \\
&= (A^\pi - A^{\pi'}) u_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta} - \gamma (A^{\pi'} - A^\pi + A^\pi) P_D \mu^{\pi'}_{D,\delta} \\
&= \gamma A^\pi P_D \left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) + (A^\pi - A^{\pi'}) u_{D,\delta} + \gamma (A^\pi - A^{\pi'}) P_D \mu^{\pi'}_{D,\delta} \\
&= \gamma A^\pi P_D \left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) + (A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})
\end{aligned}$$

Rearranging, (I − γA^π P_D)(µ^π_{D,δ} − µ^{π'}_{D,δ}) = (A^π − A^{π'})(u_{D,δ} + γP_D µ^{π'}_{D,δ}). Left-multiplying by (I − γA^π P_D)^{−1}, we arrive at the desired result.
C.6 NAÏVE FDPE RELATIVE ERROR BOUND
Lemma 5 (Naı̈ve FDPE relative error bound). Consider any naı̈ve fixed-dataset policy evaluation algorithm E_naı̈ve. For any policy π and dataset D, denote v^π_D := E_naı̈ve(D, π), and let µ be any state-action-wise decomposable value uncertainty function. Then, for any other policy π′, the following bound holds with probability at least 1 − δ:

$$|v^\pi_D - v^\pi_\mathcal{M}| \le \mu^\pi_{D,\delta} \le \mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\text{TV}_\mathcal{S}(\pi, \pi')$$
Proof. The goal of this section is to construct an error bound for which we can optimize π without needing to compute any uncertainties. To do this, we must replace this quantity with a looser upper bound.

First, consider a state-action-wise decomposable value uncertainty function µ^π_{D,δ}. We have µ^π_{D,δ} = (I − γA^π P_D)^{−1} u^π_{D,δ} for some u^π_{D,δ}, where u^π_{D,δ} = A^π u_{D,δ} for some u_{D,δ}.

Note that a state-action-wise Bellman uncertainty can be trivially bounded as u_{D,δ} ≤ (1/(1−γ)) 1̈. Also, since (I − γA^π P_D)^{−1} ≤ 1/(1−γ), any state-action-wise decomposable value uncertainty can be trivially bounded as µ^π_{D,δ} ≤ (1/(1−γ)²) 1̇.
We now invoke Lemma 4. We then substitute the above bounds into the second term, after ensuring that all coefficients are positive.
$$\begin{aligned}
\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} &= (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right) \\
&\le (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+ (u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right) \\
&\le (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+ \left(\frac{1}{1-\gamma}\ddot{1} + \gamma P_D \frac{1}{(1-\gamma)^2}\dot{1}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+ \left(\frac{1}{1-\gamma}\ddot{1} + \frac{\gamma}{(1-\gamma)^2}\ddot{1}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)|A^\pi - A^{\pi'}|_+ \ddot{1} \\
&= (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\text{TV}_\mathcal{S}(\pi, \pi')
\end{aligned}$$

The third-to-last step follows from the fact that P_D is stochastic. The final step follows from the fact that the positive and negative components of the state-wise difference between policies must be symmetric, so |A^π − A^{π'}|_+ 1̈ is precisely equivalent to the state-wise total variation distance, TV_S(π, π′). Thus, we have

$$\mu^\pi_{D,\delta} \le \mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\text{TV}_\mathcal{S}(\pi, \pi').$$
Finally, invoking Lemma 3, we arrive at the desired result.
C.7 SUBOPTIMALITY OF UNCERTAINTY-AWARE PESSIMISTIC FDPO ALGORITHMS
Let v^π_D := E_ua(D, π). From the definition of the UA family, we have the fixed-point property v^π_D = A^π(r_D + γP_D v^π_D) − αu^π_{D,δ}, and the standard geometric series rearrangement yields:

$$\begin{aligned}
v^\pi_D &= (I - \gamma A^\pi P_D)^{-1}\left(A^\pi r_D - \alpha u^\pi_{D,\delta}\right) \\
&= (I - \gamma A^\pi P_D)^{-1} A^\pi r_D - (I - \gamma A^\pi P_D)^{-1} \alpha u^\pi_{D,\delta} \\
&= E_\text{naı̈ve}(D, \pi) - \alpha \mu^\pi_{D,\delta}
\end{aligned}$$

We now use this to bound the overestimation and underestimation error of E_ua(D, π) by invoking Lemma 3, which holds with probability at least 1 − δ. First, for underestimation, we see:

$$\begin{aligned}
v^\pi_\mathcal{M} - v^\pi_D &= v^\pi_\mathcal{M} - \left(E_\text{naı̈ve}(D, \pi) - \alpha \mu^\pi_{D,\delta}\right) \\
&= \left(v^\pi_\mathcal{M} - E_\text{naı̈ve}(D, \pi)\right) + \alpha \mu^\pi_{D,\delta} \le \mu^\pi_{D,\delta} + \alpha \mu^\pi_{D,\delta} = (1 + \alpha)\mu^\pi_{D,\delta}
\end{aligned}$$

and thus, v^π_M − v^π_D ≤ (1 + α)µ^π_{D,δ}. Next, for overestimation, we see:

$$\begin{aligned}
v^\pi_D - v^\pi_\mathcal{M} &= \left(E_\text{naı̈ve}(D, \pi) - \alpha \mu^\pi_{D,\delta}\right) - v^\pi_\mathcal{M} \\
&= \left(E_\text{naı̈ve}(D, \pi) - v^\pi_\mathcal{M}\right) - \alpha \mu^\pi_{D,\delta} \le \mu^\pi_{D,\delta} - \alpha \mu^\pi_{D,\delta} = (1 - \alpha)\mu^\pi_{D,\delta}
\end{aligned}$$

and thus, v^π_D − v^π_M ≤ (1 − α)µ^π_{D,δ}. Substituting these bounds into Lemma 1 gives the desired result.
C.8 SUBOPTIMALITY OF PROXIMAL PESSIMISTIC FDPO ALGORITHMS
Let v^π_D := E_proximal(D, π). From the definition of the proximal family, we have the fixed-point property

$$v^\pi_D = A^\pi(r_D + \gamma P_D v^\pi_D) - \alpha\left(\frac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)$$

and the standard geometric series rearrangement yields

$$\begin{aligned}
v^\pi_D &= (I - \gamma A^\pi P_D)^{-1}\left(A^\pi r_D - \alpha\left(\frac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1} A^\pi r_D - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\frac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right) \\
&= E_\text{naı̈ve}(D, \pi) - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\frac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)
\end{aligned}$$

Next, we define a new family of FDPE algorithms,

$$E_\text{proximal-full}(D, \pi) := E_\text{naı̈ve}(D, \pi) - \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right),$$

where µ is a state-action-wise decomposable value uncertainty function and π̂_D is the empirical policy.
We use Lemma 3, which holds with probability at least 1 − δ, to bound the overestimation and underestimation. First, the underestimation:

$$\begin{aligned}
v^\pi_\mathcal{M} - E_\text{proximal-full}(D, \pi) &= v^\pi_\mathcal{M} - \left(E_\text{naı̈ve}(D, \pi) - \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right)\right) \\
&= \left(v^\pi_\mathcal{M} - E_\text{naı̈ve}(D, \pi)\right) + \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right) \\
&\le \mu^\pi_{D,\delta} + \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right)
\end{aligned}$$

Next, we analogously bound the overestimation:

$$\begin{aligned}
E_\text{proximal-full}(D, \pi) - v^\pi_\mathcal{M} &= \left(E_\text{naı̈ve}(D, \pi) - \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right)\right) - v^\pi_\mathcal{M} \\
&= \left(E_\text{naı̈ve}(D, \pi) - v^\pi_\mathcal{M}\right) - \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right) \\
&\le \mu^\pi_{D,\delta} - \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right)
\end{aligned}$$

We can now invoke Lemma 1 to bound the suboptimality of any value-based FDPO algorithm which uses E_proximal-full, which we denote O^VB_proximal-full. Crucially, note that since αµ^{π̂_D}_{D,δ} is not dependent on π, it can be removed from the infimum and supremum terms, and cancels. Substituting and rearranging, we see that with probability at least 1 − δ,

$$\begin{aligned}
\text{SUBOPT}(\mathcal{O}^{VB}_\text{proximal-full}(D)) \le &\inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho\left[\mu^\pi_{D,\delta} + \alpha(I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right]\right) \\
&+ \sup_\pi\left(\mathbb{E}_\rho\left[\mu^\pi_{D,\delta} - \alpha(I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right]\right)
\end{aligned}$$

Finally, we see that FDPO algorithms which use E_proximal-full as their subroutine will return the same policy as FDPO algorithms which use E_proximal. First, we once again use the property that µ^{π̂_D}_{D,δ} is not dependent on π. Second, we note that since the total visitation of every policy sums to 1/(1−γ), we know E_ρ[(I − γA^π P_D)^{−1}(1̇/(1−γ))] = 1/(1−γ)² for all π, and thus it is also not dependent on π.

$$\begin{aligned}
\arg\max_\pi \mathbb{E}_\rho[E_\text{proximal-full}(D, \pi)] &= \arg\max_\pi \mathbb{E}_\rho\left[E_\text{naı̈ve}(D, \pi) - \alpha\left(\mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right)\right] \\
&= \arg\max_\pi \mathbb{E}_\rho\left[E_\text{naı̈ve}(D, \pi) - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right] \\
&= \arg\max_\pi \mathbb{E}_\rho[E_\text{proximal}(D, \pi)]
\end{aligned}$$

Thus, the suboptimality of arg max_π E_ρ[E_proximal(D, π)] must be equivalent to that of arg max_π E_ρ[E_proximal-full(D, π)], leading to the desired result.
C.9 STATE-WISE PROXIMAL PESSIMISTIC ALGORITHMS
In the main body of the work, we focus on proximal pessimistic algorithms which are based on a state-conditional density model, since this approach is much more common in the literature. However, it is also possible to derive proximal pessimistic algorithms which use state-action densities, which have essentially the same properties. In this section we briefly provide the main results.
In this section, we use d^π := (1 − γ)(I − γA^π P_D)^{−1} to indicate the state visitation distribution of any policy π.

Definition 7. A state-action-density proximal pessimistic algorithm with pessimism hyperparameter α ∈ [0, 1] is any algorithm in the family defined by the fixed-point function

$$f_\text{sad-proximal}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha\left(\frac{|d^\pi - d^{\hat{\pi}_D}|}{(1-\gamma)^2}\right)$$

Theorem 4 (State-action-density proximal pessimistic FDPO suboptimality bound). Consider any state-action-density proximal pessimistic value-based fixed-dataset policy optimization algorithm O^VB_sad-proximal. Let µ be any state-action-wise decomposable value uncertainty function, and α ∈ [0, 1] be a pessimism hyperparameter. For any dataset D, the suboptimality of O^VB_sad-proximal is bounded with probability at least 1 − δ by

$$\text{SUBOPT}(\mathcal{O}_\text{sad-proximal}(D)) \le 2\,\mathbb{E}_\rho[\mu^{\hat{\pi}_D}_{D,\delta}] + \inf_\pi\Big(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + (1 + \alpha)\cdot\mathbb{E}_\rho\Big[(I - \gamma A^\pi P_D)^{-1}\Big(|d^\pi - d^{\hat{\pi}_D}| +$$
| 1. What is the main contribution of the paper regarding reinforcement learning algorithms?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any questions or suggestions regarding the paper's organization and writing style? | Review | Review
Summary:
The paper proposes a theoretical framework for analyzing the error of reinforcement learning algorithms in a fixed dataset policy optimization (FDPO) setting. In such settings, data has been collected by a single policy that may not be optimal and the learner puts together a model or value function that will have explicit or implicit uncertainty in areas where the data is not dense enough. The authors provide bounds connecting the uncertainty to the loss. They then show that explicitly pessimistic algorithms that fill in the uncertainty with the worst case can minimize the worst case error. Similarly, proximal algorithms that attempt to adhere to the collection policy (as often the case in model-free batch RL) have improved error compared to a naive approach but not as good as an explicitly pessimistic approach.
Review:
The paper provides a general description of the pessimism performance bounds. The theorems appear to be correct and the reasoning sound. I also like the connection to the proximal approach, which is how most model-free batch RL algorithms approach the problem (by sampling close to the collection policy).
However, the paper does need some improvement. Specifically, a connection should be made to more existing literature on pessimism in safe, batch, or apprenticeship RL. In addition, the paper spends a lot of time on definitions and notation that are not explicitly used while the most interesting empirical results are relegated to the appendix, which seems backwards.
On the connections to the literature, the idea of using pessimism in situations where you are learning from a dataset collected by a non-optimal teacher has been investigated in previous works in apprenticeship RL: http://proceedings.mlr.press/v125/cohen20a/cohen20a.pdf or https://papers.nips.cc/paper/4240-blending-autonomous-exploration-and-apprenticeship-learning.pdf
Specifically, the first (Bayesian) paper explicitly reasons about the worst of all possible worlds mentioned in the current submission and seems to have a lot of overlap in the theory. Can the authors distinguish their results from Cohen et al.? The second paper is an example where model-learning agents keep track of the uncertainty in their learned transition and reward functions and use pessimism to fill in uncertainty. So the idea here is not quite new and better connections to this literature need to be made.
The other issue with the paper is its organization and writing. The theoretical results, while general, are not particularly complicated and don’t seem to warrant the amount of notation and definitions on pages 1-3. Specifically, the bandit example isn’t really mentioned in the paper but the figure takes up a lot of valuable space. Over a full page is used to define basic MDP and dataset terms that are widely known and commonly used. The footnotes are whole paragraphs that seem to be just asides. Finally, the grid world results are presented in a figure without any real associated text except for some generalities about what algorithms worked well. Meanwhile, the most interesting and novel contributions of the paper, including the concrete algorithms for applying pessimistic learning, and the empirical analysis on Atari games, are stashed in the (very long) appendix. I strongly suggest the authors reorganize the paper to highlight these strengths instead of notation and footnotes that are tangential to the paper.
ICLR | Title
The Importance of Pessimism in Fixed-Dataset Policy Optimization
Abstract
We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. Our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. This analysis reveals that for naı̈ve approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. To avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. These theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four MinAtar environments.
1 INTRODUCTION
We consider fixed-dataset policy optimization (FDPO), in which a dataset of transitions from an environment is used to find a policy with high return.1 We compare FDPO algorithms by their worst-case performance, expressed as high-probability guarantees on the suboptimality of the learned policy. It is perhaps obvious that in order to maximize worst-case performance, a good FDPO algorithm should select a policy with high worst-case value. We call this the pessimism principle of exploitation, as it is analogous to the widely-known optimism principle (Lattimore & Szepesvári, 2020) of exploration.2
Our main contribution is a theoretical justification of the pessimism principle in FDPO, based on a bound that characterizes the suboptimality incurred by an FDPO algorithm. We further demonstrate how this bound may be used to derive principled algorithms. Note that the core novelty of our work is not the idea of pessimism, which is an intuitive concept that appears in a variety of contexts; rather, our contribution is a set of theoretical results rigorously explaining how pessimism is important in the specific setting of FDPO. An example conveying the intuition behind our results can be found in Appendix G.1.
We first analyze a family of non-pessimistic naı̈ve FDPO algorithms, which estimate the environment from the dataset via maximum likelihood and then apply standard dynamic programming techniques. We prove a bound which shows that the worst-case suboptimality of these algorithms is guaranteed to be small when the dataset contains enough data that we are certain about the value of every possible policy. This is caused by the outsized impact of value overestimation errors on suboptimality, sometimes called the optimizer’s curse (Smith & Winkler, 2006). It is a fundamental consequence of ignoring the disconnect between the true environment and the picture painted by our limited observations. Importantly, it is not reliant on errors introduced by function approximation.
1We use the term fixed-dataset policy optimization to emphasize the computational procedure; this setting has also been referred to as batch RL (Ernst et al., 2005; Lange et al., 2012) and more recently, offline RL (Levine et al., 2020). We emphasize that this is a well-studied setting, and we are simply choosing to refer to it by a more descriptive name.
2The optimism principle states that we should select a policy with high best-case value.
We contrast these findings with an analysis of pessimistic FDPO algorithms, which select a policy that maximizes some notion of worst-case expected return. We show that these algorithms do not require datasets which inform us about the value of every policy to achieve small suboptimality, due to the critical role that pessimism plays in preventing overestimation. Our analysis naturally leads to two families of principled pessimistic FDPO algorithms. We prove their improved suboptimality guarantees, and confirm our claims with experiments on a gridworld.
Finally, we extend one of our pessimistic algorithms to the deep learning setting. Recently, several deep-learning-based algorithms for fixed-dataset policy optimization have been proposed (Agarwal et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Laroche et al., 2019; Jaques et al., 2019; Kidambi et al., 2020; Yu et al., 2020; Wu et al., 2019; Wang et al., 2020; Kumar et al., 2020; Liu et al., 2020). Our work is complementary to these results, as our contributions are conceptual, rather than algorithmic. Our primary goal is to theoretically unify existing approaches and motivate the design of pessimistic algorithms more broadly. Using experiments in the MinAtar game suite (Young & Tian, 2019), we provide empirical validation for the predictions of our analysis.
The problem of fixed-dataset policy optimization is closely related to the problem of reinforcement learning, and as such, there is a large body of work which contains ideas related to those discussed in this paper. We discuss these works in detail in Appendix E.
2 BACKGROUND
We anticipate most readers will be familiar with the concepts and notation, which is fairly standard in the reinforcement learning literature. In the interest of space, we relegate a full presentation to Appendix A. Here, we briefly give an informal overview of the background necessary to understand the main results.
We represent the environment as a Markov Decision Process (MDP), denoted M := 〈S,A,R, P, γ, ρ〉. We assume without loss of generality that R(〈s, a〉) ∈ [0, 1], and denote its expectation as r(〈s, a〉). ρ represents the start-state distribution. Policies π can act in the environment, represented by action matrix Aπ , which maps each state to the probability of each state-action when following π. Value functions v assign some real value to each state. We use vπM to denote the value function which assigns the sum of discounted rewards in the environment when following policy π. A dataset D contains transitions sampled from the environment. From a dataset, we can compute the empirical reward and transition functions, rD and PD, and the empirical policy, π̂D.
An important concept for our analysis is the value uncertainty function, denoted µπD,δ , which returns a high-probability upper-bound to the error of a value function derived from datasetD. Certain value uncertainty functions are decomposable by states or state-actions, meaning they can be written as the weighted sum of more local uncertainties. See Appendix B for more detail.
Our goal is to analyze the suboptimality of a specific class of FDPO algorithms, called value-based FDPO algorithms, which have a straightforward structure: they use a fixed-dataset policy evaluation (FDPE) algorithm to assign a value to each policy, and then select the policy with the maximum value. Furthermore, we consider FDPE algorithms whose solutions satisfy a fixed-point equation. Thus, a fixed-point equation defines an FDPE objective, which in turn defines a value-based FDPO objective; we call the set of all algorithms that implement these objectives the family of algorithms defined by the fixed-point equation.
3 OVER/UNDER DECOMPOSITION OF SUBOPTIMALITY
Our first theoretical contribution is a simple but informative bound on the suboptimality of any value-based FDPO algorithm. Next, in Section 4, we make this concrete by defining the family of naı̈ve algorithms and invoking this bound. This bound is insightful because it distinguishes the impact of errors of value overestimation from errors of value underestimation, defined as: Definition 1. Consider any fixed-dataset policy evaluation algorithm E on any dataset D and any policy π. Denote vπD := E(D,π). We define the underestimation error as Eρ[vπM − vπD] and overestimation error as Eρ[vπD − vπM].
The following lemma shows how these quantities can be used to bound suboptimality.
Lemma 1 (Value-based FDPO suboptimality bound). Consider any value-based fixed-dataset policy optimization algorithmOVB, with fixed-dataset policy evaluation subroutine E . For any policy π and dataset D, denote vπD := E(D,π). The suboptimality of OVB is bounded by
$$\text{SUBOPT}(\mathcal{O}^{VB}(D)) \le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho[v^\pi_\mathcal{M} - v^\pi_D]\right) + \sup_\pi\left(\mathbb{E}_\rho[v^\pi_D - v^\pi_\mathcal{M}]\right)$$

Proof. See Appendix C.1.
This bound is tight; see Appendix C.2. The bound highlights the potentially outsized impact of overestimation on the suboptimality of an FDPO algorithm. To see this, we consider each of its terms in isolation:
$$\text{SUBOPT}(\mathcal{O}^{VB}(D)) \le \underbrace{\inf_\pi\Big(\overbrace{\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}]}^{(A1)} + \overbrace{\mathbb{E}_\rho[v^\pi_\mathcal{M} - v^\pi_D]}^{(A2)}\Big)}_{(A)} + \underbrace{\sup_\pi\Big(\overbrace{\mathbb{E}_\rho[v^\pi_D - v^\pi_\mathcal{M}]}^{(B1)}\Big)}_{(B)}$$
The term labeled (A) reflects the degree to which the dataset informs us of a near-optimal policy. For any policy π, (A1) captures the suboptimality of that policy, and (A2) captures its underestimation error. Since (A) takes an infimum, this term will be small whenever there is at least one reasonable policy whose value is not very underestimated.
On the other hand, the term labeled (B) corresponds to the largest overestimation error on any policy. Because it consists of a supremum over all policies, it will be small only when no policies are overestimated at all. Even a single overestimation can lead to significant suboptimality.
We see from these two terms that errors of overestimation and underestimation have differing impacts on suboptimality, suggesting that algorithms should be designed with this asymmetry in mind. We will see in Section 5 how this may be done. But first, let us further understand why this is necessary by studying in more depth a family of algorithms which treats its errors of overestimation and underestimation equivalently.
4 NAÏVE ALGORITHMS
The goal of this section is to paint a high-level picture of the worst-case suboptimality guarantees of a specific family of non-pessimistic approaches, which we call naı̈ve FDPO algorithms. Informally, the naı̈ve approach is to take the limited dataset of observations at face value, treating it as though it paints a fully accurate picture of the environment. Naı̈ve algorithms construct a maximum-likelihood MDP from the dataset, then use standard dynamic programming approaches on this empirical MDP. Definition 2. A naı̈ve algorithm is any algorithm in the family defined by the fixed-point function
$$f_\text{naı̈ve}(v^\pi) := A^\pi(r_D + \gamma P_D v^\pi).$$
Various FDPE and FDPO algorithms from this family could be described; in this work, we do not study these implementations in detail, although we do give pseudocode for some implementations in Appendix D.1.
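For reference, one concrete member of this family is Q-iteration on the empirical MDP. The sketch below is a minimal tabular version under assumptions of our own (flat state-action indexing s * n_actions + a, and some default completion of r_D, P_D at unvisited state-actions); it is not the exact pseudocode of Appendix D.1:

```python
import numpy as np

def naive_fdpo(r_D, P_D, gamma, n_states, n_actions, iters=1000):
    """Naive value-based FDPO: value iteration on the empirical MDP
    <S, A, r_D, P_D, gamma, rho>, followed by greedy policy extraction.

    r_D : (S*A,)    empirical expected rewards
    P_D : (S*A, S)  empirical transition matrix
    """
    q = np.zeros(n_states * n_actions)
    for _ in range(iters):
        v = q.reshape(n_states, n_actions).max(axis=1)  # greedy state values
        q = r_D + gamma * P_D @ v                       # empirical Bellman backup
    return q.reshape(n_states, n_actions).argmax(axis=1)  # greedy policy
```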
One example of a naı̈ve FDPO algorithm which can be found in the literature is certainty equivalence (Jiang, 2019a). The core ideas behind naı̈ve algorithms can also be found in the function approximation literature, for example in FQI (Ernst et al., 2005; Jiang, 2019b). Additionally, when available data is held fixed, nearly all existing deep reinforcement learning algorithms are transformed into naı̈ve value-based FDPO algorithms. For example, DQN (Mnih et al., 2015) with a fixed replay buffer is a naı̈ve value-based FDPO algorithm.

Theorem 1 (Naı̈ve FDPO suboptimality bound). Consider any naı̈ve value-based fixed-dataset policy optimization algorithm O^VB_naı̈ve. Let µ be any value uncertainty function. The suboptimality of O^VB_naı̈ve is bounded with probability at least 1 − δ by

$$\text{SUBOPT}(\mathcal{O}^{VB}_\text{naı̈ve}(D)) \le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho[\mu^\pi_{D,\delta}]\right) + \sup_\pi \mathbb{E}_\rho[\mu^\pi_{D,\delta}]$$
Proof. This result follows directly from Lemma 1 and Lemma 3.
The infimum term is small whenever there is some reasonably good policy with low value uncertainty. In practice, this condition can typically be satisfied, for example by including expert demonstrations in the dataset. On the other hand, the supremum term will only be small if we have low value uncertainty for all policies – a much more challenging requirement. This explains the behavior of pathological examples, e.g. in Appendix G.1, where performance is poor despite access to virtually unlimited amounts of data from a near-optimal policy. Such a dataset ensures that the first term will be small by reducing value uncertainty of the near-optimal data collection policy, but does little to reduce the value uncertainty of any other policy, leading the second term to be large.
However, although pathological examples exist, it is clear that this bound will not be tight on all environments. It is reasonable to ask: is it likely that this bound will be tight on real-world examples? We argue that it likely will be. We identify two properties that most real-world tasks share: (1) The set of policies is pyramidal: there are an enormous number of bad policies, many mediocre policies, a few good policies, etc. (2) Due to the size of the state space and cost of data collection, most policies have high value uncertainty.
Given that these assumptions hold, naı̈ve algorithms will perform as poorly on most real-world environments as they do on pathological examples. Consider: there are many more policies than there is data, so there will be many policies with high value uncertainty; naı̈ve algorithms will likely overestimate several of these policies, and erroneously select one; since good policies are rare, the selected policy will likely be bad. It follows that running naı̈ve algorithms on real-world problems will typically yield suboptimality close to our worst-case bound. And, indeed, on deep RL benchmarks, which are selected due to their similarity to real-world settings, overestimation has been widely observed, typically correlated with poor performance (Bellemare et al., 2016; Van Hasselt et al., 2016; Fujimoto et al., 2019).
5 THE PESSIMISM PRINCIPLE
“Behave as though the world was plausibly worse than you observed it to be.” The pessimism principle tells us how to exploit our current knowledge to find the stationary policy with the best worst-case guarantee on expected return. We consider two specific families of pessimistic algorithms, the uncertainty-aware pessimistic algorithms and proximal pessimistic algorithms, and bound the worst-case suboptimality of each. These algorithms each include a hyperparameter, α, controlling the amount of pessimism, interpolating from fully-naı̈ve to fully-pessimistic. (For a discussion of the implications of the latter extreme, see Appendix G.2.) Then, we will compare the two families, and see how the proximal family is simply a trivial special case of the more general uncertainty-aware family of methods.
5.1 UNCERTAINTY-AWARE PESSIMISTIC ALGORITHMS
Our first family of pessimistic algorithms is the uncertainty-aware (UA) pessimistic algorithms. As the name suggests, this family of algorithms estimates the state-wise Bellman uncertainty and penalizes policies accordingly, leading to a pessimistic value estimate and a preference for policies with low value uncertainty.

Definition 3. An uncertainty-aware pessimistic algorithm, with a Bellman uncertainty function u^π_{D,δ} and pessimism hyperparameter α ∈ [0, 1], is any algorithm in the family defined by the fixed-point function

$$f_\text{ua}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha u^\pi_{D,\delta}$$

This fixed-point function is simply the naı̈ve fixed-point function penalized by the Bellman uncertainty. This can be interpreted as being pessimistic about the outcome of every action. Note that it remains to specify a technique to compute the Bellman uncertainty function, e.g. Appendix B.1, in order to get a concrete algorithm. It is straightforward to construct algorithms from this family by modifying naı̈ve algorithms to subtract the penalty term, as in the sketch below. Similar algorithms have been explored in the safe RL literature (Ghavamzadeh et al., 2016; Laroche et al., 2019) and the robust MDP literature (Givan et al., 1997), where algorithms with high-probability performance guarantees are useful in the context of ensuring safety.
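As an illustration, the sketch below (our own; names and layout are assumptions) modifies the naı̈ve value iteration sketched in Section 4 by subtracting a state-action-wise penalty α·u, e.g. computed as in Appendix B.1. For brevity it searches only over deterministic greedy policies, even though, as discussed in Section 7, the optimal pessimistic policy may be stochastic:

```python
import numpy as np

def ua_pessimistic_fdpo(r_D, P_D, u_sa, gamma, n_states, n_actions,
                        alpha=1.0, iters=1000):
    """Uncertainty-aware pessimistic value iteration (a sketch): every
    empirical Bellman backup is penalized by alpha * u_sa, where u_sa is
    a state-action-wise Bellman uncertainty (e.g., from Appendix B.1)."""
    q = np.zeros(n_states * n_actions)
    for _ in range(iters):
        v = q.reshape(n_states, n_actions).max(axis=1)
        q = r_D - alpha * u_sa + gamma * P_D @ v  # penalized backup
    return q.reshape(n_states, n_actions).argmax(axis=1)
```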
Theorem 2 (Uncertainty-aware pessimistic FDPO suboptimality bound). Consider an uncertainty-aware pessimistic value-based fixed-dataset policy optimization algorithm O^VB_ua. Let u^π_{D,δ} be any Bellman uncertainty function, µ^π_{D,δ} be a corresponding value uncertainty function, and α ∈ [0, 1] be any pessimism hyperparameter. The suboptimality of O^VB_ua is bounded with probability at least 1 − δ by
$$\text{SUBOPT}(\mathcal{O}^{VB}_\text{ua}(D)) \le \inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + (1 + \alpha)\cdot\mathbb{E}_\rho[\mu^\pi_{D,\delta}]\right) + (1 - \alpha)\cdot\left(\sup_\pi \mathbb{E}_\rho[\mu^\pi_{D,\delta}]\right)$$
Proof. See Appendix C.7.
This bound should be contrasted with our result from Theorem 1. With α = 0, the family of pessimistic algorithms reduces to the family of naı̈ve algorithms, so the bound is correspondingly identical. We can add pessimism by increasing α, and this corresponds to a decrease in the magnitude of the supremum term. When α = 1, there is no supremum term at all. In general, the optimal value of α lies between the two extremes.
To further understand the power of this approach, it is illustrative to compare it to imitation learning. Consider the case where the dataset contains a small number of expert trajectories but also a large number of interactions from a random policy, i.e. when learning from suboptimal demonstrations (Brown et al., 2019). If the dataset contained only a small amount of expert data, then both an UA pessimistic FDPO algorithm and an imitation learning algorithm would return a high-value policy. However, the injection of sufficiently many random interactions would degrade the performance of imitation learning algorithms, whereas UA pessimistic algorithms would continue to behave similarly to the expert data.
5.2 PROXIMAL PESSIMISTIC ALGORITHMS
The next family of algorithms we study are the proximal pessimistic algorithms, which implement pessimism by penalizing policies that deviate from the empirical policy. The name proximal was chosen to reflect the idea that these algorithms prefer policies which stay “nearby” to the empirical policy. Many FDPO algorithms in the literature, and in particular several recently-proposed deep learning algorithms (Fujimoto et al., 2019; Kumar et al., 2019; Laroche et al., 2019; Jaques et al., 2019; Wu et al., 2019; Liu et al., 2020), resemble members of the family of proximal pessimistic algorithms; see Appendix E. Also, another variant of the proximal pessimistic family, which uses state density instead of state-conditional action density, can be found in Appendix C.9.
Definition 4. A proximal pessimistic algorithm with pessimism hyperparameter α ∈ [0, 1] is any algorithm in the family defined by the fixed-point function

$$f_\text{proximal}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha\left(\frac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)$$

Theorem 3 (Proximal pessimistic FDPO suboptimality bound). Consider any proximal pessimistic value-based fixed-dataset policy optimization algorithm O^VB_proximal. Let µ be any state-action-wise decomposable value uncertainty function, and α ∈ [0, 1] be a pessimism hyperparameter. For any dataset D, the suboptimality of O^VB_proximal is bounded with probability at least 1 − δ by

$$\begin{aligned}
\text{SUBOPT}(\mathcal{O}_\text{proximal}(D)) \le &\inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho\left[\mu^\pi_{D,\delta} + \alpha(I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right]\right) \\
&+ \sup_\pi\left(\mathbb{E}_\rho\left[\mu^\pi_{D,\delta} - \alpha(I - \gamma A^\pi P_D)^{-1}\left(\tfrac{\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)}{(1-\gamma)^2}\right)\right]\right)
\end{aligned}$$

Proof. See Appendix C.8.
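The proximal penalty itself is cheap to compute from dataset counts. The sketch below is our own illustration (the uniform fallback at unvisited states is an assumption, not part of the definition); it forms the empirical policy and the state-wise total variation term used in Definition 4:

```python
import numpy as np

def proximal_penalty(pi, counts_sa, gamma, alpha):
    """Compute the state-wise penalty alpha * TV_S(pi, hat-pi_D) / (1-gamma)^2.

    pi        : (S, A) policy being evaluated
    counts_sa : (S, A) dataset visit counts, defining the empirical policy
    """
    counts_sa = counts_sa.astype(float)
    counts_s = counts_sa.sum(axis=1, keepdims=True)
    uniform = np.full_like(counts_sa, 1.0 / counts_sa.shape[1])
    pi_hat = np.divide(counts_sa, counts_s, out=uniform, where=counts_s > 0)
    tv = 0.5 * np.abs(pi - pi_hat).sum(axis=1)  # state-wise total variation
    return alpha * tv / (1.0 - gamma) ** 2
```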
Once again, we see that as α grows, the large supremum term shrinks; similarly, by Lemma 5, when we have α = 1, the supremum term is guaranteed to be non-positive.3 The primary limitation of
3 Initially, it will contain µ^{π̂_D}_{D,δ}, but this can be removed since it is not dependent on π.
the proximal approach is the looseness of the value lower-bound. Intuitively, this algorithm can be understood as performing imitation learning, but permitting minor deviations. Constraining the policy to be near in distribution to the empirical policy can fail to take advantage of highly-visited states which are reached via many trajectories. In fact, in contrast to both the naı̈ve approach and the UA pessimistic approach, in the limit of infinite data this approach is not guaranteed to converge to the optimal policy. Also, note that when α ≥ 1− γ, this algorithm is identical to imitation learning.
5.3 THE RELATIONSHIP BETWEEN UNCERTAINTY-AWARE AND PROXIMAL ALGORITHMS
Though these two families may appear on the surface to be quite different, they are in fact closely related. A key insight of our theoretical work is that it reveals the important connection between these two approaches. Concretely: proximal algorithms are uncertainty-aware algorithms which use a trivial value uncertainty function.
To see this, we show how to convert an uncertainty-aware penalty into a proximal penalty. Let µ be any state-action-wise decomposable value uncertainty function. For any dataset D, we have

$$\begin{aligned}
\mu^\pi_{D,\delta} &= \mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\hat{\pi}_D})(u_{D,\delta} + \gamma P_D \mu^{\hat{\pi}_D}_{D,\delta})\right) && \text{(Lemma 4)} \\
&\le \mu^{\hat{\pi}_D}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\text{TV}_\mathcal{S}(\pi, \hat{\pi}_D)\left(\frac{1}{(1-\gamma)^2}\right)\right). && \text{(Lemma 5)}
\end{aligned}$$
We began with the uncertainty penalty. In the first step, we rewrote the uncertainty for π as the sum of two terms: the uncertainty for π̂_D, and the difference in uncertainty between π and π̂_D on various actions. In the second step, we chose our state-action-wise Bellman uncertainty to be 1/(1−γ), which is a trivial upper bound; we also upper-bounded the signed policy difference with the total variation. This results in the proximal penalty.4
Thus, we see that proximal penalties are equivalent to uncertainty-aware penalties which use a specific, trivial uncertainty function. This result suggests that uncertainty-aware algorithms are strictly better than their proximal counterparts. There is no looseness in this result: for any proximal penalty, we will always be able to find a tighter uncertainty-aware penalty by replacing the trivial uncertainty function with something tighter.
However, currently, proximal algorithms are quite useful in the context of deep learning. This is because the only uncertainty function that can currently be implemented for neural networks is the trivial uncertainty function. Until we discover how to compute uncertainties for neural networks, proximal pessimistic algorithms will remain the only theoretically-motivated family of algorithms.
6 EXPERIMENTS
We implement algorithms from each family to empirically investigate whether their performance follows the predictions of our bounds. Below, we summarize the key predictions of our theory.
• Imitation. This algorithm simply learns to copy the empirical policy. It performs well if and only if the data collection policy performs well.
• Naı̈ve. This algorithm performs well only when almost no policies have high value uncertainty. This means that when the data is collected from any mostly-deterministic policy, performance of this algorithm will be poor, since many states will be missing data. Stochastic data collection improves performance. As the size of the dataset grows, this algorithm approaches optimality.
• Uncertainty-aware. This algorithm performs well when there is data on states visited by near-optimal policies. This is the case when a small amount of data has been collected from a near-optimal policy, or a large amount of data has been collected from a worse policy. As the size of the dataset grows, this algorithm approaches optimality. This approach outperforms all other approaches.
4When constructing the penalty, we can ignore the first term, which does not contain π, and so is irrelevant to optimization.
• Proximal. This algorithm roughly mirrors the performance of the imitation approach, but improves upon it. As the size of the dataset grows, this algorithm does not approach optimality, as the penalty persists even when the environment’s dynamics are perfectly captured by the dataset.
Our experimental results qualitatively align with our predictions in both the tabular and deep learning settings, giving evidence that the picture painted by our theoretical analysis truly describes the FDPO setting. See Appendix D for pseudocode of all algorithms; see Appendix F for details on the experimental setup; see Appendix G.3 for additional experimental considerations for deep learning experiments that will be of interest to practitioners. For an open-source implementation, including full details suitable for replication, please refer to the code in the accompanying GitHub repository: github.com/anonymized
Tabular. The first tabular experiment, whose results are shown in Figure 1(a), compares the performance of the algorithms as the policy used to collect the dataset is interpolated from the uniform random policy to an optimal policy using -greedy. The second experiment, whose results are shown in Figure 1(b), compares the performance of the algorithms as we increase the size of the dataset from 1 sample to 200000 samples. In both experiments, we notice a qualitative difference between the trends of the various algorithms, which aligns with the predictions of our theory.
Neural network. The results of these experiments can be seen in Figure 2. Similarly to the tabular experiments, we see that the naı̈ve approach performs well when data is fully exploratory, and poorly when data is collected via an optimal policy; the pure imitation approach performs better when the data collection policy is closer to optimal. The pessimistic approach achieves the best of both worlds: it correctly imitates a near-optimal policy, but also learns to improve upon it somewhat when the data is more exploratory. One notable failure case is in FREEWAY, where the performance of the pessimistic approach barely improves upon the imitation policy, despite the naı̈ve approach performing near-optimally for intermediate values of .
7 DISCUSSION AND CONCLUSION
In this work, we provided a conceptual and mathematical framework for thinking about fixed-dataset policy optimization. Starting from intuitive building blocks of uncertainty and the over-under decomposition, we showed the core issue with naı̈ve approaches, and introduced the pessimism principle as the defining characteristic of solutions. We described two families of pessimistic algorithms, uncertainty-aware and proximal. We see theoretically that both of these approaches have advantages over the naı̈ve approach, and observed these advantages empirically. Comparing these two families of pessimistic algorithms, we see both theoretically and empirically that uncertainty-aware
algorithms are strictly better than proximal algorithms, and that proximal algorithms may not yield the optimal policy, even with infinite data.
Future directions. Our results indicate that research in FDPO should not focus on proximal algorithms. The development of neural uncertainty estimation techniques will enable principled uncertainty-aware deep learning algorithms. As is evidenced by our tabular results, we expect these approaches to yield dramatic performance improvements, rendering algorithms derived from the proximal family (Kumar et al., 2019; Fujimoto et al., 2019; Laroche et al., 2019; Kumar et al., 2020) obsolete.
On ad-hoc solutions. It is undoubtedly disappointing to see that proximal algorithms, which are far easier to implement, are fundamentally limited in this way. It is tempting to propose various ad-hoc solutions to mitigate the flaws of proximal pessimistic algorithms in practice. However, in order to ensure that the resulting algorithm is principled, one must be careful. For example, one might consider tuning α; however, doing the tuning requires evaluating each policy in the environment, which involves gaining information by interacting with the environment, which is not permitted by the problem setting. Or, one might consider e.g. an adaptive pessimism hyperparameter which decays with the size of the dataset; however, in order for such a penalty to be principled, it must be based on an uncertainty function, at which point we may as well just use an uncertainty-aware algorithm.
Stochastic policies. One surprising property of pessimistic algorithms is that the optimal policy is often stochastic. This is because the penalty term included in their fixed-point objective is often minimized by stochastic policies. For the penalty of proximal pessimistic algorithms, it is easy to see that this will be the case for any non-deterministic empirical policy; for UA pessimistic algorithms, it is dependent on the choice of Bellman uncertainty function, but often still holds (see Appendix B.2 for the derivation of a Bellman uncertainty function with this property). This observation lends mathematical rigor to the intuition that agents should ‘hedge their bets’ in the face of epistemic uncertainty. This property also means that the simple approach of selecting the argmax action is no longer adequate for policy improvement. In Appendix D.2.2 we discuss a policy improvement procedure that takes into account the proximal penalty to find the stochastic optimal policy.
Implications for RL. Finally, due to the close connection between the FDPO and RL settings, this work has implications for deep reinforcement learning. Many popular deep RL algorithms utilize a replay buffer to break the correlation between samples in each minibatch (Mnih et al., 2015). However, since these algorithms typically alternate between collecting data and training the network, the replay buffer can also be viewed as a ‘temporarily fixed’ dataset during the training phase. These algorithms are often very sensitive to hyperparameters; in particular, they perform poorly when the number of learning steps per interaction is large (Fedus et al., 2020). This effect can be explained by our analysis: additional steps of learning cause the policy to approach its naïve FDPO fixed-point, which has poor worst-case suboptimality. A pessimistic algorithm with a better fixed-point could therefore allow us to train more per interaction, improving sample efficiency. A potential direction of future work is therefore to incorporate pessimism into deep RL.
A BACKGROUND
We write vectors using bold lower-case letters, a, and matrices using upper-case letters, A. To refer to individual cells of a vector or rows of a matrix, we use function notation, a(x). We write the identity matrix as I . We use the notation Ep[·] to denote the average value of a function under a distribution p, i.e. for any space X , distribution p ∈ Dist(X ), and function a : X → R, we have Ep[a] := Ex∼p[a(x)].
When applied to vectors or matrices, we use $<, >, \le, \ge$ to denote element-wise comparison. Similarly, we use $|\cdot|$ to denote the element-wise absolute value of a vector: $|a|(x) = |a(x)|$. We use $|a|_+$ to denote the element-wise maximum of $a$ and the zero vector. To denote the total variation distance between two probability distributions, we use $\mathrm{TV}(p,q) = \frac{1}{2}\|p-q\|_1$. When $p$ and $q$ are conditional probability distributions, we adopt the convention $\mathrm{TV}_\mathcal{X}(p,q) = \langle \frac{1}{2}\|p(\cdot|x)-q(\cdot|x)\|_1 : x \in \mathcal{X}\rangle$, i.e., the vector of total variation distances conditioned on each $x \in \mathcal{X}$.
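To make the conditional total-variation convention concrete, here is a minimal numpy sketch; the function names are ours, for illustration only:

```python
import numpy as np

def tv(p, q):
    # Total variation distance between two distributions: half the L1 distance.
    return 0.5 * np.abs(p - q).sum()

def tv_statewise(pi1, pi2):
    # pi1, pi2: |S| x |A| arrays of state-conditional action distributions.
    # Returns the vector of per-state total variation distances, TV_S(pi1, pi2).
    return 0.5 * np.abs(pi1 - pi2).sum(axis=1)
```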
Markov Decision Processes. We represent the environment with which we are interacting as a Markov Decision Process (MDP), defined in standard fashion: $\mathcal{M} := \langle S, A, R, P, \gamma, \rho\rangle$. $S$ and $A$ denote the state and action space, which we assume are discrete. We use $Z := S \times A$ as shorthand for the joint state-action space. The reward function $R : Z \to \mathrm{Dist}([0,1])$ maps state-action pairs to distributions over the unit interval, while the transition function $P : Z \to \mathrm{Dist}(S)$ maps state-action pairs to distributions over next states. Finally, $\rho \in \mathrm{Dist}(S)$ is the distribution over initial states. We use $r$ to denote the expected reward function, $r(\langle s,a\rangle) := \mathbb{E}_{r \sim R(\cdot|\langle s,a\rangle)}[r]$, which can also be interpreted as a vector $r \in \mathbb{R}^{|Z|}$. Similarly, note that $P$ can be described as $P : (Z \times S) \to \mathbb{R}$, which can be represented as a stochastic matrix $P \in \mathbb{R}^{|Z| \times |S|}$. In order to emphasize that these reward and transition functions correspond to the true environment, we sometimes equivalently denote them as $r_\mathcal{M}, P_\mathcal{M}$. To denote vectors of a constant whose sizes are the state and state-action space, we use a single dot to mean state and two dots to mean state-action, e.g., $\dot{\mathbf{1}} \in \mathbb{R}^{|S|}$ and $\ddot{\mathbf{1}} \in \mathbb{R}^{|Z|}$.

A policy $\pi : S \to \mathrm{Dist}(A)$ defines a distribution over actions, conditioned on a state. We denote the space of all possible policies as $\Pi$. We define an “activity matrix” for each policy, $A^\pi \in \mathbb{R}^{S \times Z}$, which encodes the state-conditional state-action distribution of $\pi$, by letting $A^\pi(s, \langle \dot{s}, a\rangle) := \pi(a|s)$ if $s = \dot{s}$, and $A^\pi(s, \langle \dot{s}, a\rangle) := 0$ otherwise. Acting in the MDP according to $\pi$ can thus be represented by $A^\pi P \in \mathbb{R}^{|S| \times |S|}$ or $P A^\pi \in \mathbb{R}^{|Z| \times |Z|}$.

We define a value function as any $v : \Pi \to S \to \mathbb{R}$ or $q : \Pi \to Z \to \mathbb{R}$ whose output is bounded by $[0, \frac{1}{1-\gamma}]$. Note that this is a slight generalization of the standard definition (Sutton & Barto, 2018), since it accepts a policy as an input. We use the shorthand $v^\pi := v(\pi)$ and $q^\pi := q(\pi)$ to denote the result of applying a value function to a specific policy, which can also be represented as vectors $v^\pi \in \mathbb{R}^{|S|}$ and $q^\pi \in \mathbb{R}^{|Z|}$. To denote the output of an arbitrary value function on an arbitrary policy, we use unadorned $v$ and $q$. The expected return of an MDP $\mathcal{M}$, denoted $v_\mathcal{M}$ or $q_\mathcal{M}$, is the discounted sum of rewards acquired when interacting with the environment:
$$v_\mathcal{M}(\pi) := \sum_{t=0}^{\infty} (\gamma A^\pi P)^t A^\pi r \qquad\qquad q_\mathcal{M}(\pi) := \sum_{t=0}^{\infty} (\gamma P A^\pi)^t r$$
Note that $v^\pi_\mathcal{M} = A^\pi q^\pi_\mathcal{M}$. An optimal policy of an MDP, which we denote $\pi^*_\mathcal{M}$, is a policy for which the expected return $v_\mathcal{M}$ is maximized under the initial state distribution: $\pi^*_\mathcal{M} := \arg\max_\pi \mathbb{E}_\rho[v^\pi_\mathcal{M}]$. The statewise expected returns of an optimal policy can be written as $v^{\pi^*_\mathcal{M}}_\mathcal{M}$. Of particular interest are value functions whose outputs obey fixed-point relationships, $v^\pi = f(v^\pi)$ for some $f : (S \to \mathbb{R}) \to (S \to \mathbb{R})$. The Bellman consistency equation for a policy $\pi$ is $\mathcal{B}^\pi_\mathcal{M}(x) := A^\pi(r + \gamma P x)$; it uniquely identifies the vector of expected returns for $\pi$, since $v^\pi_\mathcal{M}$ is the only vector for which $v^\pi_\mathcal{M} = \mathcal{B}^\pi_\mathcal{M}(v^\pi_\mathcal{M})$ holds. Finally, for any state $s$, the probability of being in state $s'$ after $t$ time steps when following policy $\pi$ is $[(A^\pi P)^t](s, s')$. Furthermore, $\sum_{t=0}^{\infty} (\gamma A^\pi P)^t = (I - \gamma A^\pi P)^{-1}$; we refer to $(I - \gamma A^\pi P)^{-1}$ as the discounted visitation of $\pi$.
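As a concrete tabular illustration of these definitions, the sketch below builds the activity matrix and computes $v^\pi_\mathcal{M}$ via the geometric-series identity; all names are ours, and shapes follow the conventions above ($A^\pi \in \mathbb{R}^{S \times Z}$, $P \in \mathbb{R}^{|Z| \times |S|}$):

```python
import numpy as np

def activity_matrix(pi, n_states, n_actions):
    # A_pi(s, <s', a>) = pi(a|s) if s == s', else 0.
    A = np.zeros((n_states, n_states * n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            A[s, s * n_actions + a] = pi[s, a]
    return A

def value_of_policy(pi, r, P, gamma):
    # v_pi = (I - gamma A_pi P)^{-1} A_pi r: the discounted visitation
    # applied to the expected one-step rewards.
    n_states = P.shape[1]
    A = activity_matrix(pi, n_states, pi.shape[1])
    visitation = np.linalg.inv(np.eye(n_states) - gamma * A @ P)
    return visitation @ (A @ r)
```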
Datasets. We next introduce basic concepts that are helpful for framing the problem of fixed-dataset policy optimization. We define a dataset of $d$ transitions $D := \{\langle s,a,r,s'\rangle\}^d$, and denote the space of all datasets as $\mathbf{D}$. In this work, we specifically consider datasets sampled from a data distribution $\Phi \in \mathrm{Dist}(Z)$; for example, the distribution of state-actions reached by following some stationary policy. We use $D \sim \Phi^d$ to denote constructing a dataset of $d$ tuples $\langle s,a,r,s'\rangle$ by first sampling each $\langle s,a\rangle \sim \Phi$, and then sampling $r$ and $s'$ i.i.d. from the environment reward function and transition function respectively, i.e., each $r \sim R(\cdot|\langle s,a\rangle)$ and $s' \sim P(\cdot|\langle s,a\rangle)$.5 We sometimes index $D$ using function notation, using $D(s,a)$ to denote the multiset of all $\langle r,s'\rangle$ such that $\langle s,a,r,s'\rangle \in D$. We use $\ddot{n}_D \in \mathbb{R}^{|Z|}$ to denote the vector of counts, that is, $\ddot{n}_D(\langle s,a\rangle) := |D(s,a)|$. We sometimes use state-wise versions of these vectors, which we denote with $\dot{n}_D$.

It is further useful to consider the maximum-likelihood reward and transition functions, computed by averaging all rewards and transitions observed in the dataset for each state-action. To this end, we define the empirical reward vector $r_D(\langle s,a\rangle) := \frac{1}{|D(\langle s,a\rangle)|}\sum_{r,s' \in D(\langle s,a\rangle)} r$ and the empirical transition matrix $P_D(s'|\langle s,a\rangle) := \frac{1}{|D(\langle s,a\rangle)|}\sum_{r,\dot{s}' \in D(\langle s,a\rangle)} \mathbb{I}(\dot{s}' = s')$ at all state-actions for which $\ddot{n}_D(\langle s,a\rangle) > 0$. Where $\ddot{n}_D(\langle s,a\rangle) = 0$, there is no clear way to define the maximum-likelihood estimates of reward and transition, so we do not specify them. All our results hold no matter how these values are chosen, so long as $r_D \in [0, \frac{1}{1-\gamma}]$ and $P_D$ is stochastic. The empirical policy of a dataset $D$ is defined as $\hat\pi_D(a|s) := \frac{|D(\langle s,a\rangle)|}{|D(\langle s,\cdot\rangle)|}$, except where $\dot{n}_D(s) = 0$, where it can similarly be any valid action distribution. The empirical visitation distribution of a dataset $D$ is computed in the same way as the visitation distribution, but with $P_D$ replacing $P$, i.e., $(I - \gamma A^\pi P_D)^{-1}$.

5Note that this is in some sense a simplifying assumption. In practice, datasets will typically be collected using a trajectory from a non-stationary policy, rather than i.i.d. sampling from the stationary distribution of a stationary policy. This greatly complicates the analysis, so we do not consider that setting in this work.
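A minimal sketch of the maximum-likelihood estimates above, assuming the dataset is a list of (s, a, r, s') tuples with integer state and action indices; all names are ours, and the uniform choices at unseen state-actions are one arbitrary-but-valid option, as the text permits:

```python
import numpy as np

def empirical_mdp(dataset, n_states, n_actions):
    Z = n_states * n_actions
    counts = np.zeros(Z)                     # n_D, the state-action counts
    r_sum = np.zeros(Z)
    P_counts = np.zeros((Z, n_states))
    for s, a, r, s_next in dataset:
        z = s * n_actions + a
        counts[z] += 1
        r_sum[z] += r
        P_counts[z, s_next] += 1
    seen = counts > 0
    r_D = np.zeros(Z)
    r_D[seen] = r_sum[seen] / counts[seen]
    # Unseen state-actions: any stochastic row is valid; we pick uniform.
    P_D = np.full((Z, n_states), 1.0 / n_states)
    P_D[seen] = P_counts[seen] / counts[seen][:, None]
    # Empirical policy pi_hat(a|s), uniform at states with no data.
    sa = counts.reshape(n_states, n_actions)
    s_tot = sa.sum(axis=1, keepdims=True)
    pi_hat = np.where(s_tot > 0, sa / np.maximum(s_tot, 1), 1.0 / n_actions)
    return r_D, P_D, counts, pi_hat
```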
Problem setting. The primary focus of this work is on the properties of fixed-dataset policy optimization (FDPO) algorithms. These algorithms take the form of a function O : D → Π, which maps from a dataset to a policy.6 Note that in this work, we consider D ∼ Φd, so the dataset is a random variable, and therefore O(D) is also a random variable. The goal of any FDPO algorithm is to output a policy with minimum suboptimality, i.e. maximum return. Suboptimality is a random variable dependent on D, computed by taking the difference between the expected return of an optimal policy and the learned policy under the initial state distribution,
$$\mathrm{SUBOPT}(\mathcal{O}(D)) = \mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M}] - \mathbb{E}_\rho[v^{\mathcal{O}(D)}_\mathcal{M}].$$
A related concept is the fixed-dataset policy evaluation algorithm, which is any function E : D → Π→ S → R, which uses a dataset to compute a value function. In this work, we focus our analysis on value-based FDPO algorithms, the subset of FDPO algorithms that utilize FDPE algorithms.7 A value-based FDPO algorithm with FDPE subroutine Esub is any algorithm with the following structure:
$$\mathcal{O}^{VB}_{\text{sub}}(D) := \arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{sub}}(D,\pi)].$$
We define a fixed-point family of algorithms, sometimes referred to as just a family, in the following way. Any family called “family” is based on a specific fixed-point identity $f_{\text{family}}$. We use the notation $\mathcal{E}_{\text{family}}$ to denote any FDPE algorithm whose output $v^\pi_D := \mathcal{E}_{\text{family}}(D,\pi)$ obeys $v^\pi_D = f_{\text{family}}(v^\pi_D)$. Finally, $\mathcal{O}^{VB}_{\text{family}}$ refers to any value-based FDPO algorithm whose subroutine is $\mathcal{E}_{\text{family}}$. We call the set of all algorithms that could implement $\mathcal{E}_{\text{family}}$ the family of FDPE algorithms, and the set of all algorithms that could implement $\mathcal{O}^{VB}_{\text{family}}$ the family of FDPO algorithms.
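In the tabular setting, one concrete instance of a naïve value-based FDPO algorithm is ordinary value iteration run on the empirical MDP $\langle S, A, r_D, P_D, \gamma, \rho\rangle$, followed by greedy action selection. The sketch below is illustrative only (names are ours) and is not the only way to implement the family:

```python
import numpy as np

def naive_fdpo(r_D, P_D, gamma, n_states, n_actions, iters=1000):
    # Value iteration on the empirical MDP; returns a deterministic policy.
    r = r_D.reshape(n_states, n_actions)
    P = P_D.reshape(n_states, n_actions, n_states)
    q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        v = q.max(axis=1)
        q = r + gamma * (P @ v)
    pi = np.zeros((n_states, n_actions))
    pi[np.arange(n_states), q.argmax(axis=1)] = 1.0
    return pi
```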
B UNCERTAINTY
Epistemic uncertainty measures an agent’s knowledge about the world, and is therefore a core concept in reinforcement learning, both for exploration and exploitation; it plays an important role in this work. Most past analyses in the literature implicitly compute some form of epistemic uncertainty to derive their bounds. An important analytical choice in this work is to cleanly separate the estimation of epistemic uncertainty from the problem of decision making. Our approach is to first define a notion of uncertainty as a function with certain properties, then assume that such a function exists, and provide the remainder of our technical results under such an assumption. We also describe several approaches to computing uncertainty.
Definition 5 (Uncertainty). A function uD,δ : Z → R is a state-action-wise Bellman uncertainty function if for a dataset D ∼ Φd, it obeys with probability at least 1 − δ for all policies π and all values v:
$$u_{D,\delta} \ge |(r_\mathcal{M} + \gamma P_\mathcal{M} v) - (r_D + \gamma P_D v)|$$
6This formulation hides a dependency on ρ, the start-state distribution of the MDP. In general, ρ can be estimated from the dataset D, but this estimation introduces some error that affects the analysis. In this work, we assume for analytical simplicity that ρ is known a priori. Technically, this means that it must be provided as an input to O. We hide this dependency for notational clarity.
7Intuitively, these algorithms use a policy evaluation subroutine to convert a dataset into a value function, and return an optimal policy according to that value function. Importantly, this definition constrains only the objective of the approach, not its actual algorithmic implementation, i.e., it includes algorithms which never actually invoke the FDPE subroutine. For many online model-free reinforcement learning algorithms, including policy iteration, value iteration, and Q-learning, we can construct closely analogous value-based FDPO algorithms; see Appendix D.1. Furthermore, model-based techniques can be interpreted as using a model to implicitly define a value function, and then optimizing that value function; our results also apply to model-based approaches.
A function $u^\pi_{D,\delta} : \mathcal{S} \to \mathbb{R}$ is a state-wise Bellman uncertainty function if for a dataset $D \sim \Phi^d$, it obeys with probability at least $1-\delta$ for all policies $\pi$ and all values $v$:
$$u^\pi_{D,\delta} \ge |A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)|$$
A function $\mu^\pi_{D,\delta} : \mathcal{S} \to \mathbb{R}$ is a value uncertainty function if for a dataset $D \sim \Phi^d$, it obeys with probability at least $1-\delta$ for all policies $\pi$ and all values $v$:
$$\mu^\pi_{D,\delta} \ge \sum_{t=0}^{\infty} (\gamma A^\pi P_D)^t\,|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)|$$
We refer to the quantity returned by an uncertainty function as uncertainty, e.g., value uncertainty refers to the quantity returned by a value uncertainty function.
Given a state-action-wise Bellman uncertainty function $u_{D,\delta}$, it is easy to verify that $A^\pi u_{D,\delta}$ is a state-wise Bellman uncertainty function. Similarly, given a state-wise Bellman uncertainty function $u^\pi_{D,\delta}$, it is easy to verify that $(I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta}$ is a value uncertainty function. Uncertainty functions which can be constructed in this way are called decomposable.
Definition 6 (Decomposability). A state-wise Bellman uncertainty function $u^\pi_{D,\delta}$ is state-action-wise decomposable if there exists a state-action-wise Bellman uncertainty function $u_{D,\delta}$ such that $u^\pi_{D,\delta} = A^\pi u_{D,\delta}$. A value uncertainty function $\mu$ is state-wise decomposable if there exists a state-wise Bellman uncertainty function $u^\pi_{D,\delta}$ such that $\mu = (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta}$; further, it is state-action-wise decomposable if that $u^\pi_{D,\delta}$ is itself state-action-wise decomposable.
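Both compositions are direct to implement given the matrices from Appendix A; a sketch, with names of our choosing:

```python
import numpy as np

def statewise_bellman_uncertainty(A_pi, u_sa):
    # Lift a state-action-wise Bellman uncertainty u to states: u_pi = A_pi u.
    return A_pi @ u_sa

def value_uncertainty(A_pi, P_D, gamma, u_pi):
    # Accumulate state-wise Bellman uncertainty over the empirical discounted
    # visitation: mu_pi = (I - gamma A_pi P_D)^{-1} u_pi.
    n_states = A_pi.shape[0]
    return np.linalg.inv(np.eye(n_states) - gamma * A_pi @ P_D) @ u_pi
```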
How do these definitions correspond to our intuitions about uncertainty? An application of the environment’s true Bellman update can be viewed as updating the value of each state to reflect information about its future. However, no algorithm in the FDPO setting can apply such an update, because the true environment dynamics are unknown. We may use the dataset to estimate what such an update would look like, but since the limited information in the dataset may not fully specify the properties of the environment, this update will be slightly wrong. It is intuitive to say that the uncertainty at each state corresponds to how well the approximate update matches the truth. Our definition of Bellman uncertainty captures precisely this notion. Further, value uncertainty represents the accumulation of these errors over all future timesteps.
How can we algorithmically implement uncertainty functions? In other words, how can we compute a function with the properties required for Definition 5, using only the information in the dataset? All forms of uncertainty are upper-bounds to a quantity, and a tighter upper-bound means that other results down the line which leverage this quantity will be improved. Therefore, it is worth considering this question carefully.
A trivial approach to computing any uncertainty function is simply to return $\frac{1}{1-\gamma}$ for all $s$ and $\langle s,a\rangle$. But although this is technically a valid uncertainty function, it is not very useful, because it does not concentrate with data and does not distinguish between certain and uncertain states. It is very loose, and so leads to poor guarantees.
In tabular environments, one way to implement a Bellman uncertainty function is to use a concentration inequality. Depending on which concentration inequality is used, many Bellman uncertainty functions are possible. These approaches lead to Bellman uncertainty which is lower at states with more data, typically in proportion to the square root of the count. To illustrate how to do this, we show in Appendix B.1 how a basic application of Hoeffding’s inequality can be used to derive a state-action-wise Bellman uncertainty. In Appendix B.2, we show an alternative application of Hoeffding’s which results in a state-wise Bellman uncertainty, which is a tighter bound on error. In Appendix B.3, we discuss other techniques which may be useful in tightening further.
When the value function is represented by a neural network, it is not currently known how to implement a Bellman uncertainty function. When an empirical Bellman update is applied to a neural
network, the change in value of any given state is impacted by generalization from other states. Therefore, the counts are not meaningful, and concentration inequalities are not applicable. In the neural network literature, many “uncertainty estimation techniques” have been proposed, which capture something analogous to an intuitive notion of uncertainty; however, none are principled enough to be useful in computing Bellman uncertainty.
B.1 STATE-ACTION-WISE BOUND
We seek to construct a $u^\pi_{D,\delta}$ such that $|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le u^\pi_{D,\delta}$ with probability at least $1-\delta$. Firstly, let's consider the simplest possible bound. $v$ is bounded in $[0, \frac{1}{1-\gamma}]$, so both $A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v)$ and $A^\pi(r_D + \gamma P_D v)$ must be as well. Thus, their difference is also bounded:
$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}$$
Next, consider that for any $\langle s,a\rangle$, the expression $r_D(\langle s,a\rangle) + \gamma P_D(\langle s,a\rangle)v$ can be equivalently expressed as a mean of random variables,
$$r_D(\langle s,a\rangle) + \gamma P_D(\langle s,a\rangle)v = \frac{1}{\ddot{n}_D(\langle s,a\rangle)} \sum_{r,s' \in D(\langle s,a\rangle)} r + \gamma v(s'),$$
each with expected value
$$\mathbb{E}_{r,s' \in D(\langle s,a\rangle)}[r + \gamma v(s')] = \mathbb{E}_{\substack{r \sim R(\cdot|\langle s,a\rangle) \\ s' \sim P(\cdot|\langle s,a\rangle)}}[r + \gamma v(s')] = [r_\mathcal{M} + \gamma P_\mathcal{M} v](\langle s,a\rangle).$$
Note also that each of these random variables is bounded in $[0, \frac{1}{1-\gamma}]$. Thus, Hoeffding's inequality tells us that this mean of bounded random variables must be close to its expectation with high probability. By invoking Hoeffding's inequality at each of the $|\mathcal{S} \times \mathcal{A}|$ state-actions, and taking a union bound, we see that with probability at least $1-\delta$,
$$|(r_\mathcal{M} + \gamma P_\mathcal{M} v) - (r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S} \times \mathcal{A}|}{\delta}}\;\ddot{n}_D^{-\frac{1}{2}}$$
We can left-multiply by $A^\pi$ and rearrange to get:
$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \left(\frac{1}{1-\gamma}\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S} \times \mathcal{A}|}{\delta}}\right) A^\pi \ddot{n}_D^{-\frac{1}{2}}$$
Finally, we simply intersect this bound with the $\frac{1}{1-\gamma}$ bound from earlier. Thus, we see that with probability at least $1-\delta$,
$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma} \cdot \min\left(\left(\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S} \times \mathcal{A}|}{\delta}}\right) A^\pi \ddot{n}_D^{-\frac{1}{2}},\; \dot{\mathbf{1}}\right)$$
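A direct tabular implementation of this bound might look as follows; this is a sketch with our own names, and the $\frac{1}{1-\gamma}$ cap is what handles unvisited state-actions, where the count-based term is infinite:

```python
import numpy as np

def hoeffding_bellman_uncertainty(counts, n_states, n_actions, gamma, delta):
    # State-action-wise Bellman uncertainty via Hoeffding + union bound:
    # u = (1/(1-gamma)) * min(sqrt(0.5 * ln(2|SxA|/delta)) / sqrt(n), 1).
    Z = n_states * n_actions
    coef = np.sqrt(0.5 * np.log(2 * Z / delta))
    with np.errstate(divide="ignore"):
        per_sa = coef / np.sqrt(counts)   # inf where counts == 0
    return np.minimum(per_sa, 1.0) / (1.0 - gamma)
```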
B.2 STATE-WISE BOUND
This bound is similar to the previous, but uses Hoeffding’s to bound the value at each state all at once, rather than bounding the value at each state-action.
Choose a collection of possible state-local policies Πlocal ⊆ Dist(A). Each state-local policy is a member of the action simplex.
For any $s \in \mathcal{S}$ and $\pi \in \Pi_{\text{local}}$, the expression $[A^\pi(r_D + \gamma P_D v)](s)$ can be equivalently expressed as a mean of random variables,
$$[A^\pi(r_D + \gamma P_D v)](s) = \frac{1}{\dot{n}_D(s)} \sum_{a,r,s' \in D(s)} \frac{\pi(a|s)}{\hat\pi_D(a|s)}(r + \gamma v(s')),$$
each with expected value
$$\mathbb{E}_{a,r,s' \in D(s)}\left[\frac{\pi(a|s)}{\hat\pi_D(a|s)}(r + \gamma v(s'))\right] = \mathbb{E}_{\substack{a \sim \pi(\cdot|s) \\ r \sim R(\cdot|\langle s,a\rangle) \\ s' \sim P(\cdot|\langle s,a\rangle)}}[r + \gamma v(s')] = [A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v)](s).$$
Note also that each of these random variables is bounded in $[0, \frac{1}{1-\gamma}\frac{\pi(a|s)}{\hat\pi_D(a|s)}]$. Thus, Hoeffding's inequality tells us that this sum of bounded random variables must be close to its expectation with high probability. By invoking Hoeffding's inequality at each of the $|\mathcal{S}|$ states and $|\Pi_{\text{local}}|$ local policies, and taking a union bound, we see that with probability at least $1-\delta$,
$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma}\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S} \times \Pi_{\text{local}}|}{\delta}\,(A^\pi)^{\circ 2}\ddot{n}_D^{-1}}$$
where the term $(A^\pi)^{\circ 2}$ refers to the elementwise square. Finally, we once again intersect with $\frac{1}{1-\gamma}$, yielding that with probability at least $1-\delta$,
$$|A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v) - A^\pi(r_D + \gamma P_D v)| \le \frac{1}{1-\gamma} \cdot \min\left(\sqrt{\frac{1}{2}\ln\frac{2|\mathcal{S} \times \Pi_{\text{local}}|}{\delta}\,(A^\pi)^{\circ 2}\ddot{n}_D^{-1}},\; \dot{\mathbf{1}}\right)$$
Comparing this to the Bellman uncertainty function in Appendix B.1, we see two differences. Firstly, we have replaced a factor of $|\mathcal{A}|$ with $|\Pi_{\text{local}}|$, typically loosening the bound somewhat (depending on the choice of considered local policies). Secondly, $A^\pi$ has now moved inside of the square root; since the square root is concave, Jensen's inequality gives
$$\sqrt{(A^\pi)^{\circ 2}\ddot{n}_D^{-1}} \le \sqrt{(A^\pi)^{\circ 2}}\sqrt{\ddot{n}_D^{-1}} = A^\pi \ddot{n}_D^{-\frac{1}{2}}$$
and so this represents a tightening of the bound.
When $\Pi_{\text{local}}$ is the set of deterministic policies, this bound is equivalent to that of Appendix B.1. This can easily be seen by noting that for a deterministic policy, all elements of $(A^\pi)^{\circ 2}$ are either 1 or 0, and so
$$\sqrt{(A^\pi)^{\circ 2}\ddot{n}_D^{-1}} = A^\pi \ddot{n}_D^{-\frac{1}{2}},$$
and also that the size of the set of deterministic policies is exactly $|\mathcal{A}|$.
An important property of this bound is that it shows that stochastic policies can often be evaluated with lower error than deterministic policies. We prove this by example. Consider an MDP with a single state $s$ and two actions $a_0, a_1$, and a dataset with $\ddot{n}_D(\langle s,a_0\rangle) = \ddot{n}_D(\langle s,a_1\rangle) = 2$. We can parameterize the policy by a single number $\xi \in [0,1]$ by setting $\pi(a_0|s) = \xi$ and $\pi(a_1|s) = 1-\xi$. The size of this bound will be proportional to $\sqrt{\frac{\xi^2}{2} + \frac{(1-\xi)^2}{2}}$, and setting the derivative equal to zero, we see that the minimum is at $\xi = \frac{1}{2}$. (Of course, we would need to increase the size of our local policy set to include this policy in order to be able to actually select it, and doing so will increase the overall bound; this example only shows that for a given policy set, the selected policy may in general be stochastic.)
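Explicitly, the calculus step behind this claim is:
$$\frac{d}{d\xi}\left(\frac{\xi^2}{2} + \frac{(1-\xi)^2}{2}\right) = \xi - (1-\xi) = 2\xi - 1 = 0 \quad\Longrightarrow\quad \xi = \frac{1}{2}.$$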
Finding the optimum for larger local policy sets is non-trivial, so we leave a full treatment of algorithms which leverage this bound for future work.
B.3 OTHER BOUNDS
There are a few other paths by which this bound can be made tighter still. The above bounds take an extra factor of $\frac{1}{1-\gamma}$ because we bound the overall return, rather than simply bounding the reward and transition functions. This was done because a bound on the transition function would add a cost of $O(\sqrt{|\mathcal{S}|})$. However, this can be mitigated by intersecting the above confidence interval with a Good–Turing interval, as proposed in Taleghan et al. (2015). Doing so will cause the bound to concentrate much more quickly in MDPs where the transition function is relatively deterministic. We expect this to be the case for most practical MDPs.
Similarly, empirical Bernstein confidence intervals can be used in place of Hoeffding’s, to increase the rate of concentration for low-variance rewards and transitions (Maurer & Pontil, 2009), leading to improved performance in MDPs where these are common.
Finally, we may be able to apply a concentration inequality in a more advanced fashion to compute a value uncertainty function which is not state-wise decomposable: we bound some useful notion of value error directly, rather than computing the Bellman uncertainty function and taking the visitation-weighted sum. This would result in an overall tighter bound on value uncertainty by hedging over data across multiple timesteps. However, in doing so, we would sacrifice the monotonic improvement property needed for convergence of algorithms like policy iteration. This idea has a parallel in the robust MDP literature. The bounds in Appendix B.1 can be seen as constructing an sa-rectangular robust MDP, whereas Appendix B.2 is similar to constructing an s-rectangular robust MDP (Wiesemann et al., 2013). More recently, approaches have been proposed which go beyond s-rectangularity (Goyal & Grand-Clement, 2018), and such approaches likely have natural parallels in implementing value uncertainty functions.
C PROOFS
C.1 PROOF OF OVER/UNDER DECOMPOSITION
Starting from the definition of suboptimality, we see
$$\begin{aligned}
\mathrm{SUBOPT}(\mathcal{O}^{VB}(D)) &= \mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_\mathcal{M}}] - \mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_D}] \\
&= \mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_\mathcal{M}} + (-v_D^\pi + v_D^\pi) + (-v_D^{\pi^*_D} + v_D^{\pi^*_D}) - v_\mathcal{M}^{\pi^*_D}] && \text{(valid for any $\pi$)} \\
&\le \mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_\mathcal{M}} - v_D^\pi] + \mathbb{E}_\rho[v_D^{\pi^*_D} - v_\mathcal{M}^{\pi^*_D}] && \text{(using $\mathbb{E}_\rho[v_D^\pi - v_D^{\pi^*_D}] \le 0$)}
\end{aligned}$$
Since the above holds for all $\pi$,
$$\begin{aligned}
\mathrm{SUBOPT}(\mathcal{O}^{VB}(D)) &\le \inf_\pi\left(\mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_\mathcal{M}} - v_D^\pi]\right) + \mathbb{E}_\rho[v_D^{\pi^*_D} - v_\mathcal{M}^{\pi^*_D}] \\
&\le \inf_\pi\left(\mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_\mathcal{M}} - v_D^\pi]\right) + \sup_\pi\left(\mathbb{E}_\rho[v_D^\pi - v_\mathcal{M}^\pi]\right) && \text{(using $\pi^*_D \in \Pi$)} \\
&= \inf_\pi\left(\mathbb{E}_\rho[v_\mathcal{M}^{\pi^*_\mathcal{M}} - v_\mathcal{M}^\pi] + \mathbb{E}_\rho[v_\mathcal{M}^\pi - v_D^\pi]\right) + \sup_\pi\left(\mathbb{E}_\rho[v_D^\pi - v_\mathcal{M}^\pi]\right)
\end{aligned}$$
C.2 TIGHTNESS OF OVER/UNDER DECOMPOSITION
We show that the bound given in Lemma 1 is tight via a simple example.
Proof. Consider a bandit-structured MDP with a single state and two actions, A and B, with rewards of 0 and 1 respectively, which both lead to terminal states.
First, consider the left-hand side. If an FDPE subroutine estimates the value of both arms to be 1, then the policy which always selects arm A is an optimal policy of the corresponding FDPO algorithm. In this case, the suboptimality is clearly equal to 1. This is clearly the worst-case suboptimality, since it is the largest possible suboptimality in the environment.
On the right-hand side, note that term (A) is 0 when π is the policy that always picks B, while term (B) is 1 when π is the policy that always picks A. Thus, the left-hand and right-hand sides are equal, and the bound is tight.
C.3 RESIDUAL VISITATION LEMMA
We prove a basic lemma showing that the error of any value function is controlled by its one-step Bellman residual, summed over its visitation distribution. Though this result is well-known, it is not clearly stated elsewhere in the literature, so we prove it here for clarity.
Lemma 2. For any MDP $\xi$ and policy $\pi$, let $v^\pi_\xi$ be defined as the unique value vector satisfying the Bellman fixed-point equation $v^\pi_\xi = A^\pi(r_\xi + \gamma P_\xi v^\pi_\xi)$, and let $v$ be any other value vector. We have
$$v^\pi_\xi - v = (I - \gamma A^\pi P_\xi)^{-1}(A^\pi(r_\xi + \gamma P_\xi v) - v) \tag{1}$$
$$v - v^\pi_\xi = (I - \gamma A^\pi P_\xi)^{-1}(v - A^\pi(r_\xi + \gamma P_\xi v)) \tag{2}$$
$$|v^\pi_\xi - v| \le (I - \gamma A^\pi P_\xi)^{-1}|A^\pi(r_\xi + \gamma P_\xi v) - v| \tag{3}$$
Proof.
$$\begin{aligned}
A^\pi(r_\xi + \gamma P_\xi v) - v &= A^\pi(r_\xi + \gamma P_\xi v) - v^\pi_\xi + v^\pi_\xi - v \\
&= A^\pi(r_\xi + \gamma P_\xi v) - A^\pi(r_\xi + \gamma P_\xi v^\pi_\xi) + v^\pi_\xi - v \\
&= \gamma A^\pi P_\xi(v - v^\pi_\xi) + (v^\pi_\xi - v) \\
&= (v^\pi_\xi - v) - \gamma A^\pi P_\xi(v^\pi_\xi - v) \\
&= (I - \gamma A^\pi P_\xi)(v^\pi_\xi - v)
\end{aligned}$$
Thus, we see $(I - \gamma A^\pi P_\xi)^{-1}(A^\pi(r_\xi + \gamma P_\xi v) - v) = v^\pi_\xi - v$, which gives (1). An identical proof can be completed starting with $v - A^\pi(r_\xi + \gamma P_\xi v)$, giving (2); and (3) follows by taking entrywise absolute values, since $(I - \gamma A^\pi P_\xi)^{-1}$ is entrywise nonnegative.
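As a quick numerical sanity check of identity (1), not part of the original argument, one can verify it on a small random MDP; all names below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_a, gamma = 4, 3, 0.9

pi = rng.dirichlet(np.ones(n_a), size=n_s)        # random policy
r = rng.uniform(size=n_s * n_a)                   # random expected rewards
P = rng.dirichlet(np.ones(n_s), size=n_s * n_a)   # random transition matrix
A = np.zeros((n_s, n_s * n_a))                    # activity matrix A_pi
for s in range(n_s):
    A[s, s * n_a:(s + 1) * n_a] = pi[s]

M = np.linalg.inv(np.eye(n_s) - gamma * A @ P)    # discounted visitation
v_pi = M @ (A @ r)                                # Bellman fixed point
v = rng.uniform(size=n_s)                         # arbitrary value vector
assert np.allclose(v_pi - v, M @ (A @ (r + gamma * P @ v) - v))
```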
C.4 NAÏVE FDPE ERROR BOUND
We show how the error of naïve FDPE algorithms is bounded by the value uncertainty of the state-actions visited by the policy under evaluation. In Section 4, this bound is used to derive a suboptimality guarantee for naïve FDPO.
Lemma 3 (Naïve FDPE error bound). Consider any naïve fixed-dataset policy evaluation algorithm $\mathcal{E}_{\text{naive}}$. For any policy $\pi$ and dataset $D$, denote $v^\pi_D := \mathcal{E}_{\text{naive}}(D,\pi)$. Let $\mu^\pi_{D,\delta}$ be any value uncertainty function. The following component-wise bound holds with probability at least $1-\delta$:
$$|v^\pi_\mathcal{M} - v^\pi_D| \le \mu^\pi_{D,\delta}$$
Proof. Notice that the naïve fixed-point function is equivalent to the Bellman fixed-point equation for a specific MDP: the empirical MDP defined by $\langle S, A, r_D, P_D, \gamma, \rho\rangle$. Thus, invoking Lemma 2, for any value vector $v$ we have
$$|v^\pi_D - v| \le (I - \gamma A^\pi P_D)^{-1}|A^\pi(r_D + \gamma P_D v) - v|.$$
Since $v^\pi_\mathcal{M}$ is a value vector, this immediately implies that
$$|v^\pi_D - v^\pi_\mathcal{M}| \le (I - \gamma A^\pi P_D)^{-1}|A^\pi(r_D + \gamma P_D v^\pi_\mathcal{M}) - v^\pi_\mathcal{M}|.$$
Since $v^\pi_\mathcal{M}$ is the solution to the Bellman consistency fixed-point, $v^\pi_\mathcal{M} = A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v^\pi_\mathcal{M})$, so
$$|v^\pi_D - v^\pi_\mathcal{M}| \le (I - \gamma A^\pi P_D)^{-1}|A^\pi(r_D + \gamma P_D v^\pi_\mathcal{M}) - A^\pi(r_\mathcal{M} + \gamma P_\mathcal{M} v^\pi_\mathcal{M})|.$$
Using the definition of a value uncertainty function $\mu^\pi_{D,\delta}$, we arrive at
$$|v^\pi_D - v^\pi_\mathcal{M}| \le \mu^\pi_{D,\delta},$$
completing the proof.
Thus, reducing value uncertainty improves our guarantees on evaluation error. For any fixed policy, value uncertainty can be reduced by reducing the Bellman uncertainty on states visited by that policy; in the tabular setting, this means observing more interactions from the state-actions that the policy visits frequently. Conversely, for any fixed dataset, we will have a certain Bellman uncertainty at each state, and policies that mostly visit low-Bellman-uncertainty states can be evaluated with lower error.8
8In the function approximation setting, we do not necessarily need to observe an interaction with a particular state-action to reduce our Bellman uncertainty on it. This is because observing other state-actions may allow us to reduce Bellman uncertainty through generalization. Similarly, the most-certain policy for a fixed dataset may not be the policy for which we have the most data, but rather, the policy which our dataset informs us about the most.
Our bound differs from prior work (Jiang, 2019a; Ghavamzadeh et al., 2016) in that it is significantly more fine-grained. We provide a component-wise bound on error, whereas previous results bound the l∞ norm. Furthermore, our bounds are sensitive to the Bellman uncertainty in each individual reward and transition, rather than only relying on the most-uncertain. As a result, our bound does not require all states to have the same number of samples, and is non-vacuous even in the case where some state-actions have no data.
Our bound can also be viewed as an extension of work on approximate dynamic programming. In that setting, the literature contains fine-grained results on the accumulation of local errors (Munos, 2007). However, those results are typically understood as applying to errors induced by approximation via some limited function class. Our bound can be seen as an application of those ideas to the case where errors are induced by limited observations.
C.5 RELATIVE VALUE UNCERTAINTY
The key to the construction of proximal pessimistic algorithms is the relationship between the value uncertainties of any two policies $\pi, \pi'$, for any state-action-wise decomposable value uncertainty function.
Lemma 4 (Relative value uncertainty). For any two policies $\pi, \pi'$, and any state-action-wise decomposable value uncertainty function $\mu$, we have
$$\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} = (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right)$$
Proof. Firstly, note that since the value uncertainty function is state-action-wise decomposable, we can express it in a Bellman-like form for some state-wise Bellman uncertainty function $u^\pi_{D,\delta}$ as $\mu^\pi_{D,\delta} = (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta}$. Further, $u^\pi_{D,\delta}$ is itself state-action-wise decomposable, so it can be written as $u^\pi_{D,\delta} = A^\pi u_{D,\delta}$.
We can use this to derive the following relationship:
$$\mu^\pi_{D,\delta} = (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta} \;\iff\; \mu^\pi_{D,\delta} - \gamma A^\pi P_D \mu^\pi_{D,\delta} = u^\pi_{D,\delta} \;\iff\; \mu^\pi_{D,\delta} = u^\pi_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta}$$
Next, we bound the difference between the value uncertainty of $\pi$ and $\pi'$:
$$\begin{aligned}
\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} &= (u^\pi_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta}) - (u^{\pi'}_{D,\delta} + \gamma A^{\pi'} P_D \mu^{\pi'}_{D,\delta}) \\
&= (u^\pi_{D,\delta} - u^{\pi'}_{D,\delta}) + \gamma A^\pi P_D \mu^\pi_{D,\delta} - \gamma A^{\pi'} P_D \mu^{\pi'}_{D,\delta} \\
&= (A^\pi - A^{\pi'}) u_{D,\delta} + \gamma A^\pi P_D \mu^\pi_{D,\delta} - \gamma(A^{\pi'} - A^\pi + A^\pi) P_D \mu^{\pi'}_{D,\delta} \\
&= \gamma A^\pi P_D\left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) + (A^\pi - A^{\pi'}) u_{D,\delta} + \gamma(A^\pi - A^{\pi'}) P_D \mu^{\pi'}_{D,\delta} \\
&= \gamma A^\pi P_D\left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) + (A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})
\end{aligned}$$
This is a geometric series, so $(I - \gamma A^\pi P_D)\left(\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta}\right) = (A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})$. Left-multiplying by $(I - \gamma A^\pi P_D)^{-1}$, we arrive at the desired result.
C.6 NAÏVE FDPE RELATIVE ERROR BOUND
Lemma 5 (Naïve FDPE relative error bound). Consider any naïve fixed-dataset policy evaluation algorithm $\mathcal{E}_{\text{naive}}$. For any policy $\pi$ and dataset $D$, denote $v^\pi_D := \mathcal{E}_{\text{naive}}(D,\pi)$. Then, for any other policy $\pi'$, the following bound holds with probability at least $1-\delta$:
$$|v^\pi_D - v^\pi_\mathcal{M}| \le \mu^\pi_{D,\delta} \le \mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\mathrm{TV}_\mathcal{S}(\pi,\pi')$$
Proof. The goal of this section is to construct an error bound which we can optimize over $\pi$ without needing to compute any uncertainties. To do this, we must replace this quantity with a looser upper bound.
First, consider a state-action-wise decomposable value uncertainty function $\mu^\pi_{D,\delta}$. We have $\mu^\pi_{D,\delta} = (I - \gamma A^\pi P_D)^{-1} u^\pi_{D,\delta}$ for some $u^\pi_{D,\delta}$, where $u^\pi_{D,\delta} = A^\pi u_{D,\delta}$ for some $u_{D,\delta}$.
Note that a state-action-wise Bellman uncertainty can be trivially bounded as $u_{D,\delta} \le \frac{1}{1-\gamma}\ddot{\mathbf{1}}$. Also, since the rows of $(I - \gamma A^\pi P_D)^{-1}$ sum to $\frac{1}{1-\gamma}$, any state-action-wise decomposable value uncertainty can be trivially bounded as $\mu^\pi_{D,\delta} \le \frac{1}{(1-\gamma)^2}\dot{\mathbf{1}}$.
We now invoke Lemma 4, and then substitute the above bounds into the second term, after ensuring that all coefficients are positive:
$$\begin{aligned}
\mu^\pi_{D,\delta} - \mu^{\pi'}_{D,\delta} &= (I - \gamma A^\pi P_D)^{-1}\left((A^\pi - A^{\pi'})(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right) \\
&\le (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+(u_{D,\delta} + \gamma P_D \mu^{\pi'}_{D,\delta})\right) \\
&\le (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+\left(\frac{1}{1-\gamma}\ddot{\mathbf{1}} + \gamma P_D \frac{1}{(1-\gamma)^2}\dot{\mathbf{1}}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1}\left(|A^\pi - A^{\pi'}|_+\left(\frac{1}{1-\gamma}\ddot{\mathbf{1}} + \frac{\gamma}{(1-\gamma)^2}\ddot{\mathbf{1}}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)|A^\pi - A^{\pi'}|_+\ddot{\mathbf{1}} \\
&= (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\mathrm{TV}_\mathcal{S}(\pi,\pi')
\end{aligned}$$
The third-to-last step follows from the fact that $P_D$ is stochastic. The final step follows from the fact that the positive and negative components of the state-wise difference between policies must be symmetric, so $|A^\pi - A^{\pi'}|_+\ddot{\mathbf{1}}$ is precisely equivalent to the state-wise total variation distance, $\mathrm{TV}_\mathcal{S}(\pi,\pi')$. Thus, we have
$$\mu^\pi_{D,\delta} \le \mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{1}{(1-\gamma)^2}\right)\mathrm{TV}_\mathcal{S}(\pi,\pi').$$
Finally, invoking Lemma 3, we arrive at the desired result.
C.7 SUBOPTIMALITY OF UNCERTAINTY-AWARE PESSIMISTIC FDPO ALGORITHMS
Let $v^\pi_D := \mathcal{E}_{\text{ua}}(D,\pi)$. From the definition of the UA family, we have the fixed-point property $v^\pi_D = A^\pi(r_D + \gamma P_D v^\pi_D) - \alpha u^\pi_{D,\delta}$, and the standard geometric-series rearrangement yields $v^\pi_D = (I - \gamma A^\pi P_D)^{-1}(A^\pi r_D - \alpha u^\pi_{D,\delta})$. From here, we see:
$$\begin{aligned}
v^\pi_D &= (I - \gamma A^\pi P_D)^{-1}(A^\pi r_D - \alpha u^\pi_{D,\delta}) \\
&= (I - \gamma A^\pi P_D)^{-1} A^\pi r_D - (I - \gamma A^\pi P_D)^{-1} \alpha u^\pi_{D,\delta} \\
&= \mathcal{E}_{\text{naive}}(D,\pi) - \alpha \mu^\pi_{D,\delta}
\end{aligned}$$
We now use this to bound the overestimation and underestimation error of $\mathcal{E}_{\text{ua}}(D,\pi)$ by invoking Lemma 3, which holds with probability at least $1-\delta$. First, for underestimation, we see:
$$v^\pi_\mathcal{M} - v^\pi_D = v^\pi_\mathcal{M} - (\mathcal{E}_{\text{naive}}(D,\pi) - \alpha \mu^\pi_{D,\delta}) = (v^\pi_\mathcal{M} - \mathcal{E}_{\text{naive}}(D,\pi)) + \alpha \mu^\pi_{D,\delta} \le \mu^\pi_{D,\delta} + \alpha \mu^\pi_{D,\delta} = (1+\alpha)\mu^\pi_{D,\delta}$$
and thus $v^\pi_\mathcal{M} - v^\pi_D \le (1+\alpha)\mu^\pi_{D,\delta}$. Next, for overestimation, we see:
$$v^\pi_D - v^\pi_\mathcal{M} = (\mathcal{E}_{\text{naive}}(D,\pi) - \alpha \mu^\pi_{D,\delta}) - v^\pi_\mathcal{M} = (\mathcal{E}_{\text{naive}}(D,\pi) - v^\pi_\mathcal{M}) - \alpha \mu^\pi_{D,\delta} \le \mu^\pi_{D,\delta} - \alpha \mu^\pi_{D,\delta} = (1-\alpha)\mu^\pi_{D,\delta}$$
and thus $v^\pi_D - v^\pi_\mathcal{M} \le (1-\alpha)\mu^\pi_{D,\delta}$. Substituting these bounds into Lemma 1 gives the desired result.
C.8 SUBOPTIMALITY OF PROXIMAL PESSIMISTIC FDPO ALGORITHMS
Let $v^\pi_D := \mathcal{E}_{\text{proximal}}(D,\pi)$. From the definition of the proximal family, we have the fixed-point property
$$v^\pi_D = A^\pi(r_D + \gamma P_D v^\pi_D) - \alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)$$
and the standard geometric-series rearrangement yields
$$v^\pi_D = (I - \gamma A^\pi P_D)^{-1}\left(A^\pi r_D - \alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)$$
From here, we see:
$$\begin{aligned}
v^\pi_D &= (I - \gamma A^\pi P_D)^{-1}\left(A^\pi r_D - \alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right) \\
&= (I - \gamma A^\pi P_D)^{-1} A^\pi r_D - (I - \gamma A^\pi P_D)^{-1} \alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right) \\
&= \mathcal{E}_{\text{naive}}(D,\pi) - (I - \gamma A^\pi P_D)^{-1} \alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)
\end{aligned}$$
Next, we define a new family of FDPE algorithms,
$$\mathcal{E}_{\text{proximal-full}}(D,\pi) := \mathcal{E}_{\text{naive}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right).$$
We use Lemma 3, which holds with probability at least $1-\delta$, to bound the overestimation and underestimation. First, the underestimation:
$$\begin{aligned}
v^\pi_\mathcal{M} - \mathcal{E}_{\text{proximal-full}}(D,\pi) &= v^\pi_\mathcal{M} - \left(\mathcal{E}_{\text{naive}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)\right) \\
&= (v^\pi_\mathcal{M} - \mathcal{E}_{\text{naive}}(D,\pi)) + \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right) \\
&\le \mu^\pi_{D,\delta} + \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)
\end{aligned}$$
Next, we analogously bound the overestimation:
$$\begin{aligned}
\mathcal{E}_{\text{proximal-full}}(D,\pi) - v^\pi_\mathcal{M} &= \left(\mathcal{E}_{\text{naive}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)\right) - v^\pi_\mathcal{M} \\
&= (\mathcal{E}_{\text{naive}}(D,\pi) - v^\pi_\mathcal{M}) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right) \\
&\le \mu^\pi_{D,\delta} - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)
\end{aligned}$$
We can now invoke Lemma 1 to bound the suboptimality of any value-based FDPO algorithm which uses $\mathcal{E}_{\text{proximal-full}}$, which we denote with $\mathcal{O}^{VB}_{\text{proximal-full}}$. Crucially, note that since $\alpha\mu^{\pi'}_{D,\delta}$ is not dependent on $\pi$, it can be removed from the infimum and supremum terms, and cancels. Substituting and rearranging, we see that with probability at least $1-\delta$,
$$\begin{aligned}
\mathrm{SUBOPT}(\mathcal{O}^{VB}_{\text{proximal-full}}(D)) \le\; &\inf_\pi\left(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + \mathbb{E}_\rho\left[\mu^\pi_{D,\delta} + \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right]\right) \\
&+ \sup_\pi\left(\mathbb{E}_\rho\left[\mu^\pi_{D,\delta} - \alpha(I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right]\right)
\end{aligned}$$
Finally, we see that FDPO algorithms which use $\mathcal{E}_{\text{proximal-full}}$ as their subroutine will return the same policy as FDPO algorithms which use $\mathcal{E}_{\text{proximal}}$. First, we once again use the property that $\mu^{\pi'}_{D,\delta}$ is not dependent on $\pi$. Second, we note that since the total visitation of every policy sums to $\frac{1}{1-\gamma}$, we know $\mathbb{E}_\rho\left[(I - \gamma A^\pi P_D)^{-1}\left(\frac{\dot{\mathbf{1}}}{1-\gamma}\right)\right] = \frac{1}{(1-\gamma)^2}$ for all $\pi$, and thus this term is also not dependent on $\pi$.
$$\begin{aligned}
\arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal-full}}(D,\pi)] &= \arg\max_\pi \mathbb{E}_\rho\left[\mathcal{E}_{\text{naive}}(D,\pi) - \alpha\left(\mu^{\pi'}_{D,\delta} + (I - \gamma A^\pi P_D)^{-1}\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right)\right] \\
&= \arg\max_\pi \mathbb{E}_\rho\left[\mathcal{E}_{\text{naive}}(D,\pi) - (I - \gamma A^\pi P_D)^{-1}\alpha\left(\frac{\mathrm{TV}_\mathcal{S}(\pi,\hat\pi_D)}{(1-\gamma)^2}\right)\right] \\
&= \arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal}}(D,\pi)]
\end{aligned}$$
Thus, the suboptimality of $\arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal}}(D,\pi)]$ must be equivalent to that of $\arg\max_\pi \mathbb{E}_\rho[\mathcal{E}_{\text{proximal-full}}(D,\pi)]$, leading to the desired result.
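For comparison with the uncertainty-aware case, the proximal pessimistic fixed point admits the same closed form, with the total-variation penalty in place of the uncertainty term; again a sketch, with names of our choosing:

```python
import numpy as np

def proximal_pessimistic_value(A_pi, r_D, P_D, gamma, pi, pi_hat, alpha):
    # v_pi = (I - gamma A_pi P_D)^{-1} (A_pi r_D - alpha TV_S(pi, pi_hat)/(1-gamma)^2)
    n_states = A_pi.shape[0]
    tv = 0.5 * np.abs(pi - pi_hat).sum(axis=1)      # state-wise TV distance
    penalty = alpha * tv / (1.0 - gamma) ** 2
    visitation = np.linalg.inv(np.eye(n_states) - gamma * A_pi @ P_D)
    return visitation @ (A_pi @ r_D - penalty)
```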
C.9 STATE-WISE PROXIMAL PESSIMISTIC ALGORITHMS
In the main body of the work, we focus on proximal pessimistic algorithms which are based on a state-conditional density model, since this approach is much more common in the literature. However, it is also possible to derive proximal pessimistic algorithms which use state-action densities, which have essentially the same properties. In this section we briefly provide the main results.
In this section, we use $d^\pi := (1-\gamma)(I - \gamma A^\pi P_D)^{-1}$ to indicate the state visitation distribution of any policy $\pi$.
Definition 7. A state-action-density proximal pessimistic algorithm with pessimism hyperparameter $\alpha \in [0,1]$ is any algorithm in the family defined by the fixed-point function
$$f_{\text{sad-proximal}}(v^\pi) = A^\pi(r_D + \gamma P_D v^\pi) - \alpha\left(\frac{|d^\pi - d^{\hat\pi_D}|}{(1-\gamma)^2}\right)$$
Theorem 4 (State-action-density proximal pessimistic FDPO suboptimality bound). Consider any state-action-density proximal pessimistic value-based fixed-dataset policy optimization algorithm $\mathcal{O}^{VB}_{\text{sad-proximal}}$. Let $\mu$ be any state-action-wise decomposable value uncertainty function, and $\alpha \in [0,1]$ be a pessimism hyperparameter. For any dataset $D$, the suboptimality of $\mathcal{O}^{VB}_{\text{sad-proximal}}$ is bounded with probability at least $1-\delta$ by
$$\mathrm{SUBOPT}(\mathcal{O}_{\text{sad-proximal}}(D)) \le 2\mathbb{E}_\rho[\mu^{\hat\pi_D}_{D,\delta}] + \inf_\pi\Big(\mathbb{E}_\rho[v^{\pi^*_\mathcal{M}}_\mathcal{M} - v^\pi_\mathcal{M}] + (1+\alpha)\cdot\mathbb{E}_\rho\big[(I - \gamma A^\pi P_D)^{-1}\big(|d^\pi - d^{\hat\pi_D}| +$$
1. What is the main contribution of the paper regarding policy evaluations in deep RL algorithms?
2. What are the strengths and weaknesses of the proposed approach in combating overestimation in policy improvements?
3. How does the decomposition in Lemma 1 provide new insights into the sub-optimality of naive algorithms?
4. Can the results in Sec. 6 be improved by choosing an optimal value of alpha for the proximal algorithm?
5. Are there any practical implications of the paper's findings for real-world applications of RL algorithms?

Review
The message of this paper is that naive policy evaluations common in current (deep) RL algorithms, can lead to a dangerous overestimation of the value function. This overestimation of the value function can then lead to policy improvements with poor theoretical guarantees. To combat overestimation, the authors propose to penalize state-action pairs that are rarely visited. As an easier to implement alternative, and closer to existing algorithms in the literature, the authors also study another penalty term that penalizes deviation from the data generating policy. The authors show on a numerical example that the more principled penalty term that depends on visitation counts is better performing, and that the proximal penalty term only yields minor improvements over imitation learning (i.e. returning the data generating policy).
The main contribution of the paper is to decompose the sub-optimality upper bound into terms that either overestimate or underestimate the total reward that can be collected in the true MDP. The authors argue that the overestimation is especially problematic for (the many) RL algorithms that are subject to such overestimation, as there is a high chance of existence of a policy that performs poorly on the true MDP but has high reward on the empirical MDP (the MDP with empirical estimates of the reward and transitions), resulting in a large sub-optimality.
As far as I am aware, this decomposition is new. But I wonder if beyond formalizing the sub-optimality of naive algorithms, it has other theoretical or practical applications. The notion of pessimism is typical in the analysis of theoretically grounded algorithms (e.g. CPI in Approximately Optimal Approximate Reinforcement Learning, Kakade et al. 2002), where deviation from ‘known’ state-action pairs is typically maximally penalized with the worst possible value (i.e. a sub-optimality of 1 / (1-\gamma)). So I wonder if the decomposition in overestimation/underestimation terms in Lemma 1 allows for new theoretical insights and algorithmic developments or if it is more of a rewriting, and similar results can be obtained by more carefully choosing the empirical MDP D, such that the overestimation term disappears with high probability even in the worst case and only the underestimation term remains. As is, I understand the reasons for exhibiting both underestimation/overestimation terms in order to analyze ‘naive’ algorithms in the sense of Sec. 4, but is there an advantage for this decomposition and for the algorithm in Sec. 5.1 compared to choosing the optimal policy without an penalty term but in a more carefully constructed MDP D’ that doesn’t allow for overestimation? Similarly, is there any benefit in not choosing \alpha = 1 in Sec. 5.1? Is there an optimal choice for \alpha for Sec. 5.2?
As for the practical implications, the results in Sec. 6 are quite depressing since algorithms with a proximal penalty are easier to implement than with the uncertainty penalty. What was the \alpha in the experiments? I wonder if the results can be improved for the proximal algorithm if a better choice of \alpha is used depending on the optimality of the data generating policy or on the size of the dataset.
Overall, the paper is an interesting read and its message is well presented and supported. However, I am wondering if the theoretical contributions can serve another purpose than warning about the poor theoretical guarantees of ‘naive’ algorithms, and hope the authors can correct me if I underappreciated the importance of these derivations. |